This feels completely speculative: there's no measure of whether this approach is actually effective.
Personally, I'm skeptical:
- Having the agent look up the JSON schemas and skills to use the CLI still dumps a lot of tokens into its context.
- Designing for AI agents over humans doesn't seem very future proof. Much of the world is still designed for humans, so the developers of agents are incentivized to make agents increasingly tolerate human design.
- This design is novel and may be fairly unfamiliar in the LLM's training data, so I'd imagine the agent would spend more tokens figuring this CLI out compared to a more traditional, human-centered CLI.
Yeah, people seem to forget one of the L's in LLM stands for Language, and human language is likely the largest chunk in training data.
A CLI that is well designed for humans is well designed for agents too. The only difference is that you shouldn't dump pages of content that can pollute the context needlessly. But then again, you probably shouldn't be dumping pages of content on humans either.
No. Nope. Agents do just fine with all sorts of CLIs. Old standards, new custom stuff, whatever.
The CLIs I’ve seen agents struggle with are those that wrap an enormous, unwieldy, poorly designed API under one namespace. All of the Google Workspace APIs, for example.
And for more persistent services, it's worth considering varlink, for your agent's sake, and also if you just need two CLI things to chat.
https://varlink.org/
The systemd universe is moving this way from dbus, and there doesn't seem to be a ton of protest against giving up dbus for JSON over unix sockets. There really aren't that many protocols that are super pleasant to converse with across sockets.
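The wire format is simple enough to sketch: varlink frames each message as a JSON object terminated by a single NUL byte over a unix socket. Everything else below is illustration, not varlink itself: the `org.example.Echo` method and the echoing peer are made up, and a real service would listen on a named unix socket rather than a `socketpair`.

```python
import json
import socket
import threading

def varlink_call(sock, method, parameters=None):
    """Send one varlink-style call (a JSON object terminated by a NUL
    byte) and read back the NUL-terminated JSON reply."""
    call = {"method": method}
    if parameters is not None:
        call["parameters"] = parameters
    sock.sendall(json.dumps(call).encode() + b"\0")
    buf = b""
    while not buf.endswith(b"\0"):
        chunk = sock.recv(4096)
        if not chunk:
            raise ConnectionError("peer closed the socket")
        buf += chunk
    return json.loads(buf[:-1])

def serve_one(sock):
    """Toy peer: answer a single call by echoing its parameters back."""
    buf = b""
    while not buf.endswith(b"\0"):
        buf += sock.recv(4096)
    call = json.loads(buf[:-1])
    reply = {"parameters": call.get("parameters", {})}
    sock.sendall(json.dumps(reply).encode() + b"\0")

# Two "CLI things" chatting over a socket pair.
client, server = socket.socketpair()
threading.Thread(target=serve_one, args=(server,), daemon=True).start()
reply = varlink_call(client, "org.example.Echo", {"text": "hi"})
print(reply)  # {'parameters': {'text': 'hi'}}
```

The appeal for agents is that both sides exchange small, self-describing JSON messages, so there is no output format to guess at.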
Human DX optimizes for discoverability and forgiveness.
Agent DX optimizes for predictability and defense-in-depth.
These are different enough that retrofitting a human-first CLI for agents is a losing bet.
> The real question: what does it actually look like to build for this?
What was the not-so-real question? Or the surreal question?
I know it's becoming tiresome to complain about slop on HN. But folks! Put a bit of care into your writing! It's starting to look as if people had one more agent skill, "write blogpost", with predictable results. We are not a Python interpreter putting up with meh-to-disgusting code; we are actual humans with real lives and a sense of taste in communication.
I love how AI gave the command-line and TUI interfaces a kind of Second Renaissance. It is not just AI that loves CLIs. It is especially blind people like me, who still use a lot of text-mode tools for their implicit accessibility. I gave codex a whirl recently, and hey! No accessibility problems at all. Just works. A few years back, that would have been released as a GUI-only program and would have locked me out completely[1]. A blessing that text oriented interaction is becoming important again!!!
1: Strictly speaking, there are ways to access some GUI programs on Linux with a screen reader. However, frankly, most are not really a joy to use. The speed of interaction I get from a TUI is simply unmatched. Whenever I work with a true GUI, no matter if Windows, Mac or Linux, it feels like I am trying to run away from a monster in a dream. I try to run, but all I manage to do is wobble about...
> A CLI that is well designed for humans is well designed for agents too. The only difference is that you shouldn't dump pages of content that can pollute the context needlessly. But then again, you probably shouldn't be dumping pages of content on humans either.
Why would I need to do that?
Someone else might "want" me to do that, but it's not a "need" I have.
The pattern I used was this:
1) made a docs command that printed out the path of the available docs
$ my-cli docs
- README.md
- DOC1.md
- dir2/DOC2.md
2) added a --path flag to print out a specific doc (tried to keep each doc less than 400 lines).
$ my-cli docs --path dir2/DOC2.md
# Contents of DOC2.md
3) added embeddings so I could do semantic search
$ my-cli search "how do I install x?"
[1] DOC1.md
"You can install x by ..."
[2] dir2/DOC2.md
"after you install..."
You then just need a simple skill to tell the agent about the docs and search command.
I actually love this as a pattern; it works really well. I got it to work with i18n too.
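Steps 1 and 2 of that pattern are small enough to sketch with argparse. The `my-cli` name, the `docs` command, the `--path` flag, and the 400-line cap come from the comment above; the `docs/` directory location and everything else here are assumptions. The semantic-search step would need an embeddings index on top and is omitted.

```python
import argparse
from pathlib import Path

DOCS_DIR = Path("docs")   # assumed location of the bundled docs
MAX_LINES = 400           # soft per-doc cap, as suggested above

def list_docs():
    """Step 1: print the relative path of every available doc."""
    for p in sorted(DOCS_DIR.rglob("*.md")):
        print(f"- {p.relative_to(DOCS_DIR)}")

def show_doc(rel_path):
    """Step 2: print one specific doc, warning if it grew past the cap."""
    text = (DOCS_DIR / rel_path).read_text()
    if len(text.splitlines()) > MAX_LINES:
        print(f"# warning: {rel_path} exceeds {MAX_LINES} lines")
    print(text)

def main(argv=None):
    parser = argparse.ArgumentParser(prog="my-cli")
    sub = parser.add_subparsers(dest="command", required=True)
    docs = sub.add_parser("docs", help="list or print bundled docs")
    docs.add_argument("--path", help="print this doc instead of the list")
    args = parser.parse_args(argv)
    if args.path:
        show_doc(args.path)
    else:
        list_docs()

# Demo: create one sample doc, then run the equivalent of `my-cli docs`.
DOCS_DIR.mkdir(exist_ok=True)
(DOCS_DIR / "README.md").write_text("# README\n")
main(["docs"])  # prints: - README.md
```

The nice property is that the agent pulls in exactly one doc at a time, so the listing stays a few lines and the context only grows when a doc is actually needed.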
> The CLIs I’ve seen agents struggle with are those that wrap an enormous, unwieldy, poorly designed API under one namespace. All of the Google Workspace APIs, for example.
Maybe asking the agent to write and execute code that wraps the CLI is a better solution.
Everything old is new again...
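One minimal shape that agent-written wrapper could take: run the CLI via `subprocess` and hand back a compact structured result instead of raw output. Everything here is illustrative; a Python one-liner stands in for whatever CLI is actually being wrapped.

```python
import json
import subprocess
import sys

def run_cli(args):
    """Run one CLI invocation and return a compact, structured result,
    instead of letting raw output flood the agent's context."""
    proc = subprocess.run(args, capture_output=True, text=True)
    return {
        "ok": proc.returncode == 0,
        "stdout": proc.stdout.strip(),
        "stderr": proc.stderr.strip(),
    }

# Stand-in target: a Python one-liner plays the part of the wrapped CLI.
result = run_cli([sys.executable, "-c", "print('hello')"])
print(json.dumps(result))  # {"ok": true, "stdout": "hello", "stderr": ""}
```

The "old is new again" part: this is exactly the shell-scripting move of wrapping an awkward tool in a friendlier one, just with the agent writing the wrapper.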
> The systemd universe is moving this way from dbus, and there doesn't seem to be a ton of protest against giving up dbus for JSON over unix sockets. There really aren't that many protocols that are super pleasant to converse with across sockets.
I don't think this is true?
You want me to hand type a file name? I’ll flip a letter or skip one!
---
> Human DX optimizes for discoverability and forgiveness. Agent DX optimizes for predictability and defense-in-depth. These are different enough that retrofitting a human-first CLI for agents is a losing bet.
If AI agents are so underdeveloped and useless that they can’t parse out CLI flags, then the answer is not to rewrite the CLI.
You either give the agents an API layer or you don’t use them because they’re not mature enough for the problem space.
Google Workspace CLI - https://news.ycombinator.com/item?id=47255881 - March 2026 (136 comments)