I think text interfaces suck, but at the same time I like how Claude Code solves that with questionnaires. I think that's the most elegant solution for getting a lot of valuable context from users quickly.
You can still have a “chat interface”, but for specialized applications you can do better than that.
If I can trigger some actions with the press of a button that runs code, or even some LLM interaction, without having to type, that's so much better.
A plain-text feedback interface is awful. It would be much better if anything I have to repeat or fix on my end stood out, or if any problem the LLM keeps looping over were quickly discoverable. A rough sketch of the button idea follows below.
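To make that concrete, here is a minimal sketch (my own illustration, not from the article or the parent comment) of how one-press actions could dispatch either plain code or a canned LLM prompt; `ask_llm`, the action labels, and the pytest call are all assumptions for the example.

```python
# Hypothetical sketch: quick-action buttons that either run local code or send
# a canned prompt to an LLM, so the user never has to type.
import subprocess

def ask_llm(prompt: str) -> str:
    # Placeholder: wire this to whatever LLM client you actually use.
    raise NotImplementedError("plug in your model client here")

def run_tests() -> str:
    # Plain code path, no model call involved.
    return subprocess.run(["pytest", "-q"], capture_output=True, text=True).stdout

QUICK_ACTIONS = {
    "Run tests": run_tests,
    "Explain last error": lambda: ask_llm("Explain the last error in one paragraph."),
    "Retry with fix": lambda: ask_llm("Apply the fix you proposed and retry."),
}

def on_button_press(label: str) -> str:
    """Called by the UI when a button is pressed; returns text to display."""
    return QUICK_ACTIONS[label]()
```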
The latency argument is terrible. Of course frontier LLMs are slow and costly. But you don't need Claude to drive a natural-language interface, and an LLM with fewer than 5B parameters (or even <1B) is going to be much faster than this.
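As a rough sketch of that point (one possible setup I'm assuming, not anything the commenter specified), a small model served locally, here via Ollama's /api/generate endpoint, can map free-form input onto a fixed set of UI actions with little latency; the model name and action list are illustrative.

```python
# Sketch: route natural-language input through a small local model and map it
# to one of a fixed set of UI actions.
import json
import urllib.request

ACTIONS = ["open_settings", "new_document", "search", "undo"]

def pick_action(user_text: str, model: str = "llama3.2:1b") -> str:
    prompt = (
        "Pick the single best action for the user request.\n"
        f"Actions: {', '.join(ACTIONS)}\n"
        f"Request: {user_text}\n"
        "Answer with the action name only."
    )
    req = urllib.request.Request(
        "http://localhost:11434/api/generate",
        data=json.dumps({"model": model, "prompt": prompt, "stream": False}).encode(),
        headers={"Content-Type": "application/json"},
    )
    with urllib.request.urlopen(req) as resp:
        answer = json.loads(resp.read())["response"].strip()
    # Fall back to a safe default if the model answers off-menu.
    return answer if answer in ACTIONS else "search"
```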
Love this, this is what I have been envisioning as an LLM-first OS! Feels like truly organic computing. Maybe Minority Report figured it out way back then.
The idea of having the elements anticipated and lowering the cognitive load of searching a giant drop-down list scratches a good place in my brain. I instantly recognize it as a much better experience than what we have on the web.
I think something like this is the long-term future of personal computing. Maybe I'm way off, but this is the type of computing I want to be doing: highly customized to my exact flow, highly malleable to improvement and feedback.
Unless I am wildly misreading this, this is actually worse than both GUIs and LLMs combined.
LLMs offer a level of flexibility and non-determinism that allow them to adapt to different situations.
GUIs offer precision and predictability - they are the same every time. That means people can learn them and navigate them quickly. If you've ever seen a bank teller or rental car agent navigate a GUI or TUI, they tab through and type so quickly because they have expert familiarity.
But this - with a non-deterministic user interface generated by AI, every time a user engages with the UI it's different. So they get a more rigid UI but also a non-deterministic set of options every time. That means instead of memorising what is in every drop-down and tabbing through quickly, they need to re-learn the interface every time.
I don't think you have to use this if it's not working in your case. I think the idea is to try to anticipate the next few turns of the conversation, so you can quickly pick the branch you want to go down. If the prediction is accurate, I could see that being effective.
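A hedged sketch of what "anticipate the next few turns" could look like (again my own illustration; `complete` is a placeholder for whatever model call you use): predict a few likely follow-ups, offer them as one-keypress choices, and fall back to typing when the prediction misses.

```python
# Sketch: after each assistant reply, ask the model for likely follow-ups and
# offer them as numbered one-keypress choices, with free-form typing as fallback.

def complete(prompt: str) -> str:
    # Placeholder for an actual LLM call.
    raise NotImplementedError("plug in your model here")

def predicted_next_turns(history: list[str], n: int = 3) -> list[str]:
    prompt = (
        "Conversation so far:\n" + "\n".join(history) +
        f"\n\nList the {n} most likely things the user will ask next, one per line."
    )
    lines = [line.strip("- ").strip() for line in complete(prompt).splitlines()]
    return [line for line in lines if line][:n]

def next_user_turn(history: list[str]) -> str:
    options = predicted_next_turns(history)
    for i, option in enumerate(options, 1):
        print(f"[{i}] {option}")
    raw = input("Pick a number or type your own: ").strip()
    if raw.isdigit() and 1 <= int(raw) <= len(options):
        return options[int(raw) - 1]  # predicted branch, zero typing
    return raw                        # prediction missed; fall back to typing
```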
We no longer have StackOverflow. We no longer have Google, effectively.
I used to be able to copy-paste code with incredible speed - now all of that is gone.
Chatbots are all we have. And they are not that bad at search, with no sponsored results to weed through. For now.
There is no latency, because the inference is done locally, on a server with a big GPU at the customer's site.
And that’s perfectly fine.
Though in that sense the title is a bit of click-bait.
Author should take his own advice.