Stop using natural language interfaces

(tidepool.leaflet.pub)

64 points | by steveklabnik 5 hours ago

12 comments

  • your_friend 59 minutes ago
    I think text interfaces suck, but at the same time I like how Claude Code solves that with questionnaires. I think that’s the most elegant solution for getting a lot of valuable context from users quickly.
    • ozim 16 minutes ago
      You can still have a “chat interface”, but for specialized applications you can do better than that.

      If I can perform some actions with the press of a button that runs code, or even triggers some LLM interaction, without me having to type, that’s so much better.

      A plain-text feedback interface is awful. It would be much better if anything I have to repeat or fix on my end stood out, and if any problem the LLM is looping over were quickly discoverable.
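
      A minimal sketch of that button idea (the button names and prompt templates here are hypothetical): each UI action maps to a canned prompt template, so one click triggers the LLM interaction without the user typing any boilerplate.

```python
from typing import Callable

# Hypothetical sketch: each UI button maps to a canned prompt template,
# so a single click triggers an LLM action instead of free-form typing.
BUTTON_PROMPTS = {
    "summarize": "Summarize the following report in three bullet points:\n{payload}",
    "fix_errors": "List every error in the following text and suggest a fix:\n{payload}",
}

def on_button_press(button: str, payload: str, llm: Callable[[str], str]) -> str:
    """Build the canned prompt for a button and send it to the model."""
    template = BUTTON_PROMPTS[button]
    return llm(template.format(payload=payload))

# Stub "model" for demonstration; a real app would call its LLM backend here.
echo_llm = lambda prompt: prompt.splitlines()[0]

print(on_button_press("summarize", "Q3 numbers...", echo_llm))
```

      The same dispatch table could also drive plain code paths (no LLM at all) for actions that don't need inference.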

  • renegade-otter 1 hour ago
    My boss used to say: "there is an easy way and there is the cool way".

    We no longer have StackOverflow. We no longer have Google, effectively.

    I used to be able to copy pasta code with incredible speed - now all of that is gone.

    Chatbots are all we have. And they are not that bad at search, with no sponsored results to weed through. For now.

  • nottorp 12 minutes ago
    Let's go further. Why not have a well specified prompt programming language for LLMs then?
  • itmitica 11 minutes ago
    Is this a bad bait or is it a bad post? I can't decide.
  • rurban 2 hours ago
    Of course not. Users love the chatbot. It's fast and easier to use than manually searching for answers or sticking together reports and graphs.

    There is no latency, because the inference is done locally, on a server at the customer's site with a big GPU.

  • littlestymaar 5 minutes ago
    The latency argument is terrible. Of course frontier LLMs are slow and costly. But you don't need Claude to drive a natural language interface, and an LLM with fewer than 5B parameters (or even <1B) is going to be much faster than this.
  • rock_artist 1 hour ago
    The post suggests optimizing the LLM text interface with UI elements that reduce the use of pure/direct prompts.

    And that’s perfectly fine.

    Though in that sense the title is more of a clickbait.

  • kami23 3 hours ago
    Love this, this is what I have been envisioning as an LLM-first OS! Feels like truly organic computing. Maybe Minority Report figured it out way back then.

    The idea of having the elements anticipated, lowering the cognitive load of searching a giant drop-down list, scratches a good place in my brain. I instantly recognize it as such a better experience than what we have on the web.

    I think something like this is the long-term future for personal computing. Maybe I'm way off, but this is the type of computing I want to be doing: highly customized to my exact flow, highly malleable to improvement and feedback.

  • dhruv3006 3 hours ago
    This is something I agree with. It will be interesting to see if more and more people take this philosophy up.
  • SoftTalker 1 hour ago
    > just because we suddenly can doesn't mean we always should

    Author should take his own advice.

  • legostormtroopr 1 hour ago
    Unless I am wildly misreading this, this is actually worse than both GUIs and LLMs combined.

    LLMs offer a level of flexibility and non-determinism that allow them to adapt to different situations.

    GUIs offer precision and predictability - they are the same every time. Which means people can learn them and navigate them quickly. If you've ever seen a bank teller or rental car agent navigate a GUI or TUI, they tab through and type so quickly because they have expert familiarity.

    But this - with a non-deterministic user interface generated by AI - means every time a user engages with the UI it's different. So they get a more rigid UI, but also a non-deterministic set of options every time. Which means instead of memorising what is in every drop-down and tabbing through quickly, they need to re-learn the interface every time.

    • AlexCoventry 1 hour ago
      I don't think you have to use this if it's not working in your case. I think the idea is to try to anticipate the next few turns of the conversation, so you can quickly pick the branch you want to go down. If the prediction is accurate, I could see that being effective.
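
      One way to sketch that anticipation (a hypothetical approach, not the post's actual method): rank the follow-up intents that historically came after the same conversation prefix, and surface the top few as clickable options instead of a free-text box.

```python
from collections import Counter

def predict_options(history: list[str],
                    past_sessions: list[list[str]],
                    k: int = 3) -> list[str]:
    """Return the k most common next steps observed after this prefix."""
    counts = Counter()
    n = len(history)
    for session in past_sessions:
        # Count what users did next whenever they reached the same state.
        if session[:n] == history and len(session) > n:
            counts[session[n]] += 1
    return [step for step, _ in counts.most_common(k)]

# Toy session log; a real system would mine this from usage data.
past = [
    ["open report", "filter by month"],
    ["open report", "filter by month"],
    ["open report", "export csv"],
]
print(predict_options(["open report"], past, k=2))  # → ['filter by month', 'export csv']
```

      If the prediction misses, the free-text prompt is still there as a fallback, so the worst case is no worse than a plain chat box.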
  • gigatexal 1 hour ago
    Yeah … no. It’s a really nice interface. It’s here to stay.