13 comments

  • jampekka 1 hour ago
    The HN title is quite a strong claim, but it's nowhere to be seen in the repo.

    It seems to be fully prompt-based, so the AI can still say anything it pleases.

    How well do these complicated prompt systems usually work? My strategy is to stick mostly to just simple prompts with potentially some deterministic tools and vendor harnesses, based on the rationale that these are what the models are trained and evaluated with. And that LLMs still often get tripped up when their context is spammed with too much stuff.

    • sigmoid10 1 hour ago
      The crazy thing is, you could do this. And it can be done 100% with code, using zero prompting: just limit the output token set to a structured format, then further constrain parts of it to the sources that were retrieved earlier. I know because I already wrote such a system. It could still match sources and answers incorrectly (just like this approach), but there is no need to rely on elaborate prompts and agents to prevent hallucinations or missing outputs (which, btw, still lack any hard guarantees in the end).

      Prompting is a fine strategy as models become smarter, but when you need reliability, you have to exploit the fact that they are still simple autoregressive completion engines. I don't get why everyone ignores this aspect, since I find it extremely useful all the time.
      • jampekka 1 hour ago
        > I don't get why everyone ignores this aspect, since I find it extremely useful all the time.

        My hunch is because structured/constrained decoding and deterministic subsystems are technically somewhat more involved, requiring e.g. raw API interactions and sometimes manual decoding strategies. Prompt systems can be written in plain text and mostly with "common sense". Not to say writing a good prompt(system) is a trivial task, but it's a different skillset.
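
    The constrained-decoding idea sigmoid10 describes can be sketched in a few lines. This is a toy illustration, not anyone's actual system: the vocabulary, source IDs, and helper names below are all made up, and a real implementation would apply the mask inside the model's sampling loop rather than to a hand-written logit list.

```python
import math

def mask_logits(logits, allowed_ids):
    """Set logits of disallowed tokens to -inf so they get zero probability mass."""
    return [x if i in allowed_ids else -math.inf for i, x in enumerate(logits)]

def greedy_pick(logits):
    """Greedy decode: index of the highest logit."""
    return max(range(len(logits)), key=lambda i: logits[i])

# Toy vocabulary: each token id stands for a candidate citation string.
vocab = ["[src-A]", "[src-B]", "[src-C]", "[src-D]", "[made-up]"]

# Only sources actually retrieved earlier in the pipeline are allowed here.
retrieved = {0, 2}  # [src-A], [src-C]

# Raw model scores: unconstrained, the model "prefers" the fabricated citation...
logits = [1.0, 0.5, 0.8, 0.2, 3.0]

# ...but masking makes emitting it impossible, by construction.
constrained = mask_logits(logits, retrieved)
print(vocab[greedy_pick(logits)])       # -> [made-up]
print(vocab[greedy_pick(constrained)])  # -> [src-A]
```

    The point is the hard guarantee: the citation slot physically cannot name a source that wasn't retrieved, no matter what the model "wants" to say. Whether the cited source actually supports the claim is a separate problem, as the parent notes.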

  • nnevatie 1 hour ago
    Considering that Claude sometimes confuses the identities of itself and the user, this might as well cite the user - "you just said X".
  • doginasuit 1 hour ago
    I'm positive there are use-cases for this tool but after several years of working with LLMs, hallucinations have become a non-issue. You start to get a sense of the likely gaps in their knowledge just like you would a person.

    Take questions about application settings, for example: where to find a particular setting in a particular app. The LLM has a sense of how application settings are generally structured, but the answer is almost never spot on. I just prefix these questions with "do a web search" or provide a link to the documentation, and that is usually enough to get a decent response along with citations.

  • pjmalandrino 1 hour ago
    Why are you building your own DAG system instead of just using LangGraph? You could cut complexity and focus on what actually matters: the claims, evidence tiers, conflict detection.

    Also, embedding claims in the Chain of Thought instead of post-processing them might force rigor earlier in the pipeline.

    (Assuming the zero-deps constraint isn't a blocker?)

  • 4ndrewl 1 hour ago
    I tried it with the Car Wash question (it failed). Its claims were mostly fuel-consumption or emissions-related, plus this one:

    "factual (ai) Weather, traffic, and personal urgency are the only significant variables that could tilt the decision toward driving."

    My gut feeling is that if this could be done, it would be a core part of one of the model provider's output.

    • Lionga 1 hour ago
      This is akin to writing "No hallucinations" in your proompt. So strange that even HN thinks it is worth anything.
  • 0x3f 1 hour ago
    Well, I would have tried it but the website kills Firefox.

    Hard to see how you could really make this work though. You might as well just add "fetch and re-read all sources explicitly to make sure they are correct" to a normal prompt.

  • hdemmer 1 hour ago
    Used the demo app:

    Q: Who directed Scarface?

    A:
    - 1983 film (the one most commonly referred to): directed by Brian De Palma.
    - 1932 original version: directed by Michael Curtiz.

    This is wrong. The 1932 movie is by Howard Hawks.

  • Gijs4g 1 hour ago
    The website stutters to a halt.

    Managed to ask if Ali Khamenei is still alive. It answered "Yes, ..."

  • todotask2 1 hour ago
    The interactive app made my mouse movement sluggish on macOS.
  • tomlockwood 1 hour ago
    I love how at the beginning of this boom people were talking about how heuristics applied to AI outputs were short-term gains disguised as real progress. Now it seems like almost every new tool is a series of heuristics applied to AI outputs.
  • est 1 hour ago
    Looks like it just finds sources in Confluence to check against the bullshit Claude Code says?

    I thought it could search for citations online.
