6 comments

  • aaronbrethorst 2 hours ago
    > Here's the part nobody talks about

    This feels like such an obvious LLM tell; it has that sort of breathless TED Talk vibe that was so big in the late oughts.

  • Darkskiez 2 hours ago
    Yay, we've found another way to give LLMs biases.

    I can also see how obscure but useful nuggets of information, the kind you rarely need but that are critical when you do, will be lost.

    If the weighting were shared between users, an attacker could use this feedback loop to promote their product or ideology by executing fake interactions that look successful.

  • kburman 1 hour ago
    This is a recipe for model collapse/poisoning.
  • sbinnee 1 hour ago
    AI slop indeed. But it caught my eye nonetheless, because I have been doing some work around the same concept. Lately I have found that GitHub is flooded with "AI Agent Memory" modules that are in fact all skills-based (i.e., plain text instruction) solutions.
  • stephantul 1 hour ago
    Stop the slop!
  • littlestymaar 1 hour ago
    Why do people submit AI slop like this here?
    • never_inline 1 hour ago
      There's certainly an overlap between the crowd working on AI (since it's the current hype) and the crowd that finds this sort of hyped-up advertising speak impressive.

      To anyone who understands how embeddings work, it's obvious that they are not enough on their own and that you need hybrid search or something similar (a minimal sketch of the difference is at the end of this comment). And yet in 2024 there was a flood of chat UIs that were basically embed + cosine similarity, all with 10k+ stars. It seems to have slowed down now.

      All of this is because the taste for careermaxxing outweighs the taste for rigor in our industry. It's very sad.
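
      To ground the hybrid-search point, here is a minimal sketch of blending a lexical score with embedding cosine similarity. Everything in it is an illustrative assumption, not anything from the submission: the tiny corpus, the hashed bag-of-words stand-in for a real embedding model, the crude keyword-overlap stand-in for BM25, and the 0.5/0.5 weighting.

      ```python
      # Sketch: why "embed + cosine similarity" alone is weak, and what hybrid
      # search adds. Corpus, embedder, scorer, and weights are all toy assumptions.
      import math
      from collections import Counter

      docs = [
          "error code E1042 when flashing firmware on the RT-3200 router",
          "general troubleshooting guide for home wifi routers",
          "how large language models compute token embeddings",
      ]

      def embed(text):
          # Stand-in embedding: a hashed bag-of-words vector.
          # A real system would call an embedding model here.
          vec = [0.0] * 64
          for tok in text.lower().split():
              vec[hash(tok) % 64] += 1.0
          return vec

      def cosine(a, b):
          dot = sum(x * y for x, y in zip(a, b))
          na = math.sqrt(sum(x * x for x in a))
          nb = math.sqrt(sum(y * y for y in b))
          return dot / (na * nb) if na and nb else 0.0

      def lexical_score(query, doc):
          # Crude keyword overlap; a real system would use BM25 or similar.
          q, d = Counter(query.lower().split()), Counter(doc.lower().split())
          overlap = sum(min(q[t], d[t]) for t in q)
          return overlap / max(len(doc.split()), 1)

      def hybrid_search(query, docs, alpha=0.5):
          # Blend dense and lexical scores. Exact identifiers like "E1042" get
          # rescued by the lexical side even when the embedding misses them.
          qv = embed(query)
          scored = []
          for doc in docs:
              dense = cosine(qv, embed(doc))
              lex = lexical_score(query, doc)
              scored.append((alpha * dense + (1 - alpha) * lex, doc))
          return sorted(scored, reverse=True)

      for score, doc in hybrid_search("E1042 flashing firmware", docs):
          print(f"{score:.3f}  {doc}")
      ```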