6 comments

  • spuz 2 hours ago
    Why can't we just call it "play"? That is what we used to call doing things without a purpose.

    I wish people would disclose when they used an LLM to write for them. This comes across as so clearly written by ChatGPT (I don't know if it is) that it seriously devalues any potential insights contained within. At least if the author was honest, I'd be able to judge their writing accordingly.

  • cgio 1 hour ago
    Quoting: “What I’m describing is different. I’ll call it Vibe Discovery: you don’t know what you’re building. The requirements themselves are undefined. You’re not just discovering implementation - you’re discovering what the product should be.

    The distinction matters:”

    What is it with this pattern of phrases that screams LLM to me? Whenever I come upon this pattern I stop reading further.

    • roywiggins 37 minutes ago
      Not only does it scream LLM output, I happen to find it almost always grating. It's fine enough when something is labeled as AI output, but when it's nominally a human-authored document it's maddening.

      Claude tics appear to include the following:

      - It's not just X, it's Y

      - *The problem* / *The Solution*

      - Think of it as a Z that Ws.

      - Not X, not Y. Just Z.

      - Bold the first sentence of each element of a list. If it's writing markdown, it does this constantly

      - Unicode arrows → Claude

      - Every subsection has a summary. Every document also has a summary. It's "what I'm going to tell you; What I'm telling you; What I just told you", in fractal form, adhered to very rigidly. Maybe it overindexed on Five Paragraph Essays

    • lithocarpus 1 hour ago
      One way I'd describe it is that LLMs say lots of things that technically make sense but aren't quite how anyone would normally say them.

      And secondly they like to use more nouns for things, in my experience.

      Of course all this is just what I observe currently and could well become different for better and worse in future versions.

      • trollbridge 1 hour ago
        It’s just like how my friend has a distinct way of speaking. LLMs also have a distinct voice.
        • roywiggins 27 minutes ago
          Right, which is why it's so strange to suddenly see every other readme and blog post that gets shared on this site speaking with the same tone of voice. Dead Internet theory finally came here.
  • GaryBluto 38 minutes ago
    The end result is interesting but I'd prefer the blog entry itself to be human-written.
  • furyofantares 2 hours ago
    You posted the prompt to the game, care to post the prompt to the blog post? I don't care what an LLM thinks about how you built your game. I would like to know what you think, but I'm not going to try to salvage it from an LLM-generated blog post.
  • jackmhny 3 hours ago
    we're approaching peak slop every day
    • phoronixrly 2 hours ago
      Today on the front page there was an obviously vibe coded python script that pulls OSM data and slaps a colour scheme on it. Of course the data was skewed, because apparently LLMs don't do projections...

      I gave up on the first non-ironic 'You are absolutely correct' comment... What is even real...

      • daveguy 2 hours ago
        To be fair, vibe discovery is a lot more viable than vibe coding. Vibe coding implies the LLM output is acceptable. Vibe discovery implies a human in the loop, because LLMs can't "discover". They have no innate preference based on lived experience in the sense that a human or any biological organism does.