14 comments

  • arowthway 1 hour ago
    The financial market stuff is over my head and I don't have a dog in this fight, but I think "Nobody is replacing salesforce with their internally vibe coded software" is just false? Both taken literally [0] [1] and as a denial of the general trend. Just in my company we already replaced our WMS software subscription with our own solution, and I wouldn't be able to write it fast enough and maintain it by myself without Claude Code. I'd say "Not perfectly or with every edge case handled, but well enough that the CIO reviewing a $500k annual renewal started asking the question “what if we just built this ourselves”" is an accurate description.

    [0] https://lovable.dev/blog/how-a-startup-replaced-a-salesforce...

    [1] https://seekingalpha.com/news/4144652-klarna-shuts-down-sale...

    • rwmj 1 hour ago
      I didn't think something could be worse than everyone using Salesforce, but everyone using a different, constantly broken, incompatible SF clone that no one understands may be that.
      • grebc 1 hour ago
        Lol give it 12 months.
    • Ozzie_osman 1 hour ago
      Agreed. If anything, it puts downward pressure on pricing. Even if the CIO still buys Salesforce or whatever other tool, they won't be willing to pay as much.
  • BrenBarn 2 hours ago
    > Commented [1]: what if pee pee was poo poo

    Rarely do I read something that starts off with such promise!

    • pityJuke 1 hour ago
      Don’t encourage his diaper fetish! [0]

      [0]: https://bsky.app/search?q=from%3Aedzitron.com+diaper

      • GaryBluto 1 hour ago
        I thought there would be one or two results, perhaps the result of some poor recurring phrasing. Nope.

        >i'm going to change your diaper and burp you

        >Carlito is a very good boy Go piss in your diaper you big baby

        >He doesn't care. He is a big baby who filled up his diaper with pee pee and poo poo

        >you are a big baby and i am going to change your diaper and burp you <

        >To be clear I call executives of multi trillion dollar companies scumbags and if you can't deal with that I'm not sure what to do. Burp you? Change your diaper?

        >I am going to change your diaper and burp you

        >Yeah man it's real authoritarian to say your second name is doodoo. Go change your diaper you big baby

        >Yeah because you're a big baby with a big full diaper

        >Hello sir this is your uber outside. I have your order from the diaper store

      • danlitt 1 hour ago
        "Search is currently unavailable when logged out"
  • seanhunter 2 minutes ago
    It’s sort of disappointing to me how on both sides it seems hard to have any sort of rational perspective. I find both the Citrini memo (and the subsequent market reaction) and Ed Zitron’s critique of it to be wildly off-base.

    I wish everyone would just calm down a bit.

  • Lariscus 2 hours ago
    I do enjoy a good Ed Zitron sneer. The fact that the original article moved markets says a lot about the critical thinking skills of stock market traders.
    • davorb 1 hour ago
      You should look into how he destroyed the small indie MMO Darkfall and gave the game 2/10 without ever playing it, in a Eurogamer review a few years ago. The developers had receipts and could prove that he hadn't played it.

      It doesn't have any material effect on this article, but it says something about his ethics.

      • Lariscus 1 hour ago
        It's word against word in this situation. The logs prove nothing, as they are easily modifiable and the devs had a good reason to do so.
  • decimalenough 2 hours ago
    • Thanemate 1 hour ago
      I'm about halfway through the original memo, and I hate the fact that kernels of truth lie here and there. For example, I worked as a full-stack developer for about 2 years, and now I've been forced into what the memo calls the "gig economy" just to pay the rent, because companies slowed down hiring junior developers thanks to... Honestly, at this point I don't really care what I have to thank for it.

      All I know is, whenever I read testimonies from people whose companies suddenly decided to force LLM usage for productivity and become "AI first", with colleagues opening PRs that are only machine-reviewed, containing implementations they can't justify beyond "Claude wrote it", I burn out just reading them. And it's only going to get worse before it gets better, but not for the developers.

      Honestly, the one thing that could justify all the investment companies are making in LLM-assisted coding is the full automation of software production. I can only see the current state of things as the "end game" for them if they suddenly jack up pricing to tap directly into corporate budgets rather than the individual developer's budget.

  • simianwords 1 hour ago
    Ed's main thesis is that cost is unsustainable for AI companies but this is clearly wrong.

    The unit cost has gone down by more than 20-30x over the years. Sure, the fixed cost of training is going up, but that's because of the implied returns. Once the returns to training stop materializing, training spend would simply shrink, modulo cutoff-date updates. The companies have the choice to just stop training and focus on inference cost reduction.

    What am I missing here? Unless consumers decide they're no longer willing to pay the same amount as before, with their expectations rising even as prices fall, what else?

  • returnInfinity 2 hours ago
    Offtopic - The success of coding agents must be Ed Zitron's nightmare.

    He has been a perpetual bear

    • jbreckmckye 2 hours ago
      I don't think so.

      His argument is not "this tech doesn't work", but rather "these businesses aren't economically viable"

      And that the smoke and mirrors accounting and perpetual thirst for more billions indicates just how unviable it is

      Whilst he does dunk on LLM capabilities, the framing is the business angle - can Anysphere etc. actually form a moat and make a profit?

      • simianwords 2 hours ago
        >His argument has never been "this tech doesn't work", but rather "these businesses aren't economically viable"

        Why? because of cost?

        • jbreckmckye 1 hour ago
          Cost, debt, difficulty forming a moat, gap between what the product promises and what it can do, and the difficulty actually raising capital required.

          His style is acerbic and (imo) excessive sometimes. But he's also one of a minority of journos actually looking at the numbers and adding them up, which seems to be a rarity.

          • simianwords 1 hour ago
            Cost is going down 20x-30x over the years, so he's wrong about this.
          • hiddencost 1 hour ago
            Disagree. He's cherry picking an extremely limited subset of numbers, based on a weak understanding of the industry and a lack of access to a lot of private data, and taking advantage of vulnerable people.
            • jbreckmckye 1 hour ago
              I'm not sure how anyone can respond to that, without asking you to divulge that private data
            • danlitt 1 hour ago
              >taking advantage of vulnerable people

              What on earth do you mean by this? Who is getting taken advantage of?

        • JanneVee 1 hour ago
          Well, from my point of view: when they talk about gigawatt datacenters, then yes, it is economically nonviable. You just need to know the scale of a gigawatt to realize that we need to start building power plants and fortifying the power grid to ship a gigawatt of power to a single location. Until that build-out, which takes years mind you, it is competing with other consumers of power. Take another huge consumer of power: a large steel mill uses around 100 megawatts. So if that power becomes more expensive because of datacenters, the price of steel will go up. And if the price of steel goes up, it affects a lot of things in the economy.

          We are facing a situation where the short-term effects are memory and storage prices going up and a lack of jet engines. Long term, we won't be able to build actual buildings and ships without financing them with even more debt than today, and everyone in the economy is going to service that debt through prices.

          • simianwords 1 hour ago
            But the costs of inference have been going down 20x to 30x over the years, so how can you tell it is nonviable? Unless you are saying they are not paying market rate for the inference.
            • JanneVee 7 minutes ago
              So, they still booked up all the RAM and SSDs in the world and are still going to use gigawatts of power. The price of energy production is not going to go down 20x or 30x; if the cost of inference goes down, it just means they can cram more inference into the same energy consumption. But they aren't paying the market rate for inference, because everything is subsidized with debt and investors' money to scale as fast as possible. They are flush with money, and that is why they can book up all silicon production.
      • simianwords 1 hour ago
        I don't agree that Ed doesn't comment on the actual tech. Here are some things he has said before; please tell me if these still hold in spirit.

        > You cannot "fix" hallucinations (the times when a model authoritatively tells you something that isn't true, or creates a picture of something that isn't right), because these models are predicting things based off of tags in a dataset, which it might be able to do well but can never do so flawlessly or reliably.

        ChatGPT is fairly reliable.

        >Deep Research has the same problem as every other generative AI product. These models don't know anything, and thus everything they do — even "reading" and "browsing" the web — is limited by their training data and probabilistic models that can say "this is an article about a subject" and posit their relevance, but not truly understand their contents. Deep Research repeatedly citing SEO-bait as a primary source proves that these models, even when grinding their gears as hard as humanely possible, are exceedingly mediocre, deeply untrustworthy, and ultimately useless.

        This is untrue in spirit.

        > You can fight with me on semantics, on claiming valuations are high and how many users ChatGPT has, but look at the products and tell me any of this is really the future.

        Imagine if they’d done something else.

        Imagine if they’d done anything else.

        Imagine if they’d have decided to unite around something other than the idea that they needed to continue growing.

        Imagine, because right now that’s the closest you’re going to fucking get.

        This is what he said in 2024. He really thought ChatGPT was not the future.

        There are so many examples, and it's clear that he's not arguing in good faith and has consistently gotten the spirit wrong.

        • energy123 1 hour ago
          This guy sounds like an uninformed jackass.

          Look at Gemini 3.1 Pro on the AA-Omniscience Index, which measures hallucinations. It's 30, previous best was 11.

          https://artificialanalysis.ai/evaluations/omniscience

          With the amount of talent working on this problem, you would be unwise to bet against it being solved, for any reasonable definition of solved.

    • Lariscus 2 hours ago
      Which success? I still see those things churning out laughably wrong code at every turn.
      • ubercore 2 hours ago
        It's not a point and click code machine, but it's laughably wrong to say they just churn out laughably wrong code.
        • noosphr 1 hour ago
          Half of developers are below average. Half of developers say that the code AI produces is amazing. Would you like a Venn diagram?
          • dash2 1 hour ago
            You’re saying that AI is already good enough to replace 50% of developers. Sounds like you agree it will be very important.
            • rwmj 1 hour ago
              He's saying the good half of developers have to deal with the increased slop output of the bad half. Probably will be overwhelmed by it, in the end.
              • ubercore 5 minutes ago
                It _can_ produce slop if people stop thinking. I've also seen it do just fine, when people know when, where and how to use it. That's the part that frightens me, not the code it makes itself.
      • cleaning 1 hour ago
        You're telling on yourself here
      • delaminator 2 hours ago
        I guess everyone stopped using Claude Code already then, if it doesn't work.
        • coldtea 2 hours ago
          It doesn't work precisely because people are using claude.

          If it worked, there'd be no people using it.

          • Kye 1 hour ago
            Nobody goes there, it's too crowded.
            • coldtea 1 hour ago
              "It's not revolutionary automation if every 'automator' has an operator attached" - doesn't take a Yogi to figure out...
      • simianwords 2 hours ago
        [flagged]
        • danielbln 1 hour ago
          The copium is strong in these threads. There was merit to that a year or two ago, but at this point it's just sticking the head in the sand.
    • simianwords 2 hours ago
      Someone should compile the concrete predictions he made vs. how they turned out.
    • vv_ 2 hours ago
      Have they been successful?
    • luke-stanley 1 hour ago
      This is not off-topic at all!
  • hresvelgr 2 hours ago
    > "What if our AI bullishness continues to be right...and what if that’s actually bearish" - what if pee pee was poo poo

    Despite the vulgarity, it is exceptionally illuminating as to how much some of these slop pieces are a mere pretension of rhetoric. I see this pretty consistently with a lot of the material I come across on the job that's gone through the LLM meat-grinder.

    Also, the comment made me giggle like a little kid.

    • Jordan-117 1 hour ago
      What's pretend-rhetoric about it? They're positing agents will prove to be very capable, but that this would ultimately be a bad thing by automating away too much of the economy. You can argue whether that's plausible or not, but it isn't an incoherent or vapid argument.
      • piker 1 hour ago
        I suggest you read the annotation if that question isn't just rhetorical. I'm not familiar with Ed, but he has a pretty good takedown in here if you can get past his somewhat juvenile writing style.

        It is a problem when your doomsday timeline for obsolescence is already out of date the minute you publish. The memo itself was fantasy doomer porn on day 1.

  • chvid 1 hour ago
    He is funny and entertaining but does he provide constructive investment advice? I am not so sure.
  • notachatbot123 2 hours ago
    Where does the title come from?

    What is this document?

    What is the context?

    • nielsbot 2 hours ago
      Good questions. Here is the author's BlueSky post about it:

      https://bsky.app/profile/edzitron.com/post/3mfkc63h6222l

      > "Here is an annotated version of the Citrini Memo with my own intro. It is analyslop - scare-fiction written to ingratiate AI boosters and analysts/traders with tales of ultra-automation and socialist data center policies. Shameful that the markets reacted at all."

  • shimonabi 2 hours ago
    I've also heard Cory Doctorow recently offer a similarly dismissive view, describing AI as "just statistics".
  • hiddencost 1 hour ago
    I've started to feel like Ed Zitron is actively hurting people I care about.

    I'm lucky to have worked in the field for a long time, and be able to spend a lot of tokens. In the last month it's become clear to me that the tech works. The science is done, and what's left is engineering.

    There are a lot of risks and mitigations and theory to build, but it's all solvable. The tech isn't mature, but neither was the Internet 30 years ago. And we built transatlantic cables and ran new wires to everyone's house.

    People I care about, engineers with 20 years of experience, are having mental health breakdowns, caused by Zitron's work. They insist the tech will never work, and avoid learning about it, becoming progressively more paranoid and isolated. I'm trying to be supportive and help them start to recover, but it's slow going.

    If someone is having a crisis about this, I hope they start talking to a therapist. I don't need them to agree with me, but I do need them to not harm themselves.

    • vv_ 49 minutes ago
      > They insist the tech will never work, and avoid learning about it, becoming progressively more paranoid and isolated.

      They can always learn the technology later, when and if it proves itself to be useful :) I personally don't understand the hype, even after using Claude and other AI tools - but perhaps that will change in the future.

    • v3xro 1 hour ago
      There's nothing to recover from; what are you even talking about? I'm not a token user (and I can't make predictions about the future or whether it will force me to use tokens, but still). That the industry is collectively deluded about what constitutes good software (in all senses of the word: functionality and consequences for society) is clear to see, something I too fear we might never recover from, but I stand quite clearly on the side of people, not of corporations hoping to extract more, more, more.
    • bsshdjnddn 1 hour ago
      [flagged]
  • frozenseven 1 hour ago
    Good to see people are finally turning against this grifter.

    "AI fake, AI poo poo, AI going away!" is the only argument he ever had. Nothing more.

  • GaryBluto 2 hours ago
    Ed Zitron, from what little I have heard of him, seems incredibly irrational. I don't think I've ever seen anybody stick their head deeper in the sand than he does.

    It's one thing to dislike or even detest something, but to constantly claim it is worthless and without use when people are already benefitting from it every day is nothing short of delusional.