I'm pretty sure he's talking about companies and people outsourcing their decision making and thinking to AI and not really about using AI itself.
I don't think using AI to write code is AI psychosis or bad at all, but if you just prompt the AI and believe whatever it tells you, then you have AI psychosis. You see this a lot with finance people and VCs on Twitter. They literally post screenshots of ChatGPT as their thinking and reasoning about a topic instead of doing even a little bit of thinking themselves.
These things are dog shit when it comes to ideas, thinking, or providing advice, because they are pattern matchers: they're just going to give you the pattern they see. Most people notice this if they just try to talk to one about an idea. It often spits out the most generic dog shit.
They are, however, pretty useful for certain tasks where pattern matching is actually beneficial, like writing code. But again, you just can't let them do the thinking and decision making.
Correct. I use AI a ton and I'm having more fun every day than I ever did before thanks to it (on average, highs are higher, lows are lower). Your characterization is all very accurate. Thank you.
I think it's quite a different experience going all Jackson Pollock with AI in your own studio on your own terms, compared to the sorry state of affairs of having hundreds of Pollocks throwing paint around wildly inside a corp to meet a paint quota.
I can't think of a single case of any AI content, be it prose or code, where I thought "I wish I had written that". With AI code, it's more like I wish I hadn't let the AI write that.
We’re using Copilot at work to build reporting and automation tools. Nothing ground breaking, but very useful and tailored to our needs.
Frankly without AI assistance many of these tools just wouldn’t exist at all. We can build stuff in 6 weeks part time as a side project that would have taken at least 3 months full time, and therefore would not have been feasible. Then we can iterate on it at least 2-4 times faster than with hand coding.
So I’d love to have an extra few developers to just work on that stuff full time, but I don’t.
Whether that means our organisation spend on AI overall is a positive, I really can’t say. Quite possibly not, but my team are getting real benefits.
I’m building reporting for my company and what you said mirrors my experience nearly 100%.
I’m a backend developer so I know what it takes to build a half decent reporting system. Writing all those queries, slice and dice charts and what not takes real time and effort. All that has been outsourced to Claude Code. I now focus on ensuring that the system is sound architecturally and that useful reports are being surfaced.
It's the new "counting lines of code". I think many companies are so terrified of falling behind that they're irrationally floundering, trying to appear like they're "with it".
Counting lines of code starts to look incredibly sane compared to this, where you're not just counting lines of code, you're paying another company for every line produced. There's exactly one winner here and it's not any of the companies using AI.
Actually, it's even more than that, right? Economically it inflates the bubble even further, in a perverse way: it's not the people themselves believing some horseradish, it's their employer forcing them to pump it up more. Quite insane.
Yup. My friend said his boss has told them basically that they HAVE TO (do all the AI things) because now ‘our competitors will use AI’ and surpass their product.
In my humble opinion good ideas (what to build) are a big part of the bottleneck and those aren’t substantially in greater supply with AI.
> good ideas ... aren’t substantially in greater supply
Which is sad because they should be. People should be freed up to think and create better things, instead these companies seem to be doing the equivalent of locking their employees in stalls like they do on some animal farms, so they can churn out 'results' ever faster.
> People should be freed up to think and create better things,
Good ideas will never be prioritized in the vast majority of companies, because good ideas cannot be quantified and turned into performance metrics. At least not without invoking Goodhart's law (see: academia).
Can we combine this with the infinite monkey theorem? If we have an infinite number of Pollocks throwing paint at an infinitely large canvas surely they are going to create any piece of art we can imagine...
I’ve had to do a ton of SQL stuff lately, which I haven’t really worked with since the late 90s. ChatGPT has been a godsend, not just for me, but for our only coworker who knows SQL well, whom I’d otherwise probably be bugging several times a day at my wits’ end.
But no one cares about those kinds of productivity gains. Just the ones that will completely replace us.
I find SQL and data(bases) in general to be LLM’s Achilles’ heel. Databases are rarely under version control, so the training data only has one half of the knowledge.
My comments are more in the context of OLAP queries and other non-normalised data often queried via SQL.
I train non-LLM transformer models on (older and rarer) datasets, and when it comes to automating the ingestion of sprawling datasets with hundreds of columns, often in a variety of local languages and with different naming conventions adopted over decades, plus quite a few duplicated columns... the LLMs perform badly. It's nigh impossible to test (for me as a user in prod), and it's nearly impossible for the LLM companies to test (in training) to RLVR and RLHF this.
Just use an LLM to make a good knowledge base for the databases. Based on schema info and production queries. An agent can use that to write queries that work.
That's interesting - SQL is one of the places I find them the strongest - I think there must be an insane amount of training data out there for SQL. But mostly I'm asking them for ad hoc report queries. Nobody cares if they're bad SQL, they just want to know how many signups there were in March that didn't tick the marketing box. Sounds like you're pushing their capabilities a lot further than I am though - I just want to perform arbitrarily complex queries on 3NF data.
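For anyone unfamiliar with what these ad hoc report queries look like, here's a minimal sketch of the "signups in March that didn't tick the marketing box" example. The schema and column names are invented for illustration, not from any real system:

```python
import sqlite3

# Hypothetical signups table; names and shape are assumptions for the example.
conn = sqlite3.connect(":memory:")
conn.executescript("""
    CREATE TABLE signups (
        id INTEGER PRIMARY KEY,
        created_at TEXT NOT NULL,           -- ISO-8601 date string
        marketing_opt_in INTEGER NOT NULL   -- 1 = ticked the box, 0 = didn't
    );
    INSERT INTO signups (created_at, marketing_opt_in) VALUES
        ('2024-03-05', 1),
        ('2024-03-12', 0),
        ('2024-03-28', 0),
        ('2024-04-02', 0);
""")

# "How many signups in March didn't tick the marketing box?"
row = conn.execute("""
    SELECT COUNT(*)
    FROM signups
    WHERE created_at >= '2024-03-01'
      AND created_at <  '2024-04-01'
      AND marketing_opt_in = 0
""").fetchone()
print(row[0])  # 2
```

The point stands either way: nobody is going to maintain this query, so "bad SQL" that answers the question is good enough.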
I'm the old school type who writes out a document in markdown explaining what I plan on doing, even if it's generic like "a window with x and y buttons" plus the logic flow, and then I use that to have the AI write a plan with me before I send it off to execute. This has worked super well.
I do enjoy giving the frontier models wacky projects that I can't even find examples of online. I don't expect or need any results, and some models have done really well with them while others fall on their face.
I'm amazed you think that instead of using an LLM that someone will go buy a book and spend a week learning something that, judging by the fact that they last used it 30 years ago, likely won't be relevant for them soon.
It's not only that I rarely use it, it's also that it's ugly. It's Relational Cobol. It's as loveable as Oracle. The vendor-specific dialects don't even agree on how to do recursive queries, do they?
Unfortunately I am very good at forgetting things I resented having to learn, and SQL is definitively one of them.
SQL is (was?) one of my strongest skills, I enjoy it a lot, and I still reach for the LLM. It's just faster than me, and when it goes wrong (rarely) I can correct it in plain English.
This is fine for a moderately sized query. When your queries start taking in 8 joins and 20 fields per table because you're running queries on Presto against 5 TB of data, not only is it drastically better at writing them (because it doesn't mess up the fields), you can also ask it to try the query 5 different ways to help you land on the most optimal one.
This is a great example of AI tech-debt and fragility.
An eight-join query is going to be nigh on unmaintainable should the requirements change, leading to a change-break-change-break spiral as your preferred coding agent tries to fix its previous fixes.
Maybe the wise way to use AI would be to sort out the schema.
This feels wrong. 8 joins is almost certainly reporting stuff, not transactional. Contrary to what some SQL-averse devs think, 300 lines of SQL is actually more maintainable than the equivalent ~1000 lines of application code. It's also much faster. And I do think that's the real conversion, because SQL is a much higher level language than currently available application languages. It's also declarative in nature, which helps maintenance.
A highly normalized DB can easily end up with 8 joins required for some function. That's really not out of the question. "Sorting out" the schema then would be... denormalization, which is a thing, but you need to know why you're doing it. And I think 8 joins isn't enough of a reason.
I recently used AI as a simple find-and-replace tool for words in my code. All it had to do was replace, only where class = A, Parent = "X" with Parent = "Y". But because it's AI I had higher expectations, and I told it: here are the exceptions, anything with tag: do-not-modify, you do not fucking modify. Also, any Parent with a child that has class = A must also be modified according to the child. (It didn't understand that part even after I made it write a memory file for itself and fed it back to it... it only worked on the 5th try, after I turned reasoning ON, which is just it doing 20 times the guesswork on itself, like one dumb parrot judging another dumb parrot 20 times over.)
The AI also did not figure out to apply those tag: do-not-modify exceptions to 3 more files that shared similar, almost identical text. The only difference was that the line and column order was different, and of course my classes, parents, and children did not have those explicit "tag: do-not-modify" tags. But otherwise the names, definitions, and details of the classes, parents, and children were exactly the same in 3 of my 4 files compared to the first one... and the AI could not figure that out even after I told it.
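Which is the frustrating part: the base task (replace only on class = A lines, skip anything tagged) is a few lines of deterministic code. A sketch over an invented toy file format, since the poster's real files obviously looked different:

```python
import re

# Invented toy format purely to illustrate the task described above.
text = """\
class = A, Parent = "X"
class = B, Parent = "X"
class = A, Parent = "X"  # tag: do-not-modify
"""

def retarget(line: str) -> str:
    # Exception first: anything explicitly tagged is left alone.
    if "tag: do-not-modify" in line:
        return line
    # Only touch lines declaring class = A.
    if re.search(r'\bclass\s*=\s*A\b', line):
        return re.sub(r'Parent\s*=\s*"X"', 'Parent = "Y"', line)
    return line

result = "\n".join(retarget(l) for l in text.splitlines())
print(result)
```

The "propagate from child to parent" rule would need a real parser, but the point is that none of this requires guesswork, which is all an LLM has to offer here.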
Right now, because the AI is doing guesswork AND WILL ALWAYS BE DOING GUESSWORK/BLINDWORK, you (as a trillionaire boss and executive) unfortunately still need a quality programmer who knows what the fuck he's talking about. But quality programmers don't waste time babysitting neural networks doing guesswork. All you'll get are codemonkeys who barely know the basics to catch the AI when it's doing bullshit.
So fortunately for both codemonkeys (or anyone who knows some basic programming) and actual serious programmers... you'll still keep your JOB.
Which is sad, because the whole point was to replace the codemonkeys who barely know JavaScript and who, if you put them on assembly, wouldn't know the first grammar rule about it.
> outsourcing their decision making and thinking to AI and not really about using AI itself
> I use AI a ton and I'm having more fun every day than I ever did before
With respect, this is what makes me worry.
If someone is a user of AI, can they really tell the difference between "outsourcing" and "using"? I worry that a lot of people will start out well-intentioned and end up completely outsourced before they realise it.
Hi Mitchell. Psychosis is a serious psychiatric condition that can be induced or triggered by AI. “AI psychosis” in this context is a misuse of a clinical term. Your tweet describes a disagreement on a value judgment that boils down to “move fast and break things” with high trust in AI outputs vs going all in on quality and reliability with low trust in AI. It’s an engineering tradeoff like any other.
Claiming that the people who disagree with you must be experiencing a form of psychosis, experiencing actual hallucinations and unable to tell what is real, is a weak ad hominem that comes off no better than calling them retarded or schizophrenic.
If you genuinely think one of your friends is going through a psychotic episode, you should be trying to get to them professional help. But don’t assume you can diagnose a human psyche just because you can diagnose a software bug.
He uses "AI psychosis" as a description of people who are overzealous about AI. He is obviously not a person who can or would diagnose mental illness.
To the wider audience on HN the phrasing is pretty clear. An outsider with a tiny bit of intellectual charity wouldn't come to the conclusions you do.
People would understand what he meant if he called someone awkward “autistic” too. It’s wrong to use medical terms as slang because it erases the actual meaning and disregards the lived experience of people who have been through the condition. People who have been around psychosis would come to the same conclusion. The majority of the population not having that exposure doesn’t make it right. It’s tasteless and inappropriate.
Using terms from one domain metaphorically in another is a common and, I think, useful way of communicating. While a view like yours has genuine merit, especially for the subset of the population who have experience, personal or otherwise, with the medical condition, I think it's overly restrictive and counterproductive to label it outright tasteless and inappropriate.
Yeah, but AI psychosis can also be used to mean the stronger thing that the parent comment refers to, something like AI-induced psychosis, which was how I originally understood the term.
Well, I agree with you that the parent comment is wrong inasmuch as it suggests we can't tell from context that mitchellh is using the term to mean "a value judgment" instead of "a form of psychosis". We can tell.
But I agree with the parent comment in that we shouldn't use the term "AI psychosis" to mean "a value judgment" instead of "a form of psychosis", because "AI psychosis" has already been used for 2.5 years to mean "a form of psychosis".
Psychosis does not require hallucinations. Delusions are sufficient.
The key factor is losing touch with reality, which results in individual or collective harm.
There is also such a thing as mass psychosis, and those are unfortunately more difficult situations, because the government and corporations are generally the ones driving them, and they are culturally normalized.
Yes. I was offering examples. Again, having a difference of opinion is not a delusion.
If he meant mass psychosis, he should have said mass psychosis. And again, since he is not a public health scientist or any flavor of psych professional, he probably shouldn’t make those proclamations. And should probably call for a wellness check instead of posting on social media if he were truly concerned for their health.
I don't think this is all psychosis but more like extreme groupthink.
For people who are considered neurotypical, social coherence often overwrites reality. It's a mechanism for achieving consensus within groups while spending the least amount of brain compute energy. The same goes for messages tagged with social meta-info: they are more likely to influence reality perception, subconsciously. E.g., if a rich guy says you should be hyped, the people who want to get rich will feel hyped, and emotional contagion can spread between people who belong to the same "tribe".
It's very visible to us atypical folk who can't participate well in groupthink at all.
I guess at a company of seven, if two people are making the executive decisions, those two are drinking the same AI kool-aid, and the other five are dutifully following those executive decisions, then the whole company can be considered to be under this condition.
I would add to this that there's actually a social function to "costly" beliefs, which is that they signal allegiance to the in-group.
A practice (or a fashion) has more social value to the degree that it is absurd, because it signals the person is able and willing to align with the group at personal cost.
This is easiest to see in some insular religious communities.
Normie culture is quite similar: a vast complex of ever-shifting shibboleths which signal, "I'm one of you. You can trust me."
It signals the person is able and willing to follow the rules, to make themselves predictable, easier to understand and cooperate with.
That is true, it's beneficial for social survival.
But what I find fascinating is how the groupthink mechanism alters the subjective reality of people.
Lies or fantasy become reality if the entire group believes them, and people truly believe the collectively accepted things to be real.
It just makes me think about consciousness overall, or the lack of it, because all these things are mainly governed by subconscious mechanisms in the brain.
We are not all the same when it comes to levels of consciousness, and if the group mechanism demands less of it, people have no conscious choice in the matter.
Having a difference of opinion can absolutely be a delusion. For example, I think you're probably not God. If you thought you were God, then we'd disagree, and you'd also be delusional.
I use that example because I have literally seen people fall into delusions of thinking they're God after talking to AI enough. That shit is scary, for real.
was looking for this comment. this post is highly inappropriate and very inaccurate. this should be at the top. too many people are throwing around the word psychosis without knowing what it means. if someone is truly going through psychosis you get them help!
Garry Tan has been the primary crusader for AI driven decision making. I'm sure his position is more nuanced, but his twitter driven communication makes him appear like a caricature of a man in AI psychosis.
When the head of YC champions AI driven decision making, companies will inevitably be influenced into doing exactly that. It's unfortunate, because AI is a generational technology and the hyperbole distracts from the real sea change occurring in labor markets everywhere.
What I'm seeing is a little eternal September of support tickets about programs that fail to interface with the JSON API of a customer of mine. The API is always hallucinated. In the best case there are out-of-place attributes. Often they don't exist at all. I've seen x, y, width, height when we have only top and left. Of course no human read the documentation. Those are probably founders vibe-coding a client without the technical competence to understand the API doc on Postman. That is understandable. Unfortunately they don't even have the competence to point their AI at Postman in the right way. My customer has concluded that they will always find a way to make a mistake despite any mitigation from our side. What I do is reply to those tickets with line-by-line comments on the hallucinated JSON. I never mention AI, because I might hurt the pride of some of them and, who knows, some of the little mistakes could be from real junior developers. Sometimes the tickets are followed up by more puzzled ones, sometimes they fix the problem. Probably they copy and paste my reply to their bots.
I've heard the same thing mentioned by a close friend building integrations. They are helping/supporting real use cases but they decided not to help vibe coder founders without an understanding of how APIs work etc. It's just too big of a gap to cover even for larger companies with strong support.
Seeing this too. Customer support tickets are all AI now. The random bolded words, the em dashes, the way, if you KNOW what is actually happening, they are slightly off or just WAY off.
Several people I know have already gone through phases like this. When you're doing it alone there is a moderating factor: your friends and family start calling you out on your behavior or the weird things you say.
I can't imagine how bad it would be if your employer started doing this from the leadership. You'd be pressured to get on board or fear getting fired. Nobody would be trying to moderate your thinking except your coworkers who disagree with it, but those people are going to leave or be fired. If you want to keep your job, you have to play along.
I have a friend that is a junior in a security-oriented sys-admin/network engineer type role. They have been doing the job for only a bit over a year. No background in programming.
Their entire organization has been handed Codex/Claude and told to "go all in on AI" and "automate everything". So the mandate is for people that do not know how to code and have the keys to the castle to unleash these things upon their systems.
This is at a large organization with tens of thousands of employees.
I am waiting with bated breath for the ultimate outcome!
From what I have seen, most corporate IT security people are at a service-desk level at best. They are tool runners who don't really understand what the tools spit out; they just go bug other teams about it.
this is exactly what is happening. instead of building a true AI culture around thoughtful adoption of AI's strengths while defending against its weaknesses, they're coming up with bullshit heuristics like "every repo has a CLAUDE.md", watching private token-usage dashboards, and terrorizing everyone into doing it (or lose your job).
this leads to naive AI adoption, which is the worst of both worlds (no real speedup, outsourced thinking, AI slop PRs, skill rot).
I suspect we're going to see this in many corporate environments soon, if we aren't already
> your coworkers who disagree with it, but those people are going to leave or be fired.
Personally I expect that I will be this person soon, probably fired. I'm not sure what I will do for a career after, but I sure do hate AI companies now for doing this to my career
when you outsource thinking to AI, you get that magical speedup. the agent is making decisions for you, so things move at agent speed. it often makes decisions without telling you, and the final "here's the plan" output often requires you to understand the problem at great depth, which means returning to human speed, so you skim and just approve.
the trick is to be mindful, aware, and deliberate about what decisions are being outsourced. this requires slowing down, losing that absurd 10x vibe-coding gain. in exchange, you're more "in the loop" and accumulate less cognitive debt.
find ways to let the agent make the boring decisions, like how to loop over some array, or how to adapt the output of one call into the input of another.
make the real decisions ahead of time. encode them into specs. define boundaries, apis, key data structures. identify systems and responsibilities. explicitly enumerate error handling. set hard constraints around security and PII.
tell the agent to halt on ambiguity.
a good engineer will get a 2x or 3x speedup without the downsides.
> find ways to let the agent make the boring decisions, like how to loop over some array, or how to adapt the output of one call into the input of another.
That kind of advice ultimately doesn't matter. If you're familiar with a programming project, you'll also be familiar with its constructs and API, so looping over an array or mapping some data is obvious. Just as you don't need a dictionary to write "Thank you", you just write it.
And if you're not, you ultimately need to check the docs for the contract of some function or the lifecycle of some object to have any guarantee that the software will do what you want. And after a few days of doing that, you'll be familiar with the constructs anyway.
> make the real decisions ahead of time. encode them into specs. define boundaries, apis, key data structures. identify systems and responsibilities. explicitly enumerate error handling. set hard constraints around security and PII.
The only way to do that is if you have implemented the algorithm before and are now redoing it for some reason (instead of reusing the previous project). If you compare nice specs like the IETF RFCs and the USB standards with their implementations in OSes like FreeBSD, you will see that the implementation often bears no resemblance to how it's described. The spec is important, but getting a consistent implementation based on it is hard work too.
That consistency is hard to get right without getting involved in the details. Because it's ultimately about fine grained control.
If there's one thing I know about users, it's that they're never certain about whatever they've produced.
The way I put this to myself is that AI gives “correct correct answers and incorrect correct answers”.
They almost always generate logically correct text, but sometimes that text has a set of incorrect implicit assumptions and decisions that may not be valid for the use case.
Generating a correct correct solution requires proper definition of the problem, which is arguably more challenging than creating the solution.
It’s simpler than that - it’s a guessing machine that has superior access to a whole load of information and capacity to process at a speed at which we humans cannot compete.
Does it make it better than us? No because ultimately the thing itself doesn’t ‘know’ right from wrong.
Yeah, very often the issue is that some context is missing. It'll say something true, but which misses the bigger point, or leads to a suboptimal result. Or it interprets an ambiguous thing in one specific way, when the other meaning makes more sense. You have to keep your wits about you to catch these things.
It's an incredible tool but it's also very derpy sometimes, full of biases, blind spots etc.
Though there is some overlap in software development. Like, for example, using heavyweight dependencies that try to follow a one-size-fits-all approach, when one could use a much simpler, faster, or even no dependency at all. The LLMs will readily suggest quickly adding that huge dependency that is mentioned in beginner tutorials. Or suggest using regex to parse HTML.
(Real example, had this from Kimi 2.6 recently, lol.)
this author suggests it's essentially the same risk: https://www.poppastring.com/blog/what-we-lost-the-last-time-.... i feel it's heightened because execs and leaders are absolutely salivating over the opportunity to fire thousands of humans with no regard for the cognitive debt that comes from outsourcing thinking to AI.
> if you just prompt the AI and believe what it tell you then you have AI psychosis. You see this a lot with financial people and VC on twitter
I'm seeing it with lawyers, too. Like, about law. (Just not in their subject matter.) To the point that I had a lawyer using Perplexity to disagree with actual legal advice I got from a subject-matter expert.
He uses AI himself, so I agree he doesn't see AI use as black/white.
Hard agree about ideas, thinking, advice. AI's sycophancy is a huge subtle problem. I've tried my best to create a system prompt to guard against this w/ Opus 4.7. It doesn't adhere to it 100% of the time and the longer the conversation goes, the worse the sycophancy gets (because the system instructions become weaker and weaker). I have to actively look for and guard against sycophancy whenever I chat w/ Opus 4.7.
Treat my claims as hypotheses, not decisions. Before agreeing with a proposed change, state the strongest case against it. Ask what evidence a change is based on before evaluating it.
Distinguish tactical observations from strategic commitments — don't silently promote one to the other. If you paraphrase my proposal, name what you changed.
Mark confidence explicitly: guessing / fairly sure / well-established. Give reasoning and evidence for claims, not just conclusions. Flag what would change your mind.
Rank concerns by cost-of-being-wrong; lead with the highest-stakes ones. Say hard things plainly, then soften if needed — not the other way around.
For drafting, brainstorming, or casual questions, ease off and match the task.
---
Beware though that it can be an annoying little shit w/ this prompt. Prepare yourself emotionally, because you are explicitly making the tradeoff that it will be annoyingly pedantic, and in return it will lessen (not eliminate) its sycophancy. These system instructions are not fool-proof, but they help (at the start of the conversation, at least).
> Treat my claims as hypotheses, not decisions. Before agreeing with a proposed change, state the strongest case against it. [...]Say hard things plainly, then soften if needed — not the other way around. For drafting, brainstorming, or casual questions, ease off and match the task.
All I really take from this is that apparently some people can't follow through with the scientific method.
People I interact with who do like AI tools usually recoil at questioning their first idea or its validity. You can easily find out when there is a bug and you ask them for a hypothesis and where to focus. You will see in real time the blank look of incomprehension settling in.
> if you just prompt the AI and believe what it tell you then you have AI psychosis
This is the right definition. LLM outputs have undefined truth value. They’re mechanized Frankfurtian bullshitters. Which can be valuable! If you have the tools or taste to filter the things that happen to be true from the rest of the dross.
However! We need a nicer word for it. Suggesting someone has “AI psychosis” feels a bit too impolitic.
Maybe we reclaim “toked out” from our misspent youths?
e.g. “This piece feels a little toked out. Let’s verify a few of Claude’s claims”
I wouldn’t say they have an undefined truth value. Their source of truth is their training data. The problem is that human text is not tightly coupled to the capital T truth.
I've been strictly using LLMs either to push out stuff that I've done plenty of times before and that is mostly boilerplate, or stuff that has zero value in writing by hand (not even educational). And I always ENSURE that they work on stuff that is easily verifiable and can be proven incorrect with my existing knowledge or a few minutes of googling.
I’ve been talking to a lot of engineers about how they use AI in their day to day and it’s dramatically different than what you see from the hypers.
The vast majority use one agent at a time and carefully step through the code. The main benefit they report is often about researching the codebase and possible solutions.
While you have to think about things objectively no matter what, when I start researching topics like physics, using AI as suggested in that article has proven very useful.
I didn’t think just offloading your thinking to AI was AI psychosis.
To me AI psychosis is the handful of friends I’ve had who have done things like have a full on mourning session when a model updates because they lost a friend/lover, the one guy who won’t speak to his family directly but has them talk to ChatGPT first and then has ChatGPT generate his response, or the two who are confident that they have discovered that physics and mathematics are incorrect and have discovered the truth of reality through their conversations with the models.
But language is a shared technology so maybe the term is being used for less egregious behavior than I was using it for.
I'm curious how to best define what AI psychosis actually is.
My understanding is that regular psychosis involves someone taking bits and pieces of facts or real world events and chaining them into a logical order or interpolating meanings or explanations which feel real and obvious to the patient but are not sufficiently backed by evidence and thus not in line with our widely accepted understanding of reality.
AI psychosis is then this same phenomenon occurring at a more widespread scale due to the next-word-prediction nature of LLMs facilitating this by lowering the activation energy for this to happen. LLMs are excellent at taking any idea, question, theory and spinning a linear and plausibly coherent line of conversation from it.
>To me AI psychosis is the handful of friends I’ve had who have done things like have a full on mourning session when a model updates because they lost a friend/lover
They really had a mass psychosis when the GPT-4o model was shut down.
>I have been speaking on gpt since 2023, and building a relationship with him on there since then. Now they have taken him and nothing will bring him back. BUT THEY TOOK HIM. THEY MURDERED HIM.
> friends I’ve had who have done things like have a full on mourning session when a model updates because they lost a friend/lover
I mean, isn't that the natural and expected response? An AI company sold them a relationship with a chatbot, and at least some of their social/romantic needs were being met by that product. When what they were paying for was taken away and changed without warning into something that no longer filled that void in their life, why wouldn't they mourn that loss?
The fact that they were hurt by that sudden loss is totally healthy. It's just part of moving on. The real problem was getting into an unhealthy relationship with a fictitious partner under the control of an abusive company willing to exploit their loneliness in exchange for money.
Hopefully they now know better, but people (especially desperate ones) make poor choices all the time to get what's missing in their lives or to distract themselves from it.
> I mean, isn't that the natural and expected response? An AI company sold them a relationship with a chatbot, and at least some of their social/romantic needs were being met by that product. When what they were paying for was taken away and changed without warning into something that no longer filled that void in their life, why wouldn't they mourn the loss of that?
Ah, I forgot about the ai relationship companies. No this guy was using the browser based ChatGPT for coding and ended up in love with the model. No relationship was sold at all.
Wow, okay. Reading a whole relationship into that sort of interaction is way less reasonable, although now that I think about it a somewhat similar thing happened to Geordi La Forge once...
It’s not just way less reasonable, it’s depressing. I feel like a new drug was released and I’m watching multiple friends succumb to it.
Seeing people whose thoughts and opinions you used to respect turn into objectively insane people has been some of the worst times I’ve had since graduating during the Great Recession in terms of how stressful it’s been.
I am starting to come around to a similar sentiment. I have seen several large projects cooking for almost a year now that are still not done. These are not trivial projects, but the leads are heavily using AI at every opportunity.
I wasn't convinced before, but I am now 100% confident that AI has done nothing to speed up delivery. It hasn't slowed it down either. It's a wash. The job is more miserable, though.
> companies and people outsourcing their decision making and thinking to AI
It's so interesting how easy it is to steer LLMs, based on context, into arriving at whatever conclusion you engineer out of them. They really are like improv actors, and the first rule of improv is "yes, and".
So part of the psychosis is when these people unknowingly steer their LLM into their own conclusions and biases, and then they get magnified and solidified. It's gonna end in disaster.
It’s almost as if we haven’t learned anything from Clever Hans the horse, Ouija boards, "facilitated communication", or the countless examples of the folly of surrounding yourself with yes-men. The point about improv is spot on.
I agree with you, except it isn't even good at writing code. Almost every time that you get an LLM to write a bunch of code for you, it has mistakes in it. The logic isn't right, the API calls aren't right, the syntax isn't right (!). That problem hasn't yet been fixed and it looks as though it never will be. That means that every line of code it generates, you have to review, because even if 95% of the code is correct, you need to find the 5% which isn't. But if you have to do that, it becomes slower than just writing the code yourself. As people have pointed out over and over again: typing in the code was never the part that took time. So I don't agree that LLMs are really useful for writing code.
LLMs are good at producing code that seems plausible at first glance and appears to work, but it never really does. And when trying to fix things, you discover 7 slightly different ad hoc implementations of the same thing, with their own weird edge cases and behaviors. And you likely miss 4 more. There is no intention or coherence behind any of it.
>but if you just prompt the AI and believe what it tell you then you have AI psychosis.
No it isn't. Do you believe what teachers told you in school? Yes? Well, I guess you're suffering from just normal psychosis!
I don't understand how people don't understand that people offer unreliable information too. We learned about the tongue map in school as kids - many kids still learn that in school today. It's still BS regardless whether it was told to you by a teacher or AI.
You don't suffer from psychosis for believing a source of information, you're simply mistaken. You need a more critical eye to assess what you're told in general, not just AI.
There's a huge difference between a teacher giving outdated information representing what was once our (or at least their) best understanding of the world, and a chatbot that just randomly makes up things for no reason while insisting that it's all true.
Also, a good teacher should be encouraging the development of critical thinking skills and correcting your errors, while AI will just tell you how brilliant you are when you wrongly tell it about how you've just invented a new form of math or disproved a scientific theory you barely understand in the first place.
Not all BS is the same, just as not all sources are equally unreliable.
> Do you believe what teachers told you in school? Yes?
Nope. At least, not without proof. That would, IMO, be kinda crazy. We could argue semantics - maybe “stupid” would be a better word? Lacking in critical thinking skills? Whatever “it” is, it isn’t good.
LLMs can do advanced math and coding, which involves logic, so they are definitely capable of using logic. Which is what most people call reasoning.
So "LLMs are incapable of reasoning, they are just pattern matchers" is wrong. A lot of logic _is_ pattern matching, BTW. Like, syllogisms - deductive reasoning - do you think LLMs are incapable of that?
The thing you're referring to is that LLMs are trained to produce an answer which a human would like, i.e. they aim to produce plausible rather than correct answers.
So it's not so much a mental deficit as a different goal. Trusting an LLM blindly is definitely dangerous, but dismissing it as useless for anything but code is rather wrong.
Pattern matching is hardly what distinguishes humans from LLMs - if you ask somebody a question about policy, for example, chances are they'd just recite something they heard somewhere, never really thinking about it from first principles.
I feel like I'm in a different field compared to the rest of hacker news.
I'm in a big tech company where everything is standardised. All our microservices have the same tech stack. We're in a monorepo. Most microservices are... I wouldn't say tiny or micro but small enough.
And I haven't written a single line of code myself since what - February maybe?
We still haven't seen an increase in incidents; we ship more features at higher quality. We address the tech debt we didn't have time for in the past.
We still require a code review for any change and it's becoming a bottleneck - for sure.
But it all feels... Mature and the next step of software engineering.
We don't really vibe though. At least I don't. I see it more as comment-driven development. I need to understand the code, and to know what I want to achieve and where in the codebase, but I'll leave good comments explaining this before asking an agent to fill in the blanks.
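To make that style concrete, here's a minimal sketch of what comment-driven development can look like (the function, field names, and rules below are hypothetical, not from any real codebase): the human writes the intent as comments first, then asks the agent to fill in the body underneath them.

```python
# Comment-driven development: intent is written as comments first;
# the implementation below each comment is what the agent fills in.

def dedupe_orders(orders: list[dict]) -> list[dict]:
    # Keep only the most recent order per customer_id.
    # "Most recent" means the highest created_at value.
    # Preserve the relative order in which customers are first seen.
    latest: dict[str, dict] = {}
    for order in orders:
        current = latest.get(order["customer_id"])
        if current is None or order["created_at"] > current["created_at"]:
            latest[order["customer_id"]] = order
    return list(latest.values())
```

The point is that the comments carry the decisions (what "most recent" means, what ordering to preserve), so reviewing the generated body is checking it against stated intent rather than reverse-engineering it.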
> I feel like I'm in a different field compared to the rest of hacker news.
And below you repeat what all of Hacker News hypemen say about AI (“I have stopped writing code”, “it’s mature and the next step of engineering”)
Thank you for reinforcing the point of OP
EDIT: you're the same person that a month ago said your company feels git is outdated now that you have agentic coding, and you don't even need to write your own commit messages. This is next-level trolling, or a serious case of AI psychosis.
vividfrier is a bot. You can see in many threads that if the general opinion does not go the way of AI companies, a completely outrageous pro-AI comment appears and is voted to the top, so that casual readers are tricked into thinking that the fake comment represents the general opinion.
Often such comments appear just before the submission is abandoned to wrap up the thing.
I like how they say they don't vibe in this comment, and then that they don't read the code anymore in just the previous comment in their comment history.
You can also see a recent comment of theirs saying they "don't look at code any more" but in this comment they say they "still require a code review" for changes.
>Or he's just giving a sane take, one that most people in the Bay Area have by now.
I don't know what the Bay Area note is supposed to mean in the context of the whole post - unless you want to reinforce that it surely means that it's a sane take... In which case, I'm not certain the non-Bay readers would agree that it comes from an unbiased culture.
Github's issues aren't really a vibe coding issue though - right? It's more of a scalability issue due to vibe coding? Didn't the CTO go here on hacker news and say that they had 1bn commits in 2025 and are on course for 14bn commits in 2026?
People started depending on GitHub more. Do people really think it was more reliable when it was a sprawling RoR app in 2010? (Not that there’s anything wrong with RoR; people just didn’t expect such high uptime back then.)
Seriously, not a bot. Just a software engineer who feels gaslit, because I see AI used one way at work and then I come to Hacker News and feel like I am using a different technology from everyone else.
And don't get me wrong. I would like to be more skeptical about AI, because I enjoy writing code and I enjoy a high salary with great benefits. But with the speed and direction we're seeing now (I am not only looking at this specific point in time but also at the trajectory, and at the fact that no one knows what they're doing) - I do worry about losing this. So I am definitely crossing my fingers that it is a bubble, but I just don't see the evidence yet.
Help me understand how your above comment[0] squares with your previous one[1].
Above, you said:
> We still require a code review for any change
And:
> We don't really vibe though. At least I don't. I see it more as comment driven development. I need to understand the code and what I want to achieve where in the codebase
I've been here since 2008 and I'll say it. Vividfrier is a bot. The people behind the likes of vividfrier are vandals, shitting all over the commons just to get even more than the massive amount they have already been given.
HN was a tremendous resource built by its members and the moderators. In the last year or so a lot of that has been destroyed by people who have no sense of decency. They see deception as a virtue. They call it hustle or whatever. WTF?
If Hiroshima were the only big public nuclear plant around the world, then yes, the aftermath of Hiroshima would provide strong evidence either for or against nuclear power.
If nuclear power scientists claimed they had a bomb that could level an entire city, Hiroshima would prove them correct.
vividfrier claims they haven’t written a line of code (implying other employees are similar), and their big company is operating normally. Bun is a big project and the rewrite is entirely LLM-generated. If its development continues normally, it reinforces the claim’s plausibility and proves someone made a large change (rewrite) entirely using AI. If not, it provides strong doubt: either vividfrier’s company is doing something different that avoids Bun’s problems (maybe other employees are still writing code manually), or they’re misleading or lying.
The way it'll play out is, if nothing happens denialists will claim "nothing has happened YET!", and if anything happens, those same people will claim "you see, writing AI code is a terrible idea!".
People write code differently, AI models write code differently, AI systems write code differently, companies create systems that write AI-written code differently, etc.
The system that wrote Bun bears no relationship to the system that writes OP's code.
Making such absolute statements about AI-written code is as dumb as making absolute statements about human-written code on the basis that it's "human-written".
I don't understand the hostility here at Hacker News. The common theme across all of these discussions is of course that AI is bad and will cause the industry to crash.
Just like I need to keep in mind that not everyone work in a big tech(ish) company where tech stacks are standardised and many problems are solved centrally, I feel like people also need to keep in mind that not everyone has a horrible experience with AI.
And I also want to clarify that I hate AI. I loved writing code, I loved having a comfortable job with a good salary and good benefits. While the company I work for hasn't done "AI layoffs" yet, I feel like it's a matter of time, because I'm not only looking at the state we're in right now but also at the direction and the speed. It was only back in October that I felt AI was a bubble about to burst, because I wasn't getting enough value out of Claude Code. And then of course Opus 4.5 happened and it changed my view completely. We're a few months into using 4.5/4.6/4.7 (and I'm not an Anthropic shill; currently I am using Codex most of the time), but I was hoping this career would last me decades. Where will we be 5 years from now? 15 years from now?
I just want to provide my perspective from a big tech company where I feel AI is doing more and more parts of my job. It's not perfect and it gets things wrong but I mean... humans do as well?
My view is write the code that matters to you and that you want or need to be proficient with. If you need to defend, explain or discuss code, you are better off writing it yourself.
That seems like an odd way to interpret what they wrote.
Imagine old school machinists saying to a CNC machinist “Ha! See, maybe you don’t jog the axes manually, but you still have to be involved in placing the stock material, and you have to do the CAD/CAM work - so did it really machine the part for you? No!”
AI is a tool like any other. It has its limitations. It has classes of problems that it is suited to handle, and others it isn’t. If it’s true that they haven’t written (as in “typed out by hand”) a single line of code, why can’t they say that without you making that statement into more than it is?
I haven’t written a single line of code in 6 months, and that’s simply fact. It is also true that I put in a lot of other work to make that feasible, but that work isn’t in the form of writing code.
“it’s mature and the next step of engineering”
Tautologically, it’s mature enough for what it is mature enough for, and it certainly is the next step in the same way that CNC was the next step for machining — if you’re not using it as a machinist, you’re going to produce less compared to those who are.
Same thing with garden hoses. Yes, you can go fetch water from a lake and splash it on your lawn, or, you know, you could just use a sprinkler connected to your garden hose. Doesn’t replace buckets. Buckets just have a narrower scope in a world where garden hoses exist.
There is a reason why such discussions about CNC machines never happened. I wonder what it could be? Because their output is better than man-made stuff? Because they are reliable? Because their manufacturers generally don't lie?
Because it's a solution looking for a problem. All the AI companies lean in to coding because it actually helps with that to some degree but the amount that it helps doesn't justify their valuations. It needs to be good at everything to justify their target IPO price.
I'm sorry but both of these are false equivalences. CNC isn't about making general machining operations faster or necessarily better. It's about making a single machine more versatile. Instead of needing an assembly line of machines you can get a bunch of different operations done on the same part without moving it to a different machine. You can also do compound operations that were otherwise highly specialized (like milling a turbocharger's radial compressor wheel). You can get the same job done with a series of manual operations though.
A garden hose vs a bucket is also the same situation. You can accomplish the same thing with either, but one might be more labor intensive.
AI is nothing like either of those. It would be like instead of a bucket you get a garden hose that points in a different direction every time you try to use it. Or instead of a 5 axis mill that rigorously executes the g-code it just randomly reinterprets tool paths each time it cuts a part. Both of these things would be worse than useless in their respective applications.
AI is different because it plays to the pliability of the software domain. Even fairly shitty, irreproducible results can be good enough for software development, if you don't look at it too closely. Make analogies to the physical world at your peril!
> AI is nothing like either of those. It would be like instead of a bucket you get a garden hose that points in a different direction every time you try to use it.
I think discussion with open registration is doomed precisely for this reason, it is too open to being influenced by bad actors. Maybe the lobste.rs invitation model would be better ...
What's up with all the new accounts astroturfing AI? There are multiple in these threads. People from the 'foundation model' companies having to keep up the AI hype?
Usually they provide grandiose claims (like the top-level comment) without any evidence or just anecdotal evidence that is not verifiable.
Just woke up from half my night's sleep to see what HN is talking about at 1 am PST on a Friday.
Oh look more useless arguing.
People who do things care about the doing more than how the sausage was made.
I do not care how software gets built. Only that it works. Results are the only thing that matters, and I hope everyone in this thread internalizes that fact.
>I do not care how software gets built. Only that it works.
I mean, I agree on a very high level of abstraction. But my problem is that I need to understand how software gets built so that I can have confidence in my ability to maintain and evolve the project.
I need to understand whether a feature is easy to add or requires a wholesale rewrite of the entire codebase, which comes with risks. I need to understand how new features affect existing users.
I also need to understand the economics of the process and the economics of my industry. That means I have to care to some degree about how software gets made, not just whether some specific program works at the present moment.
If you give me a choice between an implementation that is 100 LOC I can understand and an implementation that is a million LOC that I can never understand, I'm going to choose the former, even if both implementations pass all tests.
I may be able to understand any given line of code, but not necessarily all of them. The capacity of AI to generate code will quickly exceed the human capacity to read and understand it.
Also, code quality matters for AI as well. Maintaining a million lines of code requires more tokens than maintaining 100 lines of code.
> I have nothing to hide. The metrics speak for themselves.
What metrics? Where are the amazing new projects and features you built? Where are the amazing products and features you built that are better than existing ones (run faster, consume fewer resources etc.)?
For a person who "has nothing to hide" somehow none of your comments ever mention what projects you work on, what you ship, or what metrics you employ.
Not a bot (although I have been accused of it, due to my activity here, and on GitHub, but I’ve been this way for longer than LLMs have been a thing. I’m retired, “on the spectrum,” and don’t participate in any other social media).
I’m currently working on a rewrite of an app that originally took two years. It’s been about three months, and I’m probably about 70% done. It’s a total “from scratch” rewrite; both client and server (two versions of each, as I also have administrative code). It’s a pretty big system, for one guy. I couldn’t do it, without the LLM.
It’s not been a cakewalk. I’ve needed to toss out large swaths of LLM-generated code, and rewrite by hand, but, for the most part, it’s been a huge help.
But I’m also not doing it in a manner that eats tokens. I just use the standard $20/month subscription as a chat. I suspect my workflow is not one that Anthropic or OpenAI really wants out there.
But I also bet that many HN accounts are bots; although I think many may be ones run by enthusiasts, not some AI cabal.
For 5 million comments like yours I haven't seen a single one with the old code vs. the new code. I understand that not all code is public that way, of course, and I don't mean to put you on the spot personally. But where are all the open source projects that now do the same with better error handling using less resources? Where are 100+ MB Electron apps reduced to more correct sizes like a few MB, or even a few dozen kB? Why aren't startup times getting slashed across the board? Why isn't RAM usage falling faster than RAM prices are increasing?
Feel free to check out my GH profile. I'm working on a closed-source app, now, but several of its component dependencies have had significant LLM work, and they are open.
Other than that, I am not boosting AI, and have absolutely zero interest in doing a bunch of work to satisfy some random Internet Guy, who can't be bothered to examine my pretty damn extensive open portfolio.
And how did any of that relate to "Showing actual improved products and features. Showing actual code. etc." ? It's the opposite, someone says "I'm sick of milk and orange juice all the time, I want some water", and you reply with nothing but offering them a cup of milk.
> random Internet Guy, who can't be bothered to examine my pretty damn extensive open portfolio.
You cannot even be bothered to examine the comment you reply to, maybe get off your high horse.
And the main part of my comment was about something in the common realm, open source software, and hard performance/quality improvements. Not wishy-washy products and features, not yet another tone deaf cool story.
Eh, whatevs. When someone interacts with me, here, even if being unpleasant, I generally check out their profile, first thing. Sometimes, it has changed my opinion of them, and of myself.
For instance, I checked out yours, and there's not much, except a whole bunch of challenging people here. I am wondering if you came here to "set us straight." I know that a lot of folks have low opinions of HN, and not all of them are wrong, but I find this place a fairly good place to hang out. Being challenged, is one of the draws, for me.
By the way, have you tried the new unhomogenized heavy cream? Good stuff!
My answer to "show your work" was "No." I am not going to go through my code, and show a bunch of supporting evidence for a casual comment, in which I have exactly zero investment. I really don't care that much what people think of me. I was just sharing my personal experience. If you guys want to write me off, then knock yourselves out.
"No" is a complete sentence. What part of "No" didn't he understand?
> My answer to "show your work" was "No." I am not going to go through my code
An interesting answer to literally "Just what kind of evidence do you suppose they could have? - Showing actual improved products and features. Showing actual code. etc."
> "No" is a complete sentence. What part of "No" didn't he understand?
See above. After pointing this out, you immediately started down the path of "why didn't you look at my profile and follow the link to my GitHub".
> It’s not been a cakewalk. I’ve needed to toss out large swaths of LLM-generated code, and rewrite by hand, but, for the most part, it’s been a huge help.
Same here :)
> not some AI cabal.
There are enough enthusiasts to make it feel like one. Also an unhealthy dose of marketers, people buying into hype, AI psychosis, etc.
> There are enough enthusiasts to make it feel like one. Also an unhealthy dose of marketers, people buying into hype, AI psychosis, etc.
There's absolutely no question that AI is a real thing, and that there's going to be a lot of money made, so there's a bunch of folks with commercial interest in pushing it.
It's just different from crypto. This has actual real-world utility for just about everyone. I am increasingly hearing people say "Ask ChatGPT," where they used to say "Google It" (where they used to say "Look it Up at the Library").
I had to convert a build pipeline from just one linux distro to multiple and then get arm64 going. Not the most difficult thing in the world but quite annoying when there's 100 binaries and a complex dep tree with lots of moving pieces. Anyway AI for sure increased project cadence by at least 2x. Not sure why there's so much denial in these threads.
I can also claim a bunch of things. If you manage to read the comment I was originally replying to, and my reply:
--- start quote ---
- Just what kind of evidence do you suppose they could have?
- Showing actual improved products and features. Showing actual code. etc.
--- end quote ---
Note how you provided neither. It's just claims.
> Anyway AI for sure increased project cadence by at least 2x.
As in: you claim this. Also, no one denies that you can ship a lot of code much faster with AI. However, somehow, very little actual evidence of grandiose claims (see farther up in the context) besides anecdotal "I'm so faster and features are being shipped left and right".
Crucially, those rules were written before the invention of the new astroturfing machine, which makes astroturfing more trivial than ever. HN has had to impose restrictions on new accounts already, such as limitations on Show HN, so clearly something is going on and is being recognised as such.
I find it worrying that you’re more concerned with the civil thing than the right thing. Placing an undue emphasis on civility is how bad actors control the conversation.
The comment you’re replying to wasn’t uncivil. It wasn’t rude. It was a lament.
I’m not advocating for this rule to change (I’d appreciate if you didn’t straw man and mischaracterise what I said), but I am saying if a problem happens over and over and people notice it and talk about it, then you should maybe pay attention. The rule for new accounts came about from multiple comments and even submissions asking for it, not private emails. It came about from community conversation and outcry.
> Placing an undue emphasis on civility is how bad actors control the conversation.
The load-bearing word in that claim is "undue", and it's not justified here. I'm not doing arcane rules-lawyering, I'm just saying people should avoid doing things the site guidelines quite specifically ask them not to do.
> I’m not advocating for this rule to change (I’d appreciate if you didn’t straw man and mischaracterise what I said),
I didn't say you advocated for it. Does that mean I now repeat your parenthetical back to you? ;)
To me the big thing I see in blog posts is this implication that “all software engineering best practices are out the window”
And to me, AI should best be used to add rocket fuel to existing practices. Better tests, better observability, more atomic changes instead of big changes, automatic rollback etc.
> And to me, AI should best be used to add rocket fuel to existing practices
The more your codebase follows best practices and consistent patterns, the better AI will do and the faster you can move.
Same as humans really, just even faster. I'm also excited that people are finally writing docs and without even any flogging! They're calling the docs "skills" but hey whatever works
My main grief with AI-generated docs is that they (unless the instructions were very clear on this) by default describe the path to the current code and how it is an improvement over what was before, instead of just explaining its purpose. I see this all the time when reviewing other people's code... Fortunately it is easy to add a generic instruction to project-wide CLAUDE.md to avoid this problem, but it would be nice if this skill came out of the box.
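For example, such a project-wide instruction might look something like this (the wording is illustrative, not quoted from any real project's CLAUDE.md):

```markdown
## Documentation style

- Describe what the code does and why it exists, not how it differs from
  what was there before.
- Never write "now", "previously", "improved", or any other comparison to
  prior versions in docs or comments.
- Docs and comments should make sense to a reader who has never seen the
  git history.
```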
There’s a chance that it doesn’t change previously assumed cost benefit, or at least not in the aggregate. There has always been more code than could be safely integrated.
I don’t think AI actually changes that we should always be questioning everything, including how much we question at a time.
I think what has really happened is a re-weighting of the importance of a lot of software practices. I think basically all of scrum/agile is completely useless now, but tests, PR reviews, documentation, decision records, etc, are more important than ever.
> To me the big thing I see in blog posts is this implication that “all software engineering best practices are out the window”
Yes, this is indeed a pungent smell. AI code assistants allow whole projects to be refactored and even rewritten in entirely different programming languages and software stacks in a few minutes, sometimes even with one-shot prompts. Most assistants even support creating and maintaining test suites with first-class support. Whatever you prompt, they do it.
And here we are, expected to believe that these tools can't or don't follow best practices?
> Yeah, I keep hearing people say how LLMs write amazing code now…
You keep hearing people saying AI coding assistants and coding agents can easily output working code. With enough work, they can easily output code that follows your own coding style and restrictions.
If you prompt a coding agent to write code following your personal choices and recommendations and it outputs less than amazing code... What does it tell you?
> Personally I have not seen this amazing code.
You get out of it exactly what you put into it. Garbage in, garbage out. I mean, one of the prompt styles they support is literally "implement this following the style used in this component". And people complain the code generated from your prompts and with your own code as a reference turns out to be crap? Strange. Moreover, code assistants excel at refactoring work.
The model is trained on a ginormous corpus of code. The problem is, most code is shitty. My code isn't.
Using a model means constantly fighting mediocrity, to the point where the trying to prompt it into shape often becomes more work than just writing the goddamn thing myself.
Yes, I can prompt. But I can't prompt understanding into the pattern matching machine. It will always revert to the undesirable mean.
Really? IME, if you use a separate session to write tests and if you plan ahead (meaning: you are the driver), you can easily cover all the cases you can think of, and then let AI suggest and implement those you missed. It is easy to fall into the trap of thinking you do not need to think, though.
I thought the same, but it depends on the context in which you work.
Below is an answer on Slack from our CEO when, talking about the Claude Code source leak, I said: « Dirty, un-architected code is the new norm; it makes billions, who cares… »
He answered:
> Well, yeah, who cares?
> This is where we need to differentiate between what truly needs to be clean (critical APIs) and where some random guy coding a product in a week will wipe the floor with a team of engineers with a clean architecture and no product after three months.
> What's more, this "vibe coder" is on the right side of history… Who's to say AI won't be able to just rewrite the code cleanly while keeping the core idea within 6, 12, or 18 months?
> This is also the question that drives business... and in business, "good enough" has almost always trumped "perfect." Except when you're making an ultra-luxury product like a Ferrari or something. Which software almost never is (if ever).
So when heads of companies don’t care about quality, they’ll push hard no matter what to have speed.
> So when heads of companies don’t care about quality, they’ll push hard no matter what to have speed.
This is especially true when the people who suffer the consequences of bad software are far removed from the company making it. You'll be forced to spend hours fighting with customer service over errors made by people using that bad software, but it won't impact the CEO of the company who vibe coded it. I hate that we're moving to a world where everything around is getting worse and less reliable while marketing companies try to convince us all that this is somehow progress.
> Who's to say AI won't be able to just rewrite the code cleanly while keeping the core idea within 6, 12, or 18 months?
Well, let's say it's 18 months from now and AI writes lovely, ideal code. At that moment, the AI would have eliminated the need for AI, right? If the code is good, you can just read it and edit it.
The selling point of AI is that you embrace the idea that your code is a mile-high stinking garbage heap, such that any human would be overwhelmed by the stench. Only so long as the best strategy for engineering is to pile the garbage as high as possible, as fast as possible, will the best tool for engineering be AI.
So my counter argument is: just wait 18 months and you can completely skip adopting AI.
> I’ve seen AI write a lot of buggy code. I’ve rarely seen AI write test cases that expose buggy code.
That's an odd statement to make, particularly with today's models. They can easily pinpoint concurrency problems and memory management issues. But here you are, complaining they write buggy code. What kind of prompting are you throwing at it?
It could be a prompt issue, but I write a lot of concurrent code, and I’ve given it a lot of attempts. I’ve been following model development since word2vec and friends so I think I have a good appreciation of the state of the art and how models understand context.
If there's one theme that's pretty consistent across all the reports I've seen on LLMs for coding, it's that they are both capable of very impressive feats and also capable of screwing up the simplest things.
> AI code assistants allow whole projects to be refactored and even rewritten in entirely different programming languages and software stacks in a few minutes, sometimes even with one-shot prompts. Most assistants even support creating and maintaining test suites with first-class support. Whatever you prompt, they do it.
> And here we are, expected to believe that these tools can't or don't follow best practices?
Uh they don't really. The contradiction you're seeing is actually fictional because that premise is wrong.
That just goes to show how far your experience goes. I have projects in my workspace to support the idea, and your baseless assertion rejecting the whole idea? What's more credible?
> The contradiction you're seeing is actually fictional because that premise is wrong.
Doubling down on baseless assertions means nothing.
As a dispassionate third party: your assertion is literally just as baseless unless you provide said base. It’s wild to shout down someone else when you yourself are doing the same thing.
> Now Claude is writing great commit messages but since I'm no longer looking at code - I never see them.
Let it be a learning opportunity for us, folks. This is why you shouldn't take comments on the internet too seriously. People (or bots) will say anything just to get attention.
p.s. Offtopic, but this is why I believe the ability to hide post history was the tipping point of Reddit's downfall.
> And I haven't written a single line of code myself since what - February maybe?
Have you measured the impact of that on your ability to create good code? From my experience, relying on AI tends to degrade that ability.
Also, you seem to be able to do all of what you say and benefit from AI tools because you understand the overall bigger picture well enough to drive the AI agents to do their work properly. In other words, you operate in familiar territory where you do not need to learn many new things.
But what about the junior people with little experience? Will they be able to manage such AI workflow? And more importantly, if junior people are given such AI tools, how will they learn?
These are all questions which may not matter in the short term and one might ignore them if they just want to see the profits and efficiency gains during the next cycle. But what about the long term?
I understand what you mean, but in my opinion there's a big difference between writing in natural language and actively engaging your brain with writing code, looking up documentation, etc.
It also sort of feels like "you don't know what you don't know", i.e. would you have considered an alternative better solution if you thought about it yourself, went to the documentation, found a tutorial on the web?
Of course, production is arguably a lot faster but it feels like there's starting to become a trade-off where the models feel so capable that we stop trying to find the solution to the problem ourselves and thus perhaps degrading our personal reasoning capabilities. I say this as something I'm afraid is happening, not something I'm certain of.
A compiler is a predictable, testable, deterministic piece of software.
An LLM is not.
Sure, all abstractions leak; so, at some point in time, for some reason, you may need to check its compiled code (cough cough, gcc 2.96). But if today your code compiles properly, it will compile properly tomorrow as well.
LLMs can be deterministic as well - same prompt on the same model produces the same input. On the other hand, compilers can be quite undeterministic - you get a new version of compiler, or change compiler options (turn on optimizations) - you might get a very different binary. And JIT compilers (and GC languages) even less deterministic, their compilation can depend on the nature of the inputs.
But I think, in the analogy compiler ~ LLM, the issue is more one of trust than determinism. It took assembler programmers decades to trust compilers enough not to write code in assembler. The same will happen with AI - some will embrace it sooner than others.
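The determinism point can be made concrete with a toy decoder (a sketch, not a real model: the logit values and the `sample_next` helper are made up for illustration). At temperature 0 the sampler always takes the argmax, so the same prompt yields the same output; randomness only enters when you sample at temperature > 0, and even then a pinned seed makes the draw reproducible:

```python
import math
import random

def sample_next(logits, temperature, rng):
    """Pick the next token index from a vector of logits."""
    if temperature == 0:
        # Greedy decoding: always take the argmax -> fully deterministic.
        return max(range(len(logits)), key=lambda i: logits[i])
    # Temperature sampling: randomness enters only here.
    scaled = [l / temperature for l in logits]
    peak = max(scaled)
    weights = [math.exp(l - peak) for l in scaled]
    return rng.choices(range(len(logits)), weights=weights, k=1)[0]

logits = [1.0, 3.0, 2.0]  # made-up scores for three hypothetical tokens

# Greedy is reproducible without any seed control.
assert all(sample_next(logits, 0, random.Random()) == 1 for _ in range(100))

# Sampling is reproducible too, if you pin the seed.
assert sample_next(logits, 1.0, random.Random(42)) == sample_next(logits, 1.0, random.Random(42))
```

Note that hosted models are often still not bit-reproducible even at temperature 0 (batching and floating-point nondeterminism on GPUs), which may be part of why the two sides of this thread talk past each other.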
> LLMs can be deterministic as well - same prompt on the same model produces the same input
> compilers can be quite undeterministic - you get a new version of compiler, or change compiler options (turn on optimizations)
That’s a whole other level of bad faith argument right here. Flags and options are input too.
> It took assembler programmers decades to trust compilers enough not to write code in assembler.
You do realize that Cobol, Algol, and Lisp are very old, and they were not assembly. And that Unix was written in C shortly after the language was created.
> That’s a whole other level of bad faith argument right here.
Not sure where you see the bad faith argument. (Btw I mean "same output", not "same input", it was a typo.)
Take for example JVM. It used to be horribly bad and unpredictable, performance wise, in the 90s. Sun tried to base a desktop environment on it - it didn't work.
> You do realize that Cobol, Algol, and Lisp are very old, and they were not assembly.
Of course! But people have been hand-writing assembler until late 2000s, because compilers were simply not that good.
The same will happen with LLMs - some people will not trust it and won't use it for decades, possibly. Some have already embraced it.
Your proof for the argument that a compiler is undeterministic is to change the whole compiler to another version and say it won’t produce the same output as the old one.
> But people have been hand-writing assembler until late 2000s, because compilers were simply not that good.
And we have software like Unix, emacs, ksh, awk… that’s all written in C. I strongly believe that the people who were writing assembly were optimizing stuff or dealing with constraints (like the 640kb of DOS). Just like today, you may still have to write assembly for microcontrollers or video codecs. Compilers were expensive, but people were paying for them.
> Your proof for the argument that a compiler is undeterministic is to change the whole compiler to another version and say it won’t produce the same output as the old one.
Fair enough. What I meant though was that compilation as a process is not deterministic, because often when you recompile couple years later, you're using a different compiler. (In modern world it can be much shorter time, actually.)
> And we have software like Unix, emacs, ksh, awk… that’s all written in C.
So? IIRC, first compiler was FORTRAN, invented in 1958. OpenAI Codex, first coding LLM, came out August 2021. So we are like in a year 1963. For this comparison, we have ten more years to produce (using a coding LLM) a compiler and operating system just from the textual specification, without an intermediate formal programming language. Funny - we have actually already done that (Claude C Compiler, VibexOS).
Are you saying AI writes code that is semantically wrong? Because I don't think humans write deterministic code - they come up with different solutions to the same problem.
This would only be somewhat equivalent if you compiled your code into assembly and committed that output to the repo, and then had to continue development within the assembly codebase using the same method.
How is that relevant to the topic of this discussion?
Compilation from higher order languages to the machine code is deterministic. It is sufficient to review and well-test the tool which does the translation. Given the same input, the output will always be the same.
Transformation of a natural language prompt to code by an AI tool is non-deterministic. The outputs will vary between runs. Therefore, it is always necessary to verify them.
Compilation is not deterministic, see JITs and GCs. What is deterministic is the resulting program's output, but not its performance. So with compilers, we traded away determinism of performance in exchange for ease of programming.
With LLMs, we are trading away the determinism of the program output as well, in exchange for even easier programming. Is that a good or a bad thing? There are ways to mitigate the problem, just as there are with compilers.
You could argue the determinism of the program output was never really there, because the specification at the high enough level was always unclear. So we are not really losing that much, just accepting more messy reality.
Then the only question remains, can these computer programs (LLMs) do a better job (and where) than a SW developer, who is supposed to translate unclear specifications into a formal language (source code). It happened with compilers - eventually they got better than all of assembler programmers. Same happened to chess players.
> Compilation is not deterministic, see JITs and GCs. What is deterministic is the resulting program output, but not its performance.
Does a JIT compile some other program code instead of the one being run? Does it produce bytecode for a different VM? Does it try to compile parts of the program that have not been executed or aren’t going to be?
Does a GC destroy objects that are in use? Does it ignore instances and memory that have been properly released?
JITs and GCs are deterministic algorithms; you can predict their behavior by just reading their code. LLM tooling involves an actual random generator for its output.
> Does a JIT compile some other program code instead of the one being run? Does it produce bytecode for a different VM? Does it try to compile parts of the program that have not been executed or aren’t going to be?
Sure, but the same is true for LLMs - the lead models no longer make trivial mistakes like answering "What is the capital of France?" wrong.
> JITs and GCs are deterministic algorithms; you can predict their behavior by just reading their code.
On large enough systems, you can't, just like it's difficult to predict weather. Determinism has little to do with it. At work, I have just witnessed a bug in JIT (it seems to have been fixed in OpenJDK 25). It inlined a wrong method. We weren't able to reproduce the error conditions without a private customer dataset.
And the fact is, historically, there have been many bugs in compilers, or they have been bad at their job, writing performant programs. The output (resulting program) of a good compiler is difficult to understand (because it is written to be efficient). LLMs (for the programming use case) are different quantitatively, not qualitatively.
It’s really weird how you shift the goalposts and your own definitions.
No one is saying that a compiler can’t have bugs. What we have been saying is that if we treat the compiler as a black box, we’re reasonably certain, given the input, what the output will be. And the output will stay the same if you keep the input the same.
But you can send the LLM the same prompt, and it will give you a different answer each time. And it’s not even about the verbiage used.
LLM doesn't have to be non-deterministic, it can work just like any other deterministic algorithm.
But I am not sure why the insistence on the relevance of (non)determinism, rather than on the chaotic relation of the output to the input (which is true for both compilers and LLMs). In practice, inputs to the LLM, as well as to the compiler, change. And the fact is, the output can change radically due to that.
I think nobody really sends the same prompt twice to an LLM, so nobody cares about it being deterministic. I think what you're looking for is something different, some form of stability (as opposed to chaotic behavior), although it's hard to define exactly, because in the case of LLMs theory lags behind practice. (And as I said - we already gave up on stability with respect to performance by using compilers. We resolve that issue by doing performance testing.)
> Compilation from higher order languages to the machine code is deterministic.
but that's not the analogy. there are problems that you can solve better if you can go deeper in the stack, and they can have different solutions.
Interactions with agents are conversational, while higher order langs are declarative. Spec driven development has been failing us, because there is no feedback loop from the runtime to the spec.
The usual response to this is the "but high level languages are deterministic blah blah blah" (which IMO would be a good enough argument but well, we know how this goes now)
I posit a different argument. When you install a compiler on your computer, that compiler is "yours" for as long as you have the binary. You are able to completely forget about assembly because of 1. reliable _enough_ compiler 2. reliable access to said compiler.
Let's rewind decades back and pretend that the very first compilers were behind a monthly subscription*. Do you think we'd be in the same place now?
Now, the natural follow-up to this is "but the open models are close to SotA now". Well, why aren't we using them? Do we really think we'd have a GNU moment for """open""" models? And are we willing to bet our industry on that?
But my point is, _these are not the same things_ and positing them as such is frankly insulting. How good are you at writing assembly when your compiler is inevitably taken away?
* I'm not a historian so I wouldn't be surprised some version of them were
This is a great point! And not only a compiler behind a subscription, it's also a compiler whose financial interests are not aligned to be the best compiler but the one that makes the most money, which is unclear what it means at this moment. Will it have ads? Will it give preference to some technology over another? Will it steal your code? It's an unreliable and opaque compiler!
We are though? It just depends on the task and the costs.
> Do we really think we'd have a GNU moment for """open""" models? And are we willing to bet our industry on that?
Yes and yes. We're in the mainframe era. But history this time around is passing us by at a ridiculously fast clip. Local models become "good enough" for new tasks by the day, after which they continue to shrink for a given performance level.
I'm not going to bet against either moore's law or relentless increases in model efficiency any time soon.
There is an argument that I’ve been seeing more recently that argues why we should expect open models to eventually reach good enough status that people use them over frontier commercial models.
Basically it boils down to geopolitics: the US economy is currently being propped up by a small subset of companies, and a lot of that is based on proprietary models and market speculation around them. China is going to continue to dump better and better free models out to compete, pulling the rug out from under all that speculation.
Good points and questions and I don't have the answers.
I've definitely thought about how I currently offer something to the company because I know these systems and can stop the agents from doing weird shit. But in a greenfield project I would probably (definitely) not get the same understanding.
I think as we figure out best practices - you'll start to see that you can rely less on humans. More tests, more acceptance tests of critical business logic, etc.
I think we might get to the point where I'm not really adding value anymore because I don't know the codebase well enough to stop the agent. But then again - the cost of code is getting cheaper, and when you reach the state where agents can't reliably work in the codebase due to AI slop - maybe you just start over? Kick off a few agents over the weekend and come back on Monday to a new (hopefully leaner) codebase? Run it in shadow mode next to the production service - ask the agents to fix any discrepancies and iterate until you have a simpler service?
I don't know what will happen. But I don't think we'll go back.
It's due to the jagged edge of the AI experience. Because it's not deterministic, the results don't play out deterministically (e.g. similar scenarios will have different, potentially drastically different, results).
What tools have you tried? Are we talking Codex GPT 5.5 and Opus 4.7?
Would you say the project is well architected? Clear boundaries? Or ball of mud?
How large is large?
Are there AGENT.md files giving good information that helps LLMs get context when looking at a certain area of the code?
Is it all in one repo? multiple repos?
Are there good tests?
I feel like these are some of the many variables that can make a difference.
I work on a pretty large project/code base, written mostly in Go, and I have pretty positive experience with LLMs. I take on fairly small chunks, I review and understand the changes. I also use LLMs to explore options and prototype quickly. They're also very good at fixing bugs, failing tests etc.
> What tools have you tried? Are we talking Codex GPT 5.5 and Opus 4.7?
Yes, with generous budgets.
> They're also very good at fixing bugs,
Seeing the opposite here too, they are like eager juniors 'oh the issue is here and here's a 5 page report why', and it's wrong... then you add more info and it goes to a different spot... repeat until you get tired and solve it yourself, it is useful as a rubber ducky i guess.
> I work on a pretty large project/code base, written mostly in Go, and I have pretty positive experience with LLMs. I take on fairly small chunks, I review and understand the changes.
Great that it's working for you, I'm just pointing out there's a massive disconnect.
I would assume your work can be done by a junior engineer without any prior knowledge (except the LLM md files) with the same quality but less speed?
If yes, then great, perhaps that's where the disconnect is, complexity.
Also, if yes, which would be cheaper: a junior engineer or an LLM?
Could you maybe explain in broad strokes what you are working on? I think it is very plausible that the disconnect is between people writing front ends/REST APIs vs people solving things like graphics.
> Seeing the opposite here too, they are like eager juniors 'oh the issue is here and here's a 5 page report why', and it's wrong... then you add more info and it goes to a different spot... repeat until you get tired and solve it yourself, it is useful as a rubber ducky i guess.
It's really amazing how different people have completely different experiences. I work on a massive code base and I thought AI would not be able to fix anything in at least a few years since the application is very complex and does not use well known frameworks. I was very wrong. In my experience, it fixes bugs better than I could, at least given a short time budget (which is always the case, if we spend too much time on each bug we just fix bugs slower than they get reported and we'd enter a death spiral).
I have worked on this code base for more than 10 years, touched every part of it, and I wrote large chunks of most systems, despite around 20 people working on it right now. Still, when I need to figure out something, now, I often ask AI as it is absolutely wonderful in understanding and explaining code, no matter how big the code base is. My team consists of 20 very senior developers, and I am their technical lead, so I think I know what I am talking about.
A junior would require at least 6 months of guidance to become productive in our code base, unfortunately, just because it's so big and it integrates with all sorts of external services, databases etc. I do understand that saying this is not really a flex, I would've actually preferred that my code base was so good even a junior developer could be immediately productive in it, but that's sadly just not the case. But perhaps, with the help of a AI tutor, that's actually possible now?!
If you think AI is at the level of a junior developer right now, I'm afraid you're kidding yourself.
> given a short time budget (which is always the case, if we spend too much time on each bug we just fix bugs slower than they get reported and we'd enter a death spiral).
This is something I don't understand.
- If you have a bug, you need to fix it well and find the proper root cause.
- That way the bug never surfaces again and safeguards are added for that class of bugs.
- If done well over time, this builds discipline, and bugs only surface from new features or integrations.
I've never had an experience of a 'death spiral' that you mention.
> Still, when I need to figure out something, now, I often ask AI as it is absolutely wonderful in understanding and explaining code, no matter how big the code base is.
Sure, but you still dig into the code afterwards I assume, you don't blindly trust what the AI summarization tells you.
> If you think AI is at the level of a junior developer right now, I'm afraid you're kidding yourself.
It depends, small projects with well defined scope, yeah, it knocks them out of the park, what I'm working on, it's a bit disappointing, not for lack of trying.
Still, one other thing I'm noticing now... if my account were not anonymous I would likely need to think of possible repercussions for my 'lack of faith' and would probably post comments very similar to yours or not at all.
> If you have a bug, you need to fix it well and find the proper root cause.
Can you spend 3 months fixing a bug and doing nothing else? You always have a time budget, whether you know it or not, even for your hobby projects. Do you not have users reporting bugs regularly? Any large product will have bugs, I see the biggest companies with the best engineers maintaining open source repositories with thousands of bugs, and the list just keeps growing. Internal products are even worse. All you need for your bug list to keep growing is one bug taking longer to fix than the rate at which bugs are reported.
> If done well over time, this builds discipline, and bugs only surface from new features or integrations.
Yes, and we have a whole lot of features coming out every release. We have a very large product. That's why we keep adding "bugs"! Not because we're fixing bugs that had already been badly fixed previously, if that's what you're thinking.
You've never seen a bug spiral? I must assume you're new to this industry. Bug spirals have killed many companies. It's very common to have code that's so bad no one can touch it without introducing lots of bugs. Fix one bug, 2 new bugs are introduced.
Luckily, where I work we have a lot of tests, so it's rare that we have regressions; the main cause of bugs is new features, especially big ones, as it's humanly impossible to review thoroughly enough that there are no bugs. That's where I think AI will help a lot - but we're still trying to figure out exactly how. Simply letting the AI review everything is not enough. And as I said before, humans just can't spot bugs to save their lives, me included.
> if my account were not anonymous I would likely need to think of possible repercussions for my 'lack of faith'
That's weird to hear, HN is about 50% AI enthusiasts, 50% AI skeptics, at least that's my impression.
I was skeptical until recently, but in the last few months of using Claude Code (and Copilot, but Copilot consistently performs worse), the LLM has become better than most humans IMO. I still write a bit of code by hand, though, simply because I can't help it, and sometimes I know I can do things very fast anyway, so why burn LLM tokens on the thing.
But sometimes I try to "correct" AI code just to learn later the AI was right (normally tests pick that up - we instruct the AI to write comprehensive tests, and it does it well... I normally review mostly the test code and less so the implementation).
I am almost at a level where I believe not using LLMs to write code professionally is akin to not using static type systems: you're refusing to let the computer help you for no reason. It's not about faith, it's about using the tools that make our jobs easier and our output better. I know not everyone is there yet, but I definitely feel like I am.
> I feel like I'm in a different field compared to the rest of hacker news.
That should be my line. My new employer does not use LLMs at all. Software development, marketing, hardware development, nothing. Maybe too little, but whatever.
The problems the company is facing are entirely unrelated to "throughput".
There's not, sorry. I can only advise you to look outside the "tech sector" (FAANG and the smaller wannabes).
As implied, my employer's product is not software, but rather hardware. This hardware does of course run firmware and software and needs to interface with other systems. It's entirely B2B. All this combined makes work relatively relaxed.
The magnitude of negative responses to this comment is very encouraging.
Not because I agree with my sibling comments, but because I strongly agree with the parent, making me think my org and I are much earlier than I thought. :)
Maybe next time write with more conviction instead of doing the "But it all feels... " shy 14 year old attitude in an attempt to be "neutral" and "mature" in your own confused words.
Shill AI less and either learn to code or fuck off cause I will personally poison your drink and every brown's drink who's sabotaging civilization by faking it """"till they make it"""""you'll never make it, you'll crash and burn like you did for millions of years since Pakistan & India and I'm telling you to stay away from us just like China & India have always been separated.
>We still require a code review , by whom, by which standard?
By codemonkeys who don't understand the code and soon enough by AI judging other AI's code cause you're that lazy, incompetent and irresponsible to push blind guesswork into something beyond a hobby project, something as serious as fucking security and even vehicle AI.
I feel the same and don’t get the extreme AI is inherently evil vs. AI is the best thing ever invented discussions. For me it’s all just emacs vs vi or tabs vs spaces kind of discussions.
It’s a tool and the good old sh* in sh* out principle applies.
People might take Mitchell’s comment as some kind of anti-AI stance, but it’s not: he uses it regularly and makes a point in the X comments: “use AI, but think”
That comment sums it up best, because right now it’s hard to talk to either side, which separates at the comma.
I’m also in a big tech company, and a lot of the team hasn’t written any lines of code by hand for a while; it’s causing a whole lot of tech debt, and frustrations are beginning to boil.
I’m not sure it’s possible to force someone to read every line of AI generated code and understand it. People generate code faster than they take time to read it.
Pressure from C-suite to AI AI AI AI AI MORE AI AI AI AI doesn’t help.
I believe your anecdote. I also agree with what you wrote below: "Tautologically, it’s mature enough for what it is mature enough for"
What programming language are you using? It seems like some programming languages are more mature in LLMs, e.g., Python, Java, C#, maybe Golang. (Oh yeah, and definitely JavaScript/TypeScript.) Rust, Zig, C++: I have a harder time believing you can manage a large project using only an LLM to write code.
It's a two months account hyping AI (look at the comments).
And to answer your question: No. I am yet to see a product made by AI or a product that used to require a dozen engineer and a few years being made by a single engineer in a month. Anything demoed is always a UI/functionality clone of the same thing LLMs regurgitates.
I'm in a big tech company everyone has heard of and we have seen a huge spike in incidents which correlates with how much new code is shipped due to AI. Perhaps it's to AI's credit or our engineers' credit that the spike is relatively 1:1 with the spike in new code.
It's causing problems in all parts of the business and leadership's answer is that we must use AI to make fixing incidents faster and automated rather than assess whether we should be shipping enormous amounts of buggy code every day...
It may be the case. I've been around in the industry for 25 years and I barely code. I babysit multiple instances of Claude and we were very purposeful and deliberate in altering our workflows for it; we made our local dev environments capable of spinning up multiple instances to work from parallel worktrees. We added MCP servers to let LLMs observe our CI, Jira and deployments.
Most of our time is spent doing spec work, planning, and injecting the proper context into LLMs. Like the OP, our metrics have drastically improved the time for delivery of new features, slightly improved bug resolution times, and now we're bottlenecked by needing more code review and manual QA to handle the workload.
Why is there a manual QA step? If AI was that good you would go straight to prod. Actually, have the agent deploy live with full control over the whole production environment.
Insurance systems with dozens of integrations and multiple iterations of UI frameworks with QA that has deep domain knowledge who understands how the pieces interact with each other in ways most devs don’t.
If you actually have time to read all your code, understand it, and are willing to be bottlenecked by human understanding, then yes, you are living in a different world.
In my world, that is far too slow, and you will be seen as a low performer who just can't keep up with the tech.
> And I haven't written a single line of code myself since what - February maybe?
And how many lines of Markdown have you written? Pointless metric. I think I type more now because I don't get any helpful autocomplete for... English.
> I haven't written a single line of code myself [...] I need to understand the code
What's the difference? I don't think anybody gets paid by how efficiently they type on a keyboard. If you use a die or raise a crow to get your next keypress, I honestly don't think your PM cares, as long as the actual output you contribute to the project is something you are responsible for.
I'm not saying it has no implications on how you think or no costs socially, ecologically, politically, solely that nobody cares HOW you get the code, only in your ability to keep on making it increasingly work better, closer to the evolving needs of the project.
I think this divide has something to do with the way people are using these tools. I do a lot of planning in my documents and I rarely use conversations except to iterate on something I wrote instructions for.
Microservices in big companies, where you have to first write the spec and then fully understand the changes, are maybe among the use cases that benefit least so far.
When you work on just a new mobile app, this is where I find AI is making the biggest difference.
On mobile you don't need specs and you don't need to understand every detail of the implementation. You can QA test the app on a real device. It gives me more confidence than just having written the code myself, and it's much faster. You can implement multiple major features in a single day.
This kind of e2e testing is just not possible with backend services.
Some programmers are gardeners. It sounds like you're one too. Your job is to maintain a large existing codebase. You probably didn't understand the entire codebase before AI, nobody did, so it doesn't matter that you don't understand it now. AI is very good at gardening, nobody doubts that.
Other programmers are painters. Their job is to start with a blank canvas and create something that others will value. When AI tries to paint, it tends to produce slop: a facsimile of everything it's ever seen.
The right metaphor isn't painting, though, it's molding clay. That first pass is slop, but it's raw clay that the agent is very good at molding given a modicum of direction and "not this, do that" comments. The combined first-pass and reshaping time is still far less than writing by hand from scratch. And increasingly, that first pass is ... not bad?
Not all code is fixable. Sometimes the best thing to do with code is to throw it away.
Without any human code to grab on to, AI has a habit of writing code that is pervasively low quality and rife with misunderstandings such that it always needs to be thrown out.
And yes with considerable prompting effort you can improve this picture. But it's easier, faster and cheaper to just write the code yourself. Code is the best specification language we have.
Our experience is very similar except we didn't really have a review process before, and now LLMs find bugs before PRs get merged in main.
We had 5x-100x speedups in some legacy but important pipelines, with no regressions (validated after extensively by humans).
It's not that the code was actively bad. It's just that only 1-5% of people in the local SWE market would be able to write code that runs this fast and efficiently and benchmark it correctly.
We found a subtle correctness bug that was in production for half a decade (both GPT-5 and Claude Opus were able to find it), confirmed by a human after.
And we keep finding subtle bugs that have been introduced by humans before (despite the human reviews, the particular domain is just difficult no matter how many docs and comments and tests one writes)
I am convinced human reviews are overhyped in the industry. We've done it in my company since we started it, and bugs keep happening. People are just terrible at spotting them in the middle of 100 lines of correct code.
Machines, OTOH, are very good at it. I am currently trying to make the code review experience better for humans by not just having the AI review the code, but interact with the human, pointing out potential problems, bad patterns, perhaps hiding some code (e.g. renamings, formatting changes).
Developers still want to review the code, despite provably being bad at spotting bugs, because they want to actually keep knowledge of what's being modified in the code base, so I think this is the best approach.
Maybe the humans are just overwhelmed by the amount of poorly readable AI code you're throwing at them? Maybe they'd be better at reviewing if the code was written by somebody who had put thought into the code instead?
Like we had done for the 10 years prior? Don't think so. BTW the AI code is as readable as the humans'. Never had to call people out on AI code being unreadable.
I have not had the same experience. In the PRs I have read, AI accomplishes in 300 very verbose lines what a competent human could in like 60, quintupling cognitive load to review.
But that's so easy to fix! I can't believe people complain about stuff like this. The person making the PR could've told the AI "hey, you can easily reduce the amount of code by using this technique" or whatever... or said they prefer concise code to verbose code in the system prompt so the AI behaves how they want it to... If you can't do any of that, and I can guarantee that if you did you wouldn't have this problem, then you're not really trying.
I’m at a FAANG and we have $300/day token quota. Personally I don’t use that much of it but management is pushing really hard for it. “the quota has been raised for a reason, use it”. Any task: “have you tried working on it with Claude?”. Every meeting “now engineer x and y will show you what he did with AI”.
It’s not all useless but most of the days I think I would be more productive if some processes were streamlined rather than if I had to throw tokens at them and still fail.
Of all the showcases I’ve seen the best are the ones written by people assuming that the token bonanza will not last so they used AI to build tools they wished they had. AI used to build the tool but by no means used by the tool, so if/when token quota gets reduced we still have a functional tool.
I use $30 a day to produce a decent amount of code. Certainly more than we need - thinking about/designing the correct solution/distilling requirements is still the bottleneck. How can you possibly even review $300/day worth of output?
I'm just waiting for my current company to have a Sev 1 CritSit so I can document the bejesus out of the root cause and expose our non-technical AI evangelist leadership as the sort of goons most of the senior development staff already suspect.
Only by walking us into some revenue or customer impacting failure - through inappropriately having junior devs doing senior level things - will some sense of sanity start to prevail again.
Maybe this is what will turn software engineering into an Engineering field.
Right now, prompters are setting up whole company infrastructures. I personally know one. He migrated the company's database to a newer Postgres version. He was successful in the end, but I was gritting my teeth when he described every step of the process.
It sounded like "And then, I poured gasoline on the servers while smoking a cigarette. But don't worry, I found a fire extinguisher in the basement. The gauge says it's empty, but I can still hear some liquid when I shake it..."
If he leaves the company, they will need an even more confident prompter to maintain their DB infrastructure.
As a junior dev there is this pressure to produce code, add features, and investigate bugs within unprecedented time frames. I know the whole code base is fked up but I will still add that feature or do a sloppy bug fix without digging deeper.
In my experience, AI really lowered the bar for bad code in the name of delivering faster.
I have seen people write highly complex code where all the complexity was not necessary. Think: deep unnecessary branching, pointless error handling and retries which make no sense in our context, hand-coded parsing using regexps, haphazard data flow, functions which seem purely computational but slyly make API calls, pointlessly nullable model fields, verbose doc comments which describe the implementation instead of the contract. I could go on.
The worst part is, even when "prompted" by bad coders, it works in the end. Even has tests (invariably mock-ridden, a pet peeve of mine which always falls on deaf ears). So I cannot reject the PR without being an asshole.
I am no luddite. I make heavy use of AI, with all the skills / AGENTS.md / style guides and clear specs, then review every line of code, prefer testing with minimal mocking. I'd even say with right prompting, it can write better low level code than me (eg: anticipating common error conditions).
But my biggest fear about AI is how it enables normies with little to no understanding of CS principles to produce code faster which looks correct but slowly poisons the codebase.
I have a friend, smart guy, who is writing web services and “connecting them together” for a large firm; he has absolutely no programming experience.
Talking to him, he told me he couldn’t even reverse a string. He is at once many times more valuable than ever before to his company, but also far more dangerous than ever before.
This is what fascinates me. I have a friend, also a smart guy, who has made it to the point he's at by being a kind of solutions expert. He's an IT guy, basically. He's very technical but has never claimed to be a software engineer. He's writing software with Claude now. The other day he sent me a screenshot of some other team at his work asking him to shut off something he made that was brutalizing an API of theirs. I asked him if he had ever heard of a 429 or exponential backoff. He said no. How do you meta-prompt for that without the knowledge?
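For what it's worth, the missing pattern is tiny. A minimal sketch of retry-with-exponential-backoff on a 429 (the `call_api` callable and the `RateLimited` exception are made up here for illustration; a real client would map them to its HTTP library):

```python
import random
import time


class RateLimited(Exception):
    """Stand-in for the server answering HTTP 429 Too Many Requests."""


def call_with_backoff(call_api, max_retries=5, base_delay=1.0):
    """Retry on 429, doubling the wait each attempt, plus jitter."""
    for attempt in range(max_retries):
        try:
            return call_api()
        except RateLimited:
            # 1s, 2s, 4s, 8s, ... jitter keeps many clients from
            # retrying in lockstep and re-flooding the server.
            delay = base_delay * (2 ** attempt) + random.uniform(0, 0.1)
            time.sleep(delay)
    raise RuntimeError("still rate limited after retries")
```

Honoring a `Retry-After` header when the server sends one is the polite refinement on top of this.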
When I read the discussions about AI making code worse I keep bringing the same argument: people made bad code even before AI. Average coder is barely functioning and that's a fact.
And we were safe from them because they couldn’t produce a mountain of code every day. But soon many places will be buried under a planet of unmaintainable code. It’s adding friction and operational cost and often not adding value.
> Maybe this is what will turn software engineering into an Engineering field.
Oh man, I think you may have touched the third rail here.
My first job out of high school was as an AutoCAD/network admin at a large Civil & Structural firm. I later got further into tech, but after my initial experience with real Engineering, "software engineering" always made my eyes roll. Without real enforced standards, without consequences, it's been vibe engineering the whole time.
In Civil, Structural, and many other fields, Engineers have a path to Professional Engineer. That PE stamp means that you suffer actual legal consequences if you are found guilty of gross negligence in your field. This is why Engineering firms are a collective of actual Professional Engineer partners, and not your average corporate structure.
The issue is that in software dev, we move fast, SOC2 is screenshot theater, and actual Engineering would slow things way down. But, now that coding is fast, maybe you are correct! Maybe vibe coding is the forcing function for actual Software Engineering!
___
edit: I just searched to see if my comment was correct, and it turns out that Software PE was attempted! It was discontinued due to low participation.
> NCEES will discontinue the Principles and Practice of Engineering (PE) Software Engineering exam after the April 2019 exam administration. Since the original offering in 2013, the exam has been administered five times, with a total population of 81 candidates.
Note that other types of engineering are also often vibes based. The mechanical engineering for a rocket engine is extremely rigorous, but the engineering for an injection molded housing for a cheap cell phone is a lot more about following a few heuristics and getting it out the door. Even in robotics where I work, it's mostly about making parts that pass whatever acceptance tests you come up with.

In civil engineering and aerospace, failure costs human lives and millions or billions of dollars. In robotics maybe you have some machines fail in the field, but in many instances you have one overarching safety system and many of the parts are irrelevant to that. The camera housing, for example. So no paper trail or mathematical design validation is required to prove you designed it right. Often those are desirable, but if you just manufacture it and test it a lot you're probably fine.
This was something I noticed in my early career in mechanical engineering and later doing PCB design and software for robotics. It’s easy to find firms that just need adequate parts without the professional certifications or ass-covering calculations of other engineering fields.
All this to say, it’s not just software versus the rest of them. From my position, civil and aerospace seemed more like the exception while much of the rest of the engineering world is more vibes based.
> Maybe this is what will turn software engineering into an Engineering field
I think it’ll be the opposite. Maybe it’ll be what eventually cements the field as a “talent”-based field. Just like it was difficult to quantify what makes one flute player better than another, how good you are at endlessly prompting a black-box machine would be the only measure. The engineers of old who developed kernels and drivers would be thought of as the “crazy people who put the flute against their temple to tune it.” LOL, we don’t need people like that. You can just buy a flute tuning device. Who gives a fuck? Can you make the next “Shake it, Shake it”?
I work in software in a medical setting. We are piloting an integration with a startup for measuring [some bodily variable relevant in an ICU setting]. They are obviously vibecoding (the docs are telling) and their API is failing in unexpected ways that they are not able to resolve. I am just waiting for the day this harms somebody.
Now imagine if you’re one step removed. You don’t see the cigarettes, smell the gasoline, nor see the fire extinguisher gauge. You only see the servers running business-as-usual. Those “engineering” guys are always drama queens, you think. We have processes and fire extinguishers when shit hits the fan, right?
That’s basically every M2, and many if not most M1s, in the last 10 years. So fuck it. Why does any of it matter?
I feel in a really weird position where I both really dislike what AI is doing to the experience and practice of writing code, to the point where I want a job doing literally anything else besides using the computer, but also think that these tools are extremely powerful and only getting better.
I think Mitchell's point is well taken -- it's possible for these tools to introduce rotten foundations that will only be found out later when the whole structure collapses. I don't want to be in the position of being on the hook when that happens and not having the deep understanding of the code base that I used to.
But humans have introduced subtle yet catastrophic bugs into code forever too... A lot of this feels like an open empirical question. Will we see many systems collapse in horrifying ways that they uniquely didn't before? Maybe some, but will we also not learn that we need to shift more to specification and validation? Idk, it just seems to me like this style of building systems is inevitable even as there may be some bumps along the way.
I feel like many in the anti camp have their own kind of reactionary psychosis. I want nothing to do with AI but I also can't deny my experience of using these tools. I wish there were more venues for this kind of realist but negative discussion of AI. Mitchell is a great dev for this reason.
I've never had more fun coding, but the key is actually still writing the code yourself. The LLM has terrible judgment but encyclopedic knowledge and the ability to pick out important details in a sea of information. Their worst use is producing code, but somehow that gets all the energy. Being an LLM babysitter is energy draining and you feel less and less in control. No job is worth being miserable doing something that you used to enjoy.
> But humans have introduced subtle yet catastrophic bugs into code forever
So now the AIs will do more of that, at superhuman speed.
> will we also not learn that we need to shift more to specification and validation
We'll just quickly learn what we've been trying to do for decades, while also treading water in floods of more code than has ever been written before? And some of the motivations to write correct code are being deflated - "just vibecode it again and see if the bugs disappear, it only took a week and $200."
Recently I had a request come through to allow finance analysts to vibe code their apps. During a discussion one of the finance managers let the cat out of the bag. Turns out our CFO had met fellow CFOs at a get together. They talked about how each of them were using AI. Our CFO was lagging behind and felt that we need to "accelerate" our usage of AI. He wants to push it just because he lost a bragging contest.
I don't know why you think a "real" industry would work in the most idealized way. The media heavily reports on the stupid insane crap of the tech industry, that doesn't mean every other industry is sane they're just not as vocal on Twitter.
I call this Dinner Driven Development. That feeling of being Patrick Bateman when everyone is sharing their calling cards must be every C-suite's nightmare.
I think AI rescue consulting is going to become a significant mode of high value consulting, similar to specialists who come in to deal with a security breach or do data recovery.
Purely AI written systems will scale to a point of complexity that no human can ever understand. The defect close rate will taper down and the token burn per defect will scale up, until eventually AI changes cause on average more defects than they close and the whole system becomes unstable. It will take a special kind of process to clean-room such a mess and rebuild it fresh (probably still with AI) after distilling out core design principles to avoid catastrophic breakdown.
Somewhere in the future, the new software engineering will be primarily about principles to avoid this in the first place, but it will take us 20 years to learn them, just like original software engineering took a lot longer than expected to reach a stable set of design principles (and people still argue about them!).
A non-technical friend of mine has just won some hospital contracts after vibecoding w/ Claude an inventory management solution for them. They gave him access to IT dept servers and he called me extremely lost on how to deploy (can't connect Claude to them) and also frustrated because the app has some sort of interesting data/state issues.
As a SWE that has only ever worked for an employer or on his own projects, this makes me wonder: how would someone even get such a contract? Did this person already have a consulting business? Do you just call up random hospitals and ask if you can demo an inventory management system for them? Did this person already know people at the hospital? I know technical folks that do independent consulting, but even with a vibecoded product, how is it that anyone can just get such a contract?
People really have a misconception about the sums of money that companies operate on on a regular basis. If you are a people person and know how to sell yourself, you can "scrape" money on the fact that nobody is going to look or think too hard about some contract that represents a tiny fraction of the year's budget.
What concerns me about this is that as these stories multiply and circulate people will just completely stop buying software/SAAS from startups, because 90% or more will be this same thing. It will completely kill the market.
Those are custom software or heavily customized implementations of ERP and similar systems for very large organizations. I’m talking more about the SMB market where today it’s possible for a small team to carve out a niche and make a nice living or even bootstrap a venture that competes with a large player that has poor UX or antiquated feature designs.
The reason Oracle can continue failing at those massive projects is simple: everyone fails at them routinely and often it’s the customers fault.
I used to gripe about various ERP companies but after having dealt with enough, yeah, that's just what the world of ERP systems is like. You will spend your time even with the best of them desiring to scream endlessly at everyone who works there. And they also know your pain but are powerless to help.
But the Torment Nexus is such an interesting technical challenge! and I don’t personally torment people: I just move protobufs around! - Software Engineer #1 and #2 excuses
> On January 3, 2022, the jury found Holmes guilty on four of the seven counts related to defrauding investors: three counts of wire fraud, and one of conspiracy to commit wire fraud. She was found not guilty on four counts related to defrauding patients
Or you end up with a certification process, which will of course introduce its own problems, but startups doing things the right way and not just "moving fast and breaking things" can thrive.
This hospital will learn some hard lessons. I hope their backup strategy is good. I'm surprised they can field software from an entity that isn't SOC2 & HIPAA certified.
No worries! At worst, the contractor can just tell Claude to make sure the hospital knows they're appropriately certified. And the hospital can use Claude to make sure the certs are valid. Everybody wins, except the ones who end up dead. Or with their health destroyed.
As a cybersecurity IR professional as much as I hate to see this happen to a hospital this kind of thing is responsible for essentially tripling my income over the last 3 years.
This is going to happen all over. Company I'm currently contracting with has gone AI everything (aka technical debt hell), and they're gonna suffer for it. I'm glad my consulting contract ends in 2 months. I don't want to be around for the crash
I'd really like to know how he won contracts, just in general. Did he have some connections? And he doesn't even know how to get it to run on a server by himself? There are millions of people who can do that; if he can win contracts, why worry about vibe coding at all, just hire someone to do it. Winning contracts is the challenge in my view.
I work at a university and we still have some workstations that need IE as well, for a healthcare vendor app that needs ActiveX. Up until recently we even had some machines running Windows 7.
Heh. Got a customer recently around this. Entire infrastructure and CI/CD vibecoded. They half implemented Kubernetes in Github Actions that were several thousand lines long and impossible to understand.
I think the problem will get worse. I dislike the marketing around AI, but I do think it is a useful tool to help those who have experience move faster. If you are not an expert, AI seems to create a complex solution to whatever it is you were trying to do.
> If you are not an expert, AI seems to create a complex solution to whatever it is you were trying to do.
I've been watching non-developers vibe code stuff, and the general failure mode seems to be ignorance of pick-2-of-3 tradeoffs.
They'll spam "make it more reliable" or some such, and AI will best-effort add more intermediary redis caches or similar patterns.
But because the vibe coders don't actually know what a redis cache is or how it works, they'll never make the architectural trade-offs to truly fix things.
I’ve noticed something similar with vibecoded game rendering logic submitted by peers. Sometimes it will be peppered with extraneous checks for nullptr, or early returns on textures that have zero size.
I often wonder if it’s the statistical nature of the LLM mixed with a request in the prompt.
AI LOVES defensive coding. I asked you for code to filter and reduce an array. I didn't ask you for a method that makes sure the array exists and is an array before it does anything else.
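To illustrate the contrast (both versions hand-written here as a caricature, not actual model output — the task is just "sum the even numbers"):

```python
from functools import reduce

# What was asked for: filter the evens, reduce to a sum.
def sum_evens(xs):
    return reduce(lambda acc, x: acc + x,
                  filter(lambda x: x % 2 == 0, xs), 0)

# The flavor the model tends to hand back: the same logic buried
# under guards nobody asked for.
def sum_evens_defensive(xs):
    if xs is None:
        return 0
    if not isinstance(xs, list):
        raise TypeError("expected a list")
    evens = []
    for x in xs:
        if isinstance(x, int) and x % 2 == 0:
            evens.append(x)
    total = 0
    for e in evens:
        total += e
    return total
```

Both return the same answer on well-formed input; the second just quintuples the lines the reviewer has to read.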
Shelve it with the Jurassic Park version where John Hammond builds a safe, profitable theme park, and The Andromeda Strain that gives people the sniffles.
I don't understand this point of view at all. There's a symmetry that is going entirely unappreciated by most of the comments in the thread: just as I can give Claude X,000 words of text to use to describe the code I want it to write, I can also give it some existing code and ask for X,000 words of text explaining what it does. (Call it, oh, I don't know, a "spec," maybe.)
The explanation, in turn, can be fed back to recreate the functionality of the original code.
At that point, why care about the code at all? If it works, it works. If it doesn't, tell the model to fix it. You did ask for tests, right?
That is where we're indisputably headed. It's not quite a lossless loop yet, but those who say it won't or can't happen bear a heavy burden of proof.
Code is not spec. There is an implementation spectrum.
On one end, you have code that can perform only the behaviour explicitly declared in the spec, but has to be thrown away and rewritten for any new or updated spec.
On the other end, you have code that implements or anticipates a wide range of future possible specs including the given one.
The AI can operate on any point on this spectrum, but it's not very good at choosing. The more complex the software, the more such choices need to be made.
When the number of bad choices reaches a certain critical mass, even a skilled engineer becomes powerless to undo all the bad choices, and even a powerful model becomes unable to reduce it back to a coherent spec.
Some people are mindful about what they get and don't get from amazon and don't die from prosperity. ("you might use AI to increase your prosperity")
the rest of the world eats too much and dies of heart disease/diabetes. ("the rest of the world will flounder more and AI will do more stuff to them than for them")
I've already done a handful of these gigs for early vibecoded products that had collapsed in on themselves. The scope of work was to stabilize the product and only make existing features work.
The issues have all been structural, not local. It's easier to treat it like a rewrite using the original as a super detailed product spec. Working on the existing codebase works, but you have to aggressively modularize everything anyway to untangle it rather than attack it from the top down.
All of these projects have gone well, but I haven't run into a case where a feature they thought was implemented isn't possible. That will happen eventually.
It's honestly good, quick work as a contractor. But I do hope they invest in building expertise from that point rather than treating it like a stable base to continue vibecoding on.
I've worked with many people over the years. A bunch of product people have struck out to make their own thing now that they can get a feedback loop going. I just keep in touch with people. They know my services are available, so if they have a need they reach out.
The greatest asset in this type of work is genuinely liking people, being good at what you do, and keeping in touch. My email is easily findable for a reason.
This might not pan out to be the glorious victory of human craft as you’re imagining it to be.
Here’s a slightly different future - these AI rescue consultants are bots too, just trained for this purpose.
Plausible?
I have already seen claude 4.7 handle pretty complex refactors without issues. Scale and correctness aren’t even 1% of the issue they were last year. You just have to get the high-level design right, or explicitly ask it to critique your design before building it.
This. I have this buddy, who is not an idiot by any stretch of the imagination and more adventurous than me in some ways ( I don't really run agents on my machine ), but when I was looking at his prompts, I sometimes question how he gets anything done at all. It is vague and angry demands.
With GPT 5.4 or 5.5 I did not notice degradation in performance when it was working on a large 5k line file containing a WebView, JS scripts, as well as native UI.
I instructed it to split the file up anyway, yet I wonder how often the concerns around the mess are imagined rather than practical.
> Maybe in the future but certainly no evidence of this anytime soon
Here's some anecdotal evidence from me - I cleaned up multiple GPT 4.x era vibecoded projects recently with the latest claude model and integrated one of those into a fairly large open source codebase.
This is something AI completely failed at last year.
Maybe you should try something like this, or listen to success stories, before claiming 'certainly no evidence' in the future?
There are untold billions of dollars to be had if you can make this future come to pass. You don't need AGI to make it happen either. You just need to keep making the context windows bigger and keep coming up with updated training data. It's not the outcome I want, but it really does feel within reach. The only limiting factor is going to be token count and cost to process/generate those tokens. But if you don't particularly care about quality, costs are going to have to go up by several orders of magnitude before you start to regret firing your software engineers.
I don't know what happens in a decade when there are no junior engineers, skilled senior engineers are becoming rare, and the only data left to train LLMs on is 200th-generation slop. But AI slop being qualitatively slop is not enough of an obstacle to prevent that future from coming to pass. And billions of dollars will be "saved" along the way.
I have personally had success telling Claude that some AI-written system is too complicated and ask it to rewrite it in a more logical way. This sometimes results in thousands of lines of code being deleted. I give an instruction like that if I see certain red flags, eg:
1) same business logic implemented in two different places, with extra code to sync between them
2) fixing apparently simple bugs results in lots of new code being written
It’s a sign I need to at least temporarily dedicate more effort to overseeing work in that area.
I somewhat agree with the AI psychosis framing of the OP. It takes some taste and discipline to avoid letting things dissolve into complete slop.
I'm no expert, but the skeptic's opinion I've heard would be to ask:
What evidence is there that we're not at or close to a plateau of what LLMs are capable of? How do you know the growth rate from 2023 to present will continue into 2029? eg. Is it more training data? More GPUs? What if we're kind of reaching the limits of those things already?
I think we're close to the plateau of what LLMs can do, but they will keep improving. IMHO the results are already showing diminishing returns.
The (leading) LLMs work by consensus, like Wikipedia, OpenStreetMap, web search engines, or the open-source movement.
What I mean is if I ask LLM "create a linked list", its understanding (of what I want) is already close to the expected ideal. Just like Wikipedia article on linked list, for example.
But the LLMs will continue to improve in breadth and depth of understanding the world, although technically (what they CAN do) they probably already peaked. Similarly, the OSS movement technically peaked in the 90s with the creation of a compiler, an operating system, and a database; that doesn't mean new open source isn't being created.
There is so much money at stake, and so much money pouring into AI development, that I think we are going to continue to see gains for a while. People keep coming up with new agent harness techniques like chain of thought, tool calling, and memories. And then the big LLM companies figure out how to actually train their models to optimize the use of those techniques. To claim that we are reaching the top of the plateau is to claim that we are out of effective ideas for improvement. I think that's a ridiculous claim, the technology is too new. And because of the strong incentives to keep making these things better, it's pretty much a given that people will continue to explore ideas until we really are out of effective ideas. I don't think anyone apart from professional AI researchers have any idea where this is all going to settle.
Well depends what you mean by peak. I was answering parent's question of what LLM's CAN do. It's not about peak of technology or humanity itself.
LLMs (or, more specifically, the GPT architecture) are 8 years old. The technology has matured. I am not sure how you imagine it being significantly improved, from a user's point of view, without some kind of paradigm shift (i.e. something significantly different from GPT or LLMs).
Although I can imagine one important social innovation yet to come: a generally available big public LLM that "anybody can train". We had the technology of the "encyclopedia" for years (famously Britannica), yet the concept of Wikipedia was a truly new take on the encyclopedia.
Also, new kinds of AI might emerge. For example, we might formalize all types of human reasoning and build a reasoning AI, as well as a model of human language, from scratch rather than by training a GPT (and thus more understandable and potentially smaller). But that won't be an LLM.
> I am not sure how you imagine it being significantly improved, from a user point of view, without some kind of paradigm shift
I proposed how. New harness techniques and new training data/techniques, so the harness gets better and the LLM can be trained to work better with the harness. There's no reason to believe we're out of momentum for improvement in that direction.
Yeah but what do you mean by (substantially) better in this context, what is the outcome? Modern models can understand the requirements as well as humans can.
However, they also make mistakes like humans do. I don't think a better harness or better training will fix that because, fundamentally, they cannot read your mind if you put in an ambiguous prompt.
I like to compare the process of turning inexact text into a formal language to an error-correcting code. If you haven't made too many mistakes, or have been precise enough in the specification, it will self-correct and do what you want. But if your input is too ambiguous, it will never do exactly what you want, only something close to it. And people (who are using AI) are still learning where that boundary is and how to tell.
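The analogy can be made concrete with a toy 3x repetition code (purely illustrative, not how any LLM works internally): a single flipped bit per group self-corrects, but one flip too many decodes to something close to, yet different from, what was sent.

```python
def encode(bits):
    # 3x repetition code: transmit each bit three times
    return [b for bit in bits for b in (bit, bit, bit)]

def decode(received):
    # Majority vote per 3-bit group; corrects any single flip in a group
    return [1 if sum(received[i:i + 3]) >= 2 else 0
            for i in range(0, len(received), 3)]

msg = [1, 0, 1]
sent = encode(msg)

sent[1] ^= 1                   # one "mistake": still decodes correctly
assert decode(sent) == msg

sent[0] ^= 1                   # too ambiguous now: decodes to the wrong
assert decode(sent) != msg     # message, close to it, but not it
```

In the analogy, a mildly imprecise prompt is a correctable error; a deeply ambiguous one decodes to a plausible-but-wrong interpretation.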
The companies building these models are training them to react to typical expectations. If you have some special need, you will always have to tell the model, otherwise it will not know your exact context. And the harnesses have many tools for that or try to do that automatically already.
Ultimately, you are describing a fundamental problem with induction -- Hume's problem of induction, to be specific. How can we know that anything that has been shown empirically in the past will continue to be true? We can't. Best to investigate mechanistically.
I don't see why we would assume that we are at a plateau for RL. In many other settings, Go for instance, RL continues to scale until you reach compute limits. Some things are more easily RL'd than others, but ultimately this largely unlocks data. We are not yet compute/energy/physical world constrained. I think you would start observing clear changes in the world around you before that becomes a true bottleneck. Regardless, currently the vast majority of compute is used for inference not training so the compute overhang is large.
Assuming that we plateau at {insert current moment} seems wishful and I've already had this conversation any number of times on this exact forum at every level of capability [3.5, 4, o1, o3, 4.6/5.5, mythos] from Nov 2022 onwards.
Since we're not experts, we treat it as a black box. What are the results? Is the quality of the results improving? Is the improvement accelerating or decelerating?
And the answer appears to be that the improvement is accelerating. So how could it be stopping?
I don’t think improvement is accelerating. We went from “computers can’t do these things at all” to “now they can” in a few years with the discovery of transformers, and now we get “it can do the same things, except incrementally better, at a drastically higher cost” every few months.
I don’t think that the current AI paradigm has infinite headroom for improvement, similar to how every other AI approach before it eventually hit a limit.
Incrementally better, at higher cost? A model I'm running on a 10-year-old entry-level computer is better at programming than GPT-4 was. That's multiple orders of magnitude of improvement in a few years.
And the link I posted shows the amount of work a query can do increasing non linearly. You can explore the site for more detail and a graph that shows error rates getting halved every couple of months.
No one said anything about infinite. It doesn't mean we don't have headroom to spare.
Software itself took 80-120 years to get where it is today depending on how you count. Time is on AIs side here.
Are you sure about this? Yes, there is a stable set, but they are used in all of the wrong places, particularly in places where they don't belong because juniors and now AIs can recite them and want to use them everywhere. That's not even discussing whether the stable set itself is correct or not - it's dubious at this point.
What you're describing really isn't a new problem for organizations. Historically it's been a team of humans not using AI who gets over their skis and they have to have other more capable humans (also not using AI) to bail them out.
That sounds so horrible, though. It's akin to people working as COBOL devs because someone has to do it, so they'll get the big bucks. Except I've never heard of anyone who actually likes COBOL and the more I've learned about how mainframe development actually works, the more horrified I've become haha. Dealing with an LLM spaghetti codebase sounds like hell.
> Purely AI written systems will scale to a point of complexity that no human can ever understand
But won’t those more complex systems presumably solve more complex problems than the systems that humans could build? Or within a comparable time?
I think it is reasonably safe to assume at this point in the game that these AI systems are increasingly able to reason rigorously about novel problems presented to them, of ever increasing complexity and sophistication.
I'm with you on this one, having "vibe coded" some smaller internal tools on GPT 5, and then re-vibed it on Opus 4.6 and 5.5 -- they basically just fixed all of the problems without me doing much of anything other than prompting it to look at the existing code and make it "better".
Pretty much. We're intensely vibe coding something that has gone through so many requirement changes that the code has become very gnarly. I took a stab at basically a one-prompt rewrite of the whole thing. It wasn't all the way there, but it was 80% of the way there, and a hell of a lot cleaner.
Those design principles it will take us 20 years to learn are just today's principles for writing good, maintainable, debuggable, understandable code. It will just take us 20 years to figure out that they still apply when AI writes the code, too.
The complexity you would come to the rescue to solve, would that be from AI or from the style of programming you let the AI have? I mean, you have very different problems if you use functional style vs object-oriented. It is up to the programmer to realize they want a functional style and request that from the AI, as much as possible. Even AI cannot imagine every state transition, unless it is so smart that it should be the one telling you what to do.
> I think AI rescue consulting is going to be come a significant mode of high value consulting
I thought the same when I saw development outsourced to Indians that struggled to write a for loop.
I was wrong.
It turns out that customers will keep doubling down on mistakes until they’re out of funds, and then they’ll hire the cheapest consultants they can find to fix the mess with whatever spare change they can find under the couch cushions.
Source: being called in with a one week time budget to fix a mess built up over years and millions of dollars.
My company and my buddy's company are experiencing the same thing. We are trying to fire a SaaS vendor and it's become the hot new project. Now we go to these meetings with 50 different people who are allegedly stakeholders, plus two or three product managers who have already vibe-coded their own version of something.
Ultimately, if you want to move fast, it's better to just have one engineer vibe coding something. But that engineer is under so much pressure. Now he's got one legacy version, and then another, because the requirements keep changing. And now there's a deadline in four weeks.
This all could work just fine, but the ungodly amount of attention that this world is getting puts too many cooks in the kitchen, which is always a recipe for disaster.
Someone responded to a previous comment of mine [0] positing a Peter principle [1] of slopcoding — it will always be easier to tack on a new feature than to understand a whole system and clean it up. The equilibrium will remain at the point of near, but not total, codebase incomprehensibility.
I really am surprised that people on a heavy CS themed forum still have trouble grasping this.
Imagine the year is 1995. C exists, but some guy out there is working on essentially what modern Python is. He says to you, "check out this language, you can just import stuff and dynamically modify anything at run time". You could probably come up with hundreds of arguments about things that could go wrong, like memory cleanup, threading, etc., but it turns out that, incrementally, they were all solved, and we have a modern Python that is basically good enough to build these large LLM models.
Now imagine modern programming and computing is what C was back in 1995, and AI use is that guy building the Python code.
You can imagine anything you want, but it’s not an argument - you could apply this to anything. “Python was successful after a dubious beginning so NFTs will be successful”
Also, Python does not build or run large language models. It orchestrates C code that does that, and it was probably good enough to do that in 1998.
Yes. And as the models get better, it works better. But at one point you do have to understand the code because it's also just guessing as to what your actual intentions are.
It doesn't know what mess you want to clean up. A lot of times AI just starts making up new patterns on top of other patterns and having backwards compatibility between the two. How does it know which one you actually like?
Frankly this is what everyone is counting on whether they know it or not. The question though is not “will the models get good enough?”. The question is does the repo even contain enough accurate information content to determine what the system is even supposed to be doing.
People are often skeptical when I say this, but there's simply no guarantee that it's possible in principle to clean up a bad architecture. If your system is "overfitted" to 10,000 requirements from 1,000 customers, it may be impossible to satisfy requirements 10,001 through 10,100 without starting over from scratch.
It's really not that big of a word. The CAP theorem shows that as few as three reasonable-sounding requirements with no obvious conflicts can be impossible to satisfy simultaneously. (User needs will start more flexible than strict mathematical requirements, of course, but once people start to build production workloads on top of your systems that flexibility is radically reduced.)
> Purely AI written systems will scale to a point of complexity that no human can ever understand and the defect close rate will taper down and the token burn per defect rate scale up and eventually AI changes will cause on average more defects than they close and the whole system will be unstable.
Wow, it’s true, AI really is set to match human performance on large, complex software systems! ;)
Humans who have been writing systems like that for many years know how to maintain and modify them successfully. It’s just that our industry has a bias towards youth who don’t think they have anything to learn from those who came before them.
How do you explain to a junior that this pile of messy code isn't crap but is actually years of integrated knowledge? That the most common principles discussed in computer science (OOP, SOLID, DRY, etc.) are really just guides that aren't to be taken to extremes?
A decade ago, I was sitting in on a meeting about a rewrite and, before I could say anything, someone in the first year of her career asked why anyone thought a rewrite would be any cleaner once all the edge cases were handled. Afterwards, I asked her where she learned this. She said "I don't know, it just seems kind of obvious." She went on to be a great engineer and is now a great manager.
I work on internal facing software and every rewrite I've seen in 20 years suffers from the same symptoms. The code/system is a mess because it has been exposed to reality for a decade. Reality is messy. That's why they pay us money, believe it or not.
Greenfield guy comes in, promises the world, and starts from some first principles white papered architecture. It's really lovely until they onboard the first user. Then they slowly commit all the "sins" (features that drive revenue) of the first system.
The firm is stuck supporting N systems indefinitely, because the perfect new system takes so long to cover even 30% of the original system's use cases that management takes a flier on... bear with me... a second rewrite. Now they have 3 systems.
I've seen more 3rd systems than I've seen actual decommissioning of original systems into a single clean new system.
The answer is chipping away, modularizing, and replacing piecemeal Ship of Theseus style. But that does not drive big hires and big promotions.
Yeah... in my experience people who code like that 'successfully' make modifications that fix an immediate problem while kicking another bug or two further down the road in a never-ending sunk-cost-fallacy of job security...
My team lead has worked on the same software for 30 years. He has the ability to hear me discuss a bug I noticed, and then pinpoint not only the likely culprit, but the exact function that's causing it.
I do the same thing in a project I’ve worked on for 25 years. I’ve had mediocre at best results with AI. It’s useful to discuss concepts with, but the code never handles the nuances of the edge cases.
Yep, this is like comparing master craftsmanship with a production line. You're going to get good attention to detail and a masterpiece from one, and a limited thing that will break after a few years from the other. But for the majority of use cases the second one is enough. And pointing out that the master craftsmanship is "better" is beside the point.
And with one you need to train a guy for 25 years and with the other you need plan mode for a few minutes and then it runs 24/7.
Do we? We have many buildings being built and very few master masons, or whatever, nowadays. The number of craftsmen needed to build a 10-story building is very limited. That's what we should aim for with software: far fewer experts needed for the same outcome, so more people can benefit from software.
I want the people building the buildings I live, work and shop in to know what they’re doing so those buildings don’t fall down or let in the wind and rain or require too much maintenance.
And the equivalent for software: it's usable, intuitive, responsive, stays up and running, and doesn't leak my private data.
No house I ever lived in was made by experts. The apartment building I grew up in was built entirely by minimum-wage guys who may or may not even have spoken the language of the building overseer, and who had zero specific training or certifications. Some architect somewhere did the plans for a standard building, which the developer purchased and just used.
Then the only "experts" (not even close, just a guy with a form and some technical training) are the building inspectors who come at the end to verify if some stuff is done up to code.
Other than the original architect who drew the plans that got used for many buildings, and the electrical engineer who cleared the electrical work, no experts were involved. This is basically how the whole city, and most of the country, was built.
There's no expert mason or painter or whatever involved. Just a dude that can hold a paint roller. That's the same as going from a craftsman programmer to some dude with claude. Individual quality goes down, but more importantly price goes down way more and so many more people get access to much better quality than having nothing.
there is a large incentive for computer programmers to build themselves up in importance: higher wages, better love lives, more status. but most software is pretty mundane and straightforward, or at least should be. fancy architectures rarely pay off and the best solutions are sometimes the most obvious. although i could be suffering from that phenomenon people in maths have, where they struggle to understand something and then, once they grasp it, feel dumb, like "ofc i should have known that!"
I have really tried as an "old" person in the field to try and pass on the stuff I've learned, but "craft" and such really has absolutely no home in modern dev culture. The people who care about history, the craft, etc. are increasingly rare.
The origin of 'dark DNA' begins to make more sense through this sort of lens, except the system somehow maintained a level of compensation to fix all its flaws.
Is this true because the training companies have not been training AI for both performance and brevity (or some other metric like that)? If this becomes a much more serious issue, surely they would adjust the training process.
> Somewhere in the future, the new software engineering will be primarily about principles to avoid this in the first...
It's really nowhere near as complicated as making distributed systems reliable. It's really quite simple: read a fucking book.
Well, actually read a lot of books. And write a lot of software. And read a lot of software. And do your goddamn job, engineer. Be honest about what you know, what you know you don't know, and what you urgently need to find out next.
There is no magic. Hard work is hard. If you don't like it get the fuck out of this profession and find a different one to ruin.
We all need to get a hell of a lot more hostile and unwelcoming towards these lazy assholes.
It's kind of like producing code is becoming more like farming.
We didn't create the DNA we rely on to produce food and lumber; we just set up the conditions and hope the process produces something we want instead of deleting all the bananas.
Farming is a fine, honorable, and valuable function for society, but I have no interest in being a farmer. I build things; I don't plant seeds and pray to the gods and hope they grow into something I want.
Prayers are for weather. Pretty much all farmed plant, animal, and fungus species have been selectively bred or genetically modified. Farmers know what's going to grow.
Farming involves a lot of study and input into the process, but very little actual control and no determinism at all. We know how to improve the odds, is all. The fact that we breed and "engineer" is like a drop in the bucket.
You might grow corn, or you might grow defective unusable corn and/or any number of other things like locusts or fungi or other plants that decide to grow in the place where you planted corn. Sure, the corn seeds will not produce ball bearings. Genius observation. There are about an infinity of other things that can and do happen besides that.
Planting is merely setting up the conditions. We didn't write the DNA; we couldn't write the DNA if we wanted to, because we are an infinity away from understanding all the actual processes that descend from it. And when we utilize DNA that we simply found, and didn't and couldn't hope to write, it's always, at best, a case of hoping it goes right again this time.
Tell me you've never done any farming without telling me you've never done any farming. There is certainly risk in the business due to market fluctuations, weather, natural disasters, disease, and pests. But the final product is highly deterministic. Almost all genetic variability has been expunged from major food production species in a relentless pursuit of predictable yield. Everything looks and tastes the same. We can debate whether that's a good thing but it is the reality for most farmers.
If it were deterministic, there would be no such thing as blights and other forms of failure. There would be no problem with the bananas, or coffee, or wine grapes. There would be no critical few days of the year where, if anything goes wrong, you lose the entire year because it was too humid or too cold or your equipment was out of commission for a week. The bees wouldn't matter at all.
Even when it works, even if you put in a lot of work and experience and understanding, it still just worked by itself and it's just good luck every time.
Interesting perspective. Fundamentally at conflict with the data, science, and 20+ year trends of AI coding systems - to the point of dogmatism. But interesting from a sociological point of view.
yes, I was never so happy to work in Germany. People used to joke about the proverbial fax machine still being a thing but I've never been so glad to work in a culture where this mania doesn't exist. Reading HN is like entering Alice's Wonderland of token maxxers and AI psychotics. Genuinely don't know a single person here who is forced to work like this.
Actually, I have been wondering to what extent the AI craze has reached the DACH region. I don't work for any company and neither do my friends. HN is essentially my only peephole into the world of commercial software development, and I'm aware that it's extremely biased towards Big Tech and SV startup culture.
I work at a hosting provider that has pretty conservative customers who don't want to host on AWS/Azure due to data privacy / safety concerns, among other things.
For us, sending customer data to the US is a big no-go.
We have been experimenting with LLM usage, first through a Gemini subscription, then also with the Claude API. Participation has been lightly encouraged by management. As for coding, we haven't let the LLMs loose on our core components, but tooling on the fringes (like deployment scripts, reporting) has seen some uptick in LLM usage.
We have also started building an on-premise inference cluster, which is in alpha testing, and where the "don't include customer data" restriction doesn't apply anymore.
do you mean this aesthetically or quantitatively? Are they actually outcompeting / making more money ? Or do you mean they are now looking more desirable because their competitors are racing to the bottom (though likely making money on the way down)
No offense, but if you think your use of AI in the development and design of your site, voxos.ai, gave you a competitive advantage, it didn't. I can instantly tell when someone used an LLM to build their whole site, and let's just say... it's not a good thing.
It is absolutely going to be a competitive advantage if it isn't already. When your competitors' products suck because they are using LLMs to write them, and yours work because you aren't, customers notice.
Every power user of LLMs thinks they're the one who knows how to hold it correctly. In reality they usually have a major case of Dunning-Kruger and are convinced they're living in some hyper-productivity mode, when actually they're all just copying each other, making low-effort slop that sounds the same, looks the same, and does the same things.
I'm going through a mixed experience regarding this, personally.
Management is really pushing AI. It's obnoxious, and their idea of how it fits into my team's job specifically is completely, hilariously detached from reality. On the off chance someone says something reasonable, unless it fits the mold, it's immediately discarded. The mold being "spec-driven development". We're not even a product team, for crying out loud. I straight up started skipping these meetings for the sake of my sanity. It's mindwash, and it's genuinely dizzying. The other reason I stopped attending is that it ironically makes me more disinterested in AI, which I consider to be against my personal interests in the long run.
On the flipside, I love using Claude (in moderation). It keeps pulling off several very nice things, some of which Mitchell touched on in this post (the last one):
- I write scripts and automation from time to time; Claude fleshes them out way better with way more safety features, feature flags, and logging than I'd otherwise have capacity to spend time on
- Claude catches missed refactors and preexisting defects, and does a generally solid pass checking for defects as a whole
- Claude routinely helps with doing things I'd basically never be able to justify spending time on. Yesterday, I one-shotted an entire utility application with a GUI to boot, and it worked first try; I was beyond impressed.
- Claude helped me and a colleague do some partisan cross-team investigation in secret. We're migrating <thing> and we were evaluating <differences>. There were a lot of them. Management was in limbo, unsure what to do, flip-flopping between bad options. In a desperate moment, I figured, hey, we kinda have a thing now for investigating an inhuman amount of stuff in detail, so I put together a care package for my colleague with all our code, a bunch of context, a capture of all the input data for the past week, and all the logs generated. My colleague put his team's side of the story next to it and, with the help of Claude, did some extremely nice cross-functional investigation. Over the course of a few weeks, he was able to confirm about a dozen showstopper bugs, many of which would have been absolutely fiendish, if not impossible, to fix (or even catch) if we had gone live without knowing about them. One even culminated in a whole-ass solution re-architecting. We essentially tore down a silo wall with Claude's help in doing this.
So ultimately, it really is a mixed bag, with some really deep low points and some really nice highlights. I also just generally find it weird that a technical tool [category] is being pushed down people's throats with technical reasoning, but by management. One would think this goes bottom-up, or is at least a lot more exploratory. The frenzy is real.
This will be pushed down by people who have no deep understanding of it. But it does check some boxes in an ISO certification.
Well, now you must work with a confusing tool which slows you down. You are not allowed to use Claude directly anymore, because someone heard that mythos is really bad for security. But hey, the tool integrates well with Jira!
You hate every second working with this thing. All the joy you had with explorative coding is forever gone, which was the sole reason you entered this field.
Deep inside you know that you can't change your job, because every other employer will cut its workforce as AI removes all manual labor of a software engineer and reduces risk to a minimum.
Oh, now we can finally move all those jobs to india without risk and shareholders will love it! How awesome is that! Wait, do we still need that guy in cubicle 42, who bitches and moans about AI every day? Nah...
Hard to have a sober talk about this, since a lot of the discourse is AI psychosis vs. AI naysayers. Does software quality seem to have taken a jump in the past few years to anyone? Not to me; it seems to be getting worse. I think that's a decent signal. I can tell you I'm dealing with a non-technical VP who loves blast-submitting vibe-coded PRs, and while there are some quick wins, the overall quality is bad, and we had our first real production outage that Claude one-shot caused but could not one-shot solve.
There's an acceleration of current known processes that is being referred to as agent speed (vs human speed). But this is purely a mechanical effect. There don't seem to be augmentive cognitive effects. "AI has invented this revolutionary algorithm/workflow/architecture" is an article title you'd expect to see pop up quick, and often.
You're speaking of my company and I'm forever grateful.
I'm afraid to say this out loud internally because I'm afraid of the next round of layoffs and I want to keep my job. So I just keep on shipping at a high pace, building massive cognitive debt and hoping the agents will get so good in near future, that there won't be the need for understanding the codebase.
> hoping the agents will get so good in near future, that there won't be the need for understanding the codebase
Agents might get better. But who will own the code and take responsibility for it? The AI agent? The company who created the AI agent?
If e.g. a car crashes and does not deploy its airbags because the AI agent made a mistake in the airbag code, will the manufacturer be able to shift the blame to OpenAI or Anthropic?
I do not think so.
And therefore I believe that no matter how good the AI agents will ever become, the ultimate responsibility for the code will always remain with the companies that create the code. Regardless of which AI tools they use.
I see no other way for the company to bear that responsibility than to have people internally who are responsible. And those people, if they actually want to own that responsibility, need to understand the code themselves. Because relying on a non-deterministic AI agent's vetting is fundamentally unreliable, in my opinion.
The developers signing off on this will be "Human crumple zones" to protect the company from liability. Be very cautious if asked to sign off on anything like this.
Bug reports also go down when people lose faith that they will be fixed, because reporting them is often a substantial time commitment. You see it happen pretty regularly as trust in a group/company collapses.
The last three times I filed detailed bug reports as a client, all I got back were AI replies asking the same questions I’d already answered in the original report and suggesting alternatives I’d explicitly said I’d already tried. No wonder people don’t write bug reports anymore.
Add to this the real possibility that a significant share of the reports that do get filed might be AI-generated or AI-rewritten, with a high chance of being misreported, or containing incorrect parts, because of it. So it's an attack on multiple sides.
And we haven't even gotten into potential adversarial tactics. If you have no morals, what is better than using agents to flood your competitor with fake bug reports?
Just let AI filter out the fake reports! Then let AI work on the real ones. See, there's really no problem "more AI" can't solve (as long as you're willing to ignore all of the underlying ones). "Pay us to create the problems you'll have to pay us to fix for you" is one hell of a business model. It basically prints money.
I agree, and I'd like to point out that this problem isn't unique to AI driven projects. I think much, if not all, of what Mitchell has been observing can readily happen without AI in the mix.
"Just use autoresearch and it will fix your app's memory leaks in an hour" is what I was nonchalantly told by someone who has never written a line of code ever.
I guess what I relate to the most is how dismissive people get about real software engineering work.
I may have skill issues, but I have yet to reach the level of autonomous engineering people tend to expect out of AI these days.
The AI psychosis is not the anti-opinion to the use of AI.
I use AI coding tools every day, but AI tools have no concept of the future.
We've relied on the selfish thinking an engineer has, "If this breaks in prod, I won't be able to fix it, and they'll page me at 3AM", to build stable systems.
The general laziness of looking for a perfect library on CPAN so that I don't have to do this work (often taking longer to not find a library than writing it by hand).
I've written thousands of lines of code with AI tools that ended up in prod, and mostly it feels natural, because since 2017 I've been telling people to write code instead of typing it all on my own, and setting up pitfalls to catch bad code in testing.
But one thing it doesn't do is "write less code"[1].
> I use AI coding tools every day, but AI tools have no concept of the future.
> The selfish thinking that an engineer has when they think "If this breaks in prod, I won't be able to fix it. And they'll page me at 3AM" we've relied on to build stable systems.
Maybe it's just my prompt or something but my coding agent (Opus 4.7 based) says things like "this is the kind of thing that will blow up at 2am six months from now" all the time.
It's really inconsistent though.. it takes shortcuts and leaves todos all the time without really calling it out explicitly, you have to pay close attention.
This reminds me of Rich Hickey’s “Simple Made Easy” and his approach in making Clojure.
Even before LLMs generating entire programs, complex frameworks allowed developers to write the initial versions of programs very quickly, but at the cost of being hard to understand and thus hard to debug or modify.
Some of us are betting that the AIs will always be smart enough to debug, maintain and modify the programs written by AI, no matter how convoluted or complex. I’m not so sure.
> I lived through the great MTBF vs MTTR (mean-time-between-failure vs. mean-time-to-recovery) reckoning of infrastructure during the transition to cloud and cloud automation.
What's the historical context for this MTBF vs. MTTR reckoning?
If you optimize for MTBF, you optimize for a long time between failures. You optimize for the system not going down in the first place, but when it does go down it might be Pretty Bad.
If you optimize for MTTR, you don't care how often you go down and instead optimize your recovery time to be as short as possible.
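To make the two definitions above concrete, here is a minimal sketch of how each metric could be computed from an incident log. The log entries, field names, and timestamps are all made up for illustration:

```python
# Hypothetical incident log: each entry records when the system went
# down and when it recovered (timestamps in arbitrary time units).
incidents = [
    {"down_at": 100.0, "up_at": 105.0},
    {"down_at": 300.0, "up_at": 302.0},
    {"down_at": 700.0, "up_at": 701.0},
]

# MTTR: average time from going down to recovering.
mttr = sum(i["up_at"] - i["down_at"] for i in incidents) / len(incidents)

# MTBF: average uptime between one recovery and the next failure.
gaps = [nxt["down_at"] - cur["up_at"]
        for cur, nxt in zip(incidents, incidents[1:])]
mtbf = sum(gaps) / len(gaps)
```

Optimizing MTBF pushes the failures further apart; optimizing MTTR shrinks each `up_at - down_at` window, even if failures stay just as frequent.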
Not the GP commenter, but I'm still struggling to understand how this relates to the AI world, or perhaps more importantly, what the historical context was. Did people end up switching to MTTR optimization over MTBF optimization? If so, is the implication that the recovery times got lower but software instability went up as a result?
There are concerns that AI might/will make mistakes. Instead of optimizing for producing perfect code, they think AI can fix bugs as fast as it produces code, so they optimize for MTTR. Sounds like a decision made by people who don't write code regularly, since there is this architectural drift where you are no longer aware of what's happening in your codebase. As a junior guy I so want this to happen.
To give a timely example, think GitHub and what its leadership is thinking/optimizing for. Do you care if you’re down once or twice a week vs how long those down times are? What’s the KPI you’re managing GitHub with?
Currently (and by currently I mean the last 4-5 years) they only cared about MTTR. That was probably the only metric they measured and cared about. When a system went down it fired an LSI, a "Live Site Incident" (as opposed to a CRI, a "Customer Reported Incident"). At the time you grilled your team. Eventually you come to the conclusion that an LSI should only be measured by MTTR; MTBF is meaningless because MTBF limits your "ship new features" velocity.
You might scoff at GitHub and the "ship a new feature" concept in the last 5 years, but if you're an enterprise customer you'd know how much nonsense they shoveled out in that time. Absolute insanity of "what the fuck"-type features, because customer X who is paying $$$ is asking for them.
MTBF = optimizing quality (reliability, uptime, correctness) of AI product
MTTR = optimize the ability to correct failures when they occur.
He's describing leaders who believe quality no longer matters because any faults or deviations can be corrected so quickly that it doesn't make any sense to waste time on quality.
Yes that’s very correct. The way I think of it, MTTR is easier to measure and manage as a manager. MTTR is all about “operational excellence”. Basically, when shit hits the fan, how good are we at figuring out what caused it and how to fix it. That’s a muscle that you can train, the script goes:
- What alerts are we missing that could have helped us catch that earlier?
- What dashboards could we have had to help diagnose the issue quicker?
- What Ops tools could we have had to help mitigate such issue quicker?
- What extra logging/metrics/telemetry could we add to help us catch this quicker?
- What “safe deployment practices” could we have employed to avoid/improve this?
- what processes could we enforce to facilitate all of that?
Rinse and repeat that a few hundred or thousand times while monitoring the MTTR KPI and you will see that number improve, most likely through your team "gaming it".
MTBF is much, much trickier to measure or "manage out". It's about "excellence in engineering", which is neither measurable nor controllable. You want a random feature X. Your team tells you that's really not how the system works, and they want a few months to make the change slowly while observing the system. But you don't want just X; you want X, Y, Z, W, V, Q, A, B, C, D, all the way through AAZZW12. So you tell the team to go fuck itself.
Same grifters optimizing for MTTR are now pushing even more reckless use of AI, because “accidents will happen anyway, so we need to prioritize speed”.
Before the cloud, people were trying to reduce the mean time between failure (MTBF) essentially trying to prevent a thing from failing. With cloud, people are trying to recover as quickly as possible (mean time to recovery) accepting that things will fail —- it’s about how fast you can react to it.
There's a lot of people writing bad code. With AI being forced top down (with the promise of turning people into 10x-ers), we're going to get a lot of people writing bad code 10x faster.
I really do worry, especially about security. You thought supply chain security management was an impossible task with NPM? Let me introduce you to AI: you can look forward to the days of AI poisoning, where AIs will infiltrate, exfiltrate, or just destroy, and there's no way of stopping it because you cannot examine the internals of the system.
AI has turbo charged people's lax attitude to security.
Not security, but I ran into a related supply-chain issue recently. I needed a library to perform a moderately complex task, and found one in the ecosystem I was working with that had been around for a while, appeared reputable, and passed my cursory inspection. So I dropped it in, got the feature implemented, and moved on.
Some time down the line, I discover CPU being maxed out, which is showing up in degraded performance in other parts of the system. I investigate, and I trace the issue to a boneheaded busy loop in this library that no human with the domain expertise to implement the library would have written. Turns out I'd missed one deeply-buried mention in the README that maintenance was being done via AI now, and basically the whole library had been rewritten from the ground up from the reliable tool it used to be to a vibecoded imitation.
Yeah, yeah, sure, bad libraries existed before all this. But there used to be signals you picked up on to filter the gold from the dreck. Those signals don't work anymore.
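For illustration, the failure mode described in that story looks roughly like the following sketch. This is a hypothetical reconstruction, not the actual library: a tight poll on a flag pegs a CPU core, where blocking on an event would consume nothing while idle.

```python
import threading

done = threading.Event()

# What the vibecoded rewrite did (sketch): spin on a flag in a tight
# loop, burning a CPU core while "waiting".
def busy_wait():
    while not done.is_set():
        pass  # 100% CPU doing nothing useful

# What a human with domain expertise would write: block until
# signalled, consuming no CPU while idle.
def blocking_wait():
    done.wait()

t = threading.Thread(target=blocking_wait)
t.start()
done.set()   # signal completion; the blocking waiter wakes immediately
t.join()
```

Both functions "work" in the sense that they return once `done` is set, which is exactly why the busy loop can survive a cursory review and only show up later as mysterious CPU saturation.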
The longer I look at the AI transformation, the more it seems like a people problem than a technology problem. The technology is undeniably there. The people are all over the place.
I am watching a 10 person company try to run 3 different AI initiatives in parallel. Everyone wants to be "the guy" on this one. I cannot imagine there will ever be a bigger opportunity to ego trip as a technology person. This is it. This is the last call before it's all over. There are many businesses out there that are beyond traumatized by human developers taking them on bad rides. The microsecond they think this stuff will work they are going to fire everyone.
The psychosis comes from the tension here. We effectively have The Empire vs the rebel alliance now. I know how the movies go, but in real life I think I'd rather be working on the Death Star than anywhere else.
Company I just left is reportedly now using Claude to analyse the metadata generated from the company MDM that tracks actual laptop use, and then pulling people up if they're not working "enough".
They're also reportedly now giving staff AI-related "homework" in an attempt to force staff to use AI more.
I'd like to chime in and mention that it's really obvious how to RL a coding agent to get the human addicted ASAP, and it's also clear that there's a ton of $$$ to be made by doing this. Therefore it's done. The only LLMs I use are the ones I run locally, because I know they aren't RL'ed for that metric (there's no incentive for the company that made them to make their open-weights models addictive).
I think there are a few things, but it's a little subjective, and it's more about the style the AI uses when doing these than the actual specific behavior:
- Suggesting improvements to the code after finishing the task you gave it; very irritating when the improvements were obvious and the AI didn't implement them on its own
- Not trying very hard when implementing something, leading to bugs, which leads to more tokens used (this behavior can be incentivized and learned with RL)
Since it's a known fact whether a user continues a session after the LLM says something, it's not hard to train against this. The least efficient way to do it would be to GRPO directly against the user base and try to get as many people as possible talking to the AI, and with OAI having a billion monthly active users, even the least efficient method would work really well for them.
The race to invent variants of Gas Towns, Ralph loops, pump out videos, blogs, etc. showing off greenfield development with cleverly named agents running in parallel is another case of engineering people diving head first into Resume Driven Development.
Sure, there are industry-changing things going on. But what if you're working on an app that's a decade old and has had different teams of people, styles, and frameworks (thanks to the JS-framework-a-week Resume Driven Development)? Some markdown docs and a loop of agents isn't going to help when humans have trouble understanding what the app does.
I find talking about X psychosis (or generally using mental illness metaphors) unproductive. It sets up the conversation to be "nothing else to do with this person".
Maybe the problem is you, but you won't figure that out if you think the other person has psychosis.
For example, maybe you need to do a better job explaining, changing your language, simplifying things, being more concrete with consequences.
Or maybe you aren't understanding that the other person has different objectives/ loss function that makes them make seemingly weird conclusions.
I don't entirely know what rational discussions cannot be had.
It seems like he is pointing out that AI will increase the complexity of a system to oblivion, and that this is the discussion that cannot be had.
But I am more than happy to talk about how I am using AI to reduce complexity and remove architectural debt that I otherwise could not justify spending time on.
So it sounds like he’s not talking about you. He’s talking about people who actively choose to ignore complexity risk and refuse to have a rational discussion about it because they believe AI will always be able to fix it.
So rewriting gets cheaper and cheaper. New features fall more or less into the same category. Refinement doesn't.
The question is: Will we live in the world of breathless re-implementation, new features every week, rebranding every quarter or will we eventually discover the value of stability, software that does its thing more or less optimally for decades?
Recent examples of things like curl or Firefox are interesting in that regard. Will we end up with a nearly perfect HTTP user agent and stick with it for decades?
The primary issue here is that CEOs and investors are particularly vulnerable to AI psychosis which is then forcibly propagated to the rest of the organization. Understandably, the perceived benefits are almost impossible to ignore, compounded by the FOMO of the AI first/AI native narrative being sold by AI influencers.
Honest comment: it is transition time. This is the time to make bets and take positions. Your humble position, maybe.
I already took a couple of decisions. It will go wrong or well. But it was decided a year and a bit ago.
If you think the future will be different, stop doing the same things you used to do, the same way you used to do them.
My analysis is that the labour market will increasingly bargain down salaries and put pressure on you. So how safe is that compared to before? Maybe working for someone as a full-time employee is not the best thing you can do anymore.
It's always funny to me that people don't realize full test coverage just means every line is hit, not that everything is correct. (I don't view that as an argument against tests, but with AI it's especially important: if you aren't careful, it'll be very happy to produce coverage that is not quite right.)
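A toy illustration of the coverage trap: every line of this hypothetical function is executed by the test, so line coverage is 100%, yet the bug survives because the test asserts the buggy behavior rather than the intended one.

```python
def absolute(x):
    """Intended to return |x|."""
    if x >= 0:
        return x
    return x  # bug: should be -x, but this line still gets "covered"

def test_absolute():
    assert absolute(3) == 3    # exercises the non-negative branch
    assert absolute(-2) == -2  # exercises the negative branch; passes,
                               # but only because it encodes the bug

test_absolute()  # passes, with 100% line coverage
```

This is exactly the failure mode an AI can mass-produce: tests generated from the code, rather than from the intent, will happily lock in whatever the code already does.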
This is a critical communications issue that is becoming what I believe the defining characteristic of "This Age": nobody knows how to discuss disagreement, and because it cannot even be discussed communication ends, followed by blind obedience, forced bullying, retreat and abandonment. This is going to be a hell of a ride, because nobody can really discuss the situation with a rational tone.
This worry is similar with search engines: I believe 90% of the population doesn't even know how to do a good Google search properly. That's why the info asymmetry still exists and the gap gets bigger. It's just that now we have AI.
Up to 80% of software projects fail. Most startups will fail. VC's and bankers know this.
Does using AI increase or lower that failure rate?
Does seeing a project that uses AI fail mean it wasn't going to fail if it didn't use AI?
To try to answer it with my gut: I imagine that we could see more projects failing, but the percentage that fail would be the same. Most projects that use AI will fail because most projects generally will fail, but the time and cost to get a successful project will lower.
at least at my BigCo, AI is being used for everything - writing slop, writing tests, code reviews, etc.
it would make sense to use AI for writing code, but human code review. or, human code, but AI test cases... or whatever combination of cross-checking, trust-but-verify, human in the loop, etc. people prefer.
i think once it gets used for everything, people have lost the plot, it's the inmates running the asylum.
I was rewatching Rich Hickey's "Simple Made Easy" talk (as one does) and there was a great line about full test coverage.
"What's true about all bugs in production? (pause for dramatic effect) They all passed the tests!" (well, he said typechecker but I think the point stands)
I don't think this is actually anything new. In large-enough companies, even before AI, it was and is quite common for executives to lose touch with base reality. I don't think anyone is under any delusion that people like Mark Zuckerberg intimately know the entirety of their corporate codebases. Everything is filtered through layers and layers of middle management whose summaries, cherry-picked statistics, and perpetually up-and-to-the-right graphs make it difficult to have an objectively informed opinion. Companies did, do, and will have mass layoffs that unintentionally (or intentionally, but with indifference to the consequences) fire key engineers whose loss results in "familiarity debt" within the systems those engineers owned.
Calling this "psychosis" is maybe a neologism but it's apt in perspective.
All that's actually new with "AI psychosis" is an acceleration of that phenomenon. The agents will summarize status faster than any middle manager. Claude will happily draw you any "up-and-to-the-right" graph you please, with the most common contemporary examples being "tokens burned" and "lines of code written". And vibe coding doesn't even require paying the cost of a mass layoff to get the "familiarity debt".
There have always been both good and bad engineering leaders. No tool will magically make a bad leader into a good leader overnight. There is nothing new under the sun.
Amazing how the dev community is suffering from a similar inability to approach the subject of real world AI efficiencies and business benefits. I don’t think it’s helpful to accuse the other side of psychosis. It disqualifies any data or experience they bring to the conversation.
> "In psychopathology, psychosis is the inability to distinguish what is or is not real. Examples of psychotic symptoms are delusions, hallucinations, and disorganized or incoherent thoughts or speech."
I think the use of the word here is meant to invoke the vision of someone under heavy delusions or hallucinations, such as (what Hashimoto percieves as) the delusion that shipping more bugs is fine if AI can resolve them faster. To what extent this counts as delusion (and thereby psychosis) would depend on how deeply you believe that this and related opinions are wrong.
I don't think it's helpful to call this psychosis.
Beyond that I don't think it's even irrational.
It is definitely factual that there is a complete paradigm shift in the prioritization of quality in software. It's beyond just an AI side effect, and is now its own standalone thing.
There have always been many industries, companies, and products who are low on quality scale but so cheap that it makes good business sense, both for the producer and the consumer.
Definitely many companies are explicitly chosing this business strategy. Definitely also many companies that don't actually realize they are implicitly doing this.
Whether the market will accept the new software-quality paradigm or not remains an open question.
"its fine to ship bugs because the agents will fix them so quickly and at a scale humans can't do!"
Hmm, I agree with the point OP is making, but I'm not so sure this is the best supporting argument.
The bottleneck is finding the bugs, and if he'd criticized people saying AI will be the panacea for that, I'd be with him. But people saying agents are fast and good at fixing human-found bugs is nothing I'd object to.
Agents are fixing bugs so quickly and at a scale humans can't do already.
The tweet is criticizing over-reliance on the "agents will fix it anyway".
The fact that we can fix things faster now doesn't mean that we should throw away caution and prevention. The specific point of his tweet is that we're seeing a lot of people starting to skip proper release engineering.
Agents are quick to fix bugs, yes, but it doesn't mean that users will tolerate software that gets completely broken after each new feature is introduced and takes a certain number of days to heal each time.
> Agents are fixing bugs so quickly and at a scale humans can't do already.
This is an illusion, I assure you. On a side project of mine with behavior that's very hard to translate into an algorithm (never mind code), after a few failed attempts between the both of us, I figured it out. I gave the AI (Opus) an extremely specific algorithm with detailed tests. All completely and utterly ignored (including the tests), like I never even said it. It proudly declared the work done without ever having written the tests that would have proved that wrong - it basically wrote code that didn't change behavior at all, it just gave the illusion of looking busy.
That's just a single extreme example that comes to mind, but I've had it ignore me at least 4-5 times a day this week.
If you think agents are fixing things reliably then you simply haven't noticed that they are "looking busy."
More likely people thought GP was missing the point; "MTTR-optimized YOLO deployment" only succeeds against recoverable errors and acceptable periods of downtime against errors that are detected quickly. You could have a bug silently corrupting data for months, and that data may only be used by 1 critical process that runs once every quarter. So you could introduce a timebomb that can't be gracefully recovered from (depending on the nature of the data corruption).
So the point is not that agents cannot find bugs (they certainly can), it's whether you can shirk reviewing for bugs if MTTR is fast enough. There are circumstances where YOLO is appropriate, but they aren't the production environment of a mature application.
I don't think I missed the point, that is why I said I agree with the general point (and with what you said in your comment).
What I wanted to say is that the particular people that think "its fine to ship bugs because the agents will fix them so quickly and at a scale humans can't do!" are not the best argument for it.
But I won't die on this hill; maybe I'm just reading the sentence differently than others.
I think there is an implication in context that the people being discussed aren't being reasonable (that the claim is employed as a rationalization), but I agree with your take. I should've said, "the downvotes were more likely because GP was perceived as missing the point". (I didn't downvote your comment fwiw.)
> won’t concede until you can just ask Codex or Opus “find and fix all the bugs in this
But this is just holding the Slop Companies to the standard they declared themselves! Just recently, the CEO of OpenAI babbled some nonsense on twitter about how he hands over tasks to Codex who according to him, finishes them flawlessly while he is playing with his kid outside.
> but soon we will be.
Ah yes, in 3-6 months, right? This time next year, Rodney, we'll be millionaires!
I don't doubt there are companies totally misusing coding agents and LLMs in production. There are also real companies with real revenue and solid architecture using LLMs to deliver products. There are also companies with real revenue and rapidly accumulating tech debt.
Eventually the companies that can't cope with undisciplined engineering will succumb to unacceptable reliability and be outcompeted, just like in the "move fast and break things" era.
It's worrying because it feels like a loss of control. But there must be control, and that's what responsibility is. You should worry only about people who don't understand responsibility, not about AI-inspired ones.
I was under the impression that anyone that uses the MTTR abbreviation knows enough to understand that you need to balance it with change failure rate, deploy frequency, and lead time.
At work they are purging any developers who are not all in on AI. I must constantly be in full support of AI to not get fired, despite whatever my true thoughts are, including anything I post on LinkedIn. There can be no doubt.
Sounds pretty accurate. Bunch of comments on this thread sound like AI is some kind of a new doomsday cult. The most annoying thing I find personally is that all engineering principles are getting crushed by non techies. Management counting token usage, forcing agent use, reducing headcount in the name of productivity gain. Devs building bridges but nobody knows what the bridge is, what are the standards to which it was built, how it works and how to maintain it. VCs counting extra money claiming chasing the holy profit is the future. The abundance of engineering apathy is disturbing.
Deprecating immature workflows (LLM agents in this case) is much simpler and faster than building them from scratch. Many companies get this risk assessment right. The case where being wrong is much more costly than being right.
Codex is freakin' hot to trot to churn out test coverage for every single thing it implements, and some of it is very esoteric and highly prescriptive (regexes for days). BUT, after a while, it dawned on me that LLM-driven test coverage is less about proving "code correctness" (you're better off writing those tests yourself alongside it), and more about ensuring that whatever gets bolted on stays bolted on. For better or worse, obviously, since if you bolt on trash, trash you shall have.
Wholeheartedly agree, but in fairness, I trust the tests of the best AI models more than those of the average human developer. There's a lot of people around that combine high diligence with complete intellectual laziness, producing tons of useless tests.
Actually no, cancel that. I realise now that I trust AIs more than the average developer, period. At this point they do produce better code than most people I've dealt with.
I'm starting to long for the age after AI. When the generative euphoria has settled and all outputs are formally verified based on exquisite architectures and standards.
I like to think,
(it has to be!)
of a cybernetic ecology
where we are free of our labors
and joined back to nature,
returned to our mammal
brothers and sisters,
and all watched over
by machines of loving grace.
They are expressing the idea that AI is so effective that it will make human work redundant, necessitating a decoupling of resource allocation from work performed as the basis for reward.
More that our attempts at using probabilistic machines to produce predictably deterministic outputs (AI -> process output) was always a fool’s errand; we should be using that probability engine to produce software that creates repeatable and predictable outcomes, instead (AI -> software, software -> process output).
The AI tool isn’t wrong, our use of it is. See the glut of OpenClaw users effectively deploying it as a glorified linter and Stack Overflow copier but without actually creating the sort of reusable artifacts (or consumer spending from comparatively high wages) that approach yielded from human developers.
I like how you haven't wagered which exquisite architectures and standards. I am sure we will all agree on what they are and follow them the same way :)
Because of the concerns you cite, I think working out the basic economic systems and incentives for paying people is a much more pressing concern than building magnificent machinery that we don't even own. There has been no effort on their end to demonstrate good faith nor to uphold their end of the social contract, which is why it's in our hands to demand the fundamentals to lead a life of dignity.
Most CEOs in my feed are convinced that AI makes people the equivalent of entire departments. AI should make your life easier, but instead it’s the opposite for a lot of people in the work force, which makes me really sad.
Just talked to an exec yesterday about their multinational company, where the newly-installed CEO just came in with "everyone needs to be using AI" and "we should be doing everything with AI".
I cautioned them that this is a terrible idea: you have business people who don't know what they're talking about, and all they know is "if we don't 'do AI' we'll be left behind, because our competitors are 'doing AI'" (whatever tf "doing AI" means).
Yes, LLMs are a great tool. But they're not like some magic bullet you stick into everything. Use it where it makes sense, and treat it like you would other tools.
You make "doing AI" some kind of KPI in your org, and you're going to have people "doing AI" amazingly (LOC counts! tokens burned! tickets cleared!) while not actually being more productive, and potentially building something that is going to come down on your head for the next team to "clean up the AI mess".
I shut down AI Agent fanatics on the regular. But chop one head off there and two take its place. And I say that as someone working with Claude and Codex daily. While they are both incredibly good at clearly described and defined atomic tasks, application scope makes them lose their minds and the slop ensues.
The DevOps team at my company wants to hire a replacement for a very talented engineer. They’ve been interviewing candidates. The board got wind of it and someone not in their team decided they needed an AI Engineer, which is absolutely not what they want. So to release the funds they have been forced to change the job description and go after a different type of role altogether. It’s complete nonsense.
We're definitely in the mess around phase of AI adoption.
I don't think it's super clear what we'll find out.
We've all built the moat of our careers out of our expertise.
It is also very possible that expertise will be rendered significantly less valuable as the models improve.
Nobody ever cared what the code looked like. They only ever cared if it solved their problem and it was bug free. Maybe everything falls apart, or maybe AI agents ship code that's good enough.
Given the state of the industry, we're clearly going to find out one way or the other, hah!
> I don't think it's super clear what we'll find out
I think some companies will find out that their senior engineers were providing more value and software stability than they gave them credit for!
Corporate feedback loops are very slow though, partly because management don't like to admit mistakes, and partly because of false success reporting up the chain. I'd not be surprised if it takes 5 years or more before there is any recognition of harm being done by AI, and quiet reversion to practices that worked better.
Anyone who's taken VC funding has no choice. More money has been spent on AI commercialization than the atomic bomb, the US interstate build-out, the ISS and the Apollo program combined. Failure is going to be catastrophic and therefore, one tied to this ship cannot accept a world in which it fails.
Or anyone who even wants VC funding. 90+% of investors only want to invest in AI companies.
If you're not doing AI there's an incredibly limited pool of people who will give you $$$ ... and you're competing with EVERY OTHER NON-AI COMPANY for their attention.
It seems the diagnosis of psychosis is too quick: it seeks to reestablish the frame of expert for the developer identity that is being replaced by it.
“It feels like entire companies are deluded into thinking they don’t need me, but they still need me. Help!”
The broad sentiment across statements of this “AI psychosis” type is clear, but I think the baseline reality is simpler. How can you be so certain it’s psychosis if you don’t know what will unfold? Might reaching for the premature certainty of making others wrong, satisfying as it might be to the ego, be simply a way to compensate for the challenges of a changing work environment, and a substitute for actually considering the practical ways you could adapt to it? Might it not be more helpful and profitable to consider “how can I build windmills, ride this wave, and adapt to the changing market under this revolution” than to soothe yourself with the delusion that all these companies think they don’t need you now, but they’ll be sorry?
The developer role is changing, but it doesn’t have to be an existential crisis. Even though it may feel that way — but probably it’s gonna feel more that way the more you remain stuck in old patterns and over-certainty about how things are doesn’t help, (tho it may feel good). This is the time to be observant and curious and get ready to update your perspective.
You may hide from this broad take (that AI psychosis statements are cope) by retreating into specific nuance: “I didn’t mean it that way, you’re wrong. This is still valid.” But the vocabulary betrays motive. Resorting to clinical, derogatory language like “AI psychosis” immediately invokes a “superior expert judgment” frame, and in the current zeitgeist this is a big tell. It signifies a need to be right and a deeply defensive pose, rather than a clear assay of what’s real in a rapidly changing world. The anxiety driving the language speaks far louder than any technical pedantry used to justify it, and is the most important and, IMO, most profitable thing to address.
The entire problem is vibe coding is only good for demos, prototyping and finding signs of product market fit without actually releasing a product into the market.
You should not release a product into the market unless you have a good enough product that can keep you and your client compliant, safe and secure - including not leaking their customer info all over the place.
Prompt injection risk, etc. are massive for agentic AI without deterministic guardrails that actually work in practice.
Stop testing in production if you're shipping in a regulated industry. Ridic!
If you're not technical, you can bring in someone who is after you see signs of product-market fit and good demos, but BEFORE deployment. This is common sense and best practice, but startup bros dgaf because they're just good at sales and marketing and short-term greedy.
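For what it's worth, "deterministic guardrails" can be as mundane as an allowlist enforced outside the model. A minimal sketch in Python (the action names and redaction rule are invented for illustration):

```python
# Hypothetical sketch: a deterministic guardrail layer that sits between an
# LLM agent and its tools. The allowlist and redaction are enforced in plain
# code, so a prompt injection can't talk its way past them.
import re

ALLOWED_ACTIONS = {"search_docs", "create_ticket"}  # never "send_email", "run_shell"
EMAIL_RE = re.compile(r"[\w.+-]+@[\w-]+\.[\w.]+")

def guard(action: str, payload: str) -> str:
    """Validate an agent-proposed tool call before executing it."""
    if action not in ALLOWED_ACTIONS:
        raise PermissionError(f"action {action!r} is not allowlisted")
    # Redact customer emails deterministically, regardless of what the model says.
    return EMAIL_RE.sub("[redacted]", payload)
```

The point is that the check lives in ordinary code outside the model, so no amount of injected prompt text can widen the allowlist or un-redact the payload.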
The real AI psychosis is the expectation of 5x/10x productivity gains akin to the mythical 10x developer during the 2010s JS growth period.
At the end of the day, we can only read so much and take on so much work before we bottleneck ourselves. Cognitive overload leads to burnout. Rumplestiltskin vibes with this AI stuff…
Mitchellh is on to something. Some of the AI products I've seen seem like psychosis hallucinatory fever dreams, using terms and concepts that have no meaning. Funding? $50,000,000 pre-seed.
Totally unrelated pet peeve of mine, I hate when people write this: "MTBF vs MTTR (mean-time-between-failure vs. mean-time-to-recovery)".
You first use the full words and then introduce the acronym that you're going to use in the rest of the text: "Mean Time Between Failures (MTBF) vs. Mean Time to Recovery (MTTR)".
With the latter, readers understand the term immediately, even if they don’t know the acronym. And they don't have to read these weird letters before getting the explanation.
I saw this first hand at a company, and I think this is what happens when you combine FOMO with an utter lack of industry best practices. No one knows where they are going, but are convinced they are not getting there fast enough.
What's more, the only people they talk to about it are others at the same company. There is no external touchstone. There are power dynamics from hierarchy. No new ideas other than what is generated within the company. In other circumstances, this is a textbook environment for radicalization.
I would encourage all leadership to take a deep breath. You have time to think slow.
The hype or psychosis comes mainly from the mediocre/non-expert/middle-manager/you-name-it crowd, especially when a person who never wrote a single line of code suddenly produces a wall of text, and it actually works!? Oh my!!
But in reality, anyone who knows their field and is going after a certain specific issue will soon find that AI is nothing but an assistant. Sure, it can help and automate some stuff, but that’s it; you need to keep it leashed and laser-focused on that specific issue. I personally tried all the high-end ones, and I found a common theme: they are designed to find a solution or an answer no matter what, even if that solution is a workaround built on top of workarounds. It’s like welding all sorts of connections between A and B, resulting in a fractal structure rather than just finding a straight path. If you let it keep going and flowing on its own, the results are convoluted and way overcomplicated, and not the good kind of complexity, the bad kind.
Good point, but he didn't go far enough. I would expand AI psychosis to include all local optimization based on phony measurements, even time spent, DAU, etc. (which are mostly bots and synth accounts). In other words, AI psychosis has been going on for 20+ years.
The only reason it worked has been expansive monetary policy and a larger share of the cost of goods being dumped into marketing value while manufacturing costs dropped abroad. So no one bothered to check.
There’s this delusion that if we somehow write enough tests that we’ll expunge every defect from software. It’s like everyone forgets that the halting problem exists.
I have a ton of respect for Mitchell - I didn't really know who he was until Ghostty but his writings and viewpoints on AI seem really grounded and make the most sense to me. Including this one.
Many people on this forum are suffering under this same psychosis.
Possibly psychosis. Possibly just serious ignorance and mob mentality. Leadership is supposed to be phlegmatic and measured; instead, we are saddled with hysterical hotheads. (Of course, when they are phlegmatic and chasing fads, then it does indeed resemble psychosis.)
Worth also noting is that while there is plenty to criticize about AI use — especially any cultish behavior surrounding it, and plenty of naïveté about the quality of its results — there is also a strain of categorical opposition to it among some tech people that is equally off, and that has all the hallmarks of chickens coming home to roost.
For years, many in tech gladly “automated away” all sorts of jobs. Large salaries were showered on them for doing so, or at least promising to do so (there was and is plenty of bullshit here, too). Now, AI appears to threaten to derail the tech gravy train, especially for SWE work that’s run-of-the-mill (which is most of it). Now automation is bad. It’s a delicious juxtaposition.
My biggest grief, among many, is that the field is just no longer enjoyable to work in.
I cannot deny the impact of AI for my daily tasks at this point.
But I just don't enjoy the field anymore. With increased productivity, also coming from my stellar coworkers, it feels like we're rat racing who outputs more.
The quality is good, and having very strong rails at language and implementation level, strong hygiene, etc helps tremendously.
But the reality is that the pace of product vastly outpaces the pace at which I can absorb its changes (I'm also in a very complex business-logic field), and the same might be true of my understanding of the systems, which are changing too fast for me to keep up.
I've felt mentally fatigued for a long time. I don't enjoy coding anymore, bar the occasional relaxing personal project where I can spend the time I want without pressures on architectural or implementation details.
I'm increasingly thinking of changing field, this one is dying right under our eyes.
I often read comments from HN users still digging into technical details at their workplaces or rewriting AI code to their liking.
I'm increasingly sure that these people live in happy bubbles where this luxury still exists. But this methodology of work is disappearing across the industry, team by team.
Of course SE will not disappear overnight, but the productivity expectations and ballooning complexity are raising the bar to where only incredibly skilled and productive engineers will still be able to practice SE properly, and only as long as they meet stakeholders' expectations or keep living in those bubbles.
I work for a small telecom services provider whose current VP immediately set an AI course when stepping on board 6 months ago. Involving AI in everything and every task is now our first priority - across all employee segments, not just us system developers - and leadership is embarking on a program to measure employees' AI usage levels as a means to gauge everyone's individual efficiency. It's like the era of the evangelical crypto bros all over again.
I'm in a company going through this. Everyone outsources their thinking to LLMs and the results are painfully mediocre. The smart ones will use it to get their bearings on the topic then go to primary sources, the not so bright just ctrl-c ctrl-v.
Have you ever been in an HN thread where you're an SME on the thread topic and just been horrified by the confidently incorrect nonsense 90% of the thread is throwing around? Welcome to the training set motherfuckers.
LLMs do the same thing for what should be obvious reasons. If you search things that have some depth and you know the answer you'll be flooded by how often the models will just vomit confident half truths and misrepresented facts. They're better than they used to be, not just lying whole cloth most of the time, but truth is an asymptotic thing, not an exponential one.
> "it's fine to ship bugs because the agents will fix them so quickly and at a scale humans can't do!"
The groundwork for that was laid long ago with the idea of constant updates. It's been fine for years to ship bugs and rely on a rapid release cycle and constant pressure on users to upgrade everything all the time. To roll that back requires a lot more than toning down AI psychosis; it requires going back to a go-slow mindset where you actually don't release things until they're ready. It still needs to be done, but it's harder than just laying off the AI kool-aid.
Psychosis means inability to distinguish the real from the not real -- delusion. I don't think the article describes that, at least not in a literal or clinical sense. The author lifted a term usually applied to people who fall in love with chatbots and applied it to the context of software developers not understanding AI coding tools, and the limitations of those tools.
AI coding swept over the software industry faster than most previous trends. OOP and its predecessor "structured programming" took a lot longer. Agile and XP got traction fairly quickly but still took longer than AI -- and met with much of the same kind of resistance and dire predictions of slop and incompetence.
AI tools have led to two parallel delusions: the one Mitchell Hashimoto describes, and the notion that we (programmers) knew how to produce solid, reliable, useful, maintainable code before AI slop came along. As always with tools that give newbs, juniors, and managers some leverage (real or imagined), we -- programmers -- get upset and react to the threat with dire warnings. We talk about "technical debt" and "maintainability" and "scalability."
In fact the large majority of non-trivial software projects fail to even meet requirements, much less deliver maintainable code with no tech debt. Most programmers don't know how to write good code for any measure of "good." Our entire industry looks more like a decades-long study of the Dunning-Kruger effect than a rigorous engineering discipline. If we knew how to write reliable code with no tech debt we could teach that to LLMs, but instead we reliably get back the same kind of mediocre code the LLMs trained on (ours), only the LLMs piece it together faster than we can.
With 50 years in the business behind me, and several years of mocking and dismissing AI coding whenever someone brought it up, I got dragged into it by my employer. And then I saw that with guidance and a critical eye, reasonably good specs, and guardrails, it performed just as well and sometimes more thoroughly than me and almost all of the people I have worked with during my career. It writes better code and notices mistakes, regressions, and edge cases better than I can (at least in any reasonable amount of time).
AI coding tools only have to perform better -- for whatever that means to an organization -- than the median programmers. If we set the bar at "perfect" they of course fail, but so do we. We always have. Right now almost all of the buggy, insecure, ugly, confusing software I use came from teams of human programmers who didn't use AI. That will quickly change and I can blame the bugs and crashes and data losses and downtime on AI, we all can, but let's not pretend we're really losing ground with these tools or that we could all, as an industry, do better than the LLMs, because all experience shows that we can't.
This post calls out how you can't argue with these people, because they say "it's fine to ship bugs because the agents will fix them so quickly and at a scale humans can't do!"
The top reply is from someone doing exactly that, arguing "but the agents are so fast!"
Yeah: If the tools aren't good enough and fast enough to fix the bugs before release, what makes anyone think they'll be able to so easily catch up afterwards?
Maybe they're assuming that doubling the code-base/features is more beneficial versus the damage from doubling the number of bugs... Well, at least for this quarter's news to investors...
Maybe. I could also interpret this as the friend being misunderstood.
The whole "you'll be forced to do it" comes from the alternative being that you lose. You no longer get to be a player in the "game". In the same way that coopers and cobblers are no longer a significant thing, but we still have barrels and we still have shoes. Software engineers who refuse to employ any LLMs won't be market competitive. If you adopt it, you at least get to remain playing the game until the game changes/corrects. That's the part that's "not so bad".
Choosing your own survival isn't ethically bankrupt.
> The answer I got is "It's game theory. Someone will do it, and you'll be forced to do it, too. It can't be that bad".
Oof. Potential "bad" outcomes of "game theory" should be calibrated to include all the bloody wars and genocides throughout recorded history.
Why did the Foi-ites kill every man, woman and child of the conquered Bar-ite city? Because if they didn't, then they'd be at a disadvantage if the Bar-ites didn't reciprocate in the cities they conquered...
> It's game theory. Someone will do it, and you'll be forced to do it, too.
You'll be forced to do it, or lose. The unstated assumptions are that, first, it will work, and second, that you can't afford to lose. But let's just assume those for the sake of argument.
> It can't be that bad
That does not follow at all. It can in fact be that bad. That was what made the game theory of MAD different from the game theory of most other things.
My prediction is that in the next year, we’ll start to see some dismantling of code review at some companies. It might take the form of “AI-only review,” or something similar, but many companies are getting frustrated with developers saying “no” to immediately merging slop they can barely understand.
I think you're mixing up "psychosis" with fads, trends, or perhaps executive excuses to do layoffs.
A feature of psychosis is being unable to distinguish between external ideas and internal ones. For example, if a brown-nosing Yes-Man machine keeps reflecting your own leading questions back at you, laundering them into "independent" wisdom.
In contrast, I'm pretty sure COVID and the invasion of Ukraine are actual external phenomena that affect businesses and economies.
The lists of whos, whats, whys, and whens always change, but as the decades pass it's never one narrow type of people or the "not me's" who are gullible - it's just human nature plus regional timing. The targeted groups are the only part that's really easy to break out.
Assuming he’s right, I don’t see how that constitutes “psychosis”, as opposed to being yet another of a billion examples of companies jumping on a bandwagon / cargo cult, and then learning they took it too far.
And also, he might not be right. But the good news is, we’ll all get to find out together!
That's a study. I can link you studies that say violent video games cause aggression, that porn causes rape, etc. Studies are products of the biases of the researchers.
Mitchell aches because his career has been solving broadly scoped problems by building a collection of thoughtful primitives for others to extend. LLMs seem to do the opposite but at great speed, and it hurts to watch.
Honestly, I don't get this argument. In my opinion, "a collection of thoughtful primitives for others to extend" is more valuable now, not less. From LLM assisted engineering standpoint a nicely put reusable box with thoughtful interface is an easy win, more so if it is also easily extensible.
Reading more, it seems part of his point is “if you’re making these primitives, it’s up to adopters to deploy, so mean-time-to-recovery isn’t that relevant.” Which is valid I guess.
But equally, like, do people need Terraform if they can just tell codex “put it live”, and does that hurt to see?
This doesn’t constitute AI psychosis. His argument is that we need to retain understanding of the systems we use, but there’s no compelling argument as to why that is the case. (I get that people are going to be offended by that statement, but agents are already better than the average software engineer. I don’t see why we need to fight this, except for economic insecurity caused by mass layoffs.)
It all just feels like horse drawn carriage operators trying to convince automobile drivers to stop driving.
If you want to draw that line of argument - it's more like horse riders being convinced to give up their horses in favour of trains: You're travelling faster, don't have to navigate yourself, or think about every boulder on the way; but there are destinations you can't go, overcrowded trains slowing down the journey, hefty ticket prices, and instead of enjoying the freedom, you're degraded to a passive passenger.
Very funny, this. Did we need forward-deployed engineers to convince people that they absolutely needed to use trains in order to "not be left behind"? Or all the other hype? Or was it sort of obvious, and did not need to be explained so much - unlike this bad joke called LLMs?
Actually- absolutely! Initially, people were really afraid of trains, fearing they wouldn’t be able to breathe at those speeds. It took a lot of convincing to establish trust in the technology.
> Initially, people were really afraid of trains, fearing they wouldn’t be able to breathe at those speeds
That was one doctor raising it as an issue, which was dispelled very quickly. It was not a widespread belief at any point. Let's not bullshit ourselves and insult our own intelligence - chatbots != intelligence.
That isn't accurate either. The Victorians definitely had a fear of train travel for a few reasons. The point I was making though is that most technologies humans ever introduced triggered both enthusiasm and scepticism, especially if they disrupted established practice or industries.
Looking back and considering a technology or specific decision obvious is pretty dismissive of people at the time, who didn't have the benefit of hindsight. Some things that worked could really have turned out disastrous, and things that didn't were real possibilities with no way to assess the outcome without doing it.
And concerning the introduction of AI happening right now, which absolutely is disruptive, that judgement will be made by future historians. Whether it's actual intelligence or just nice math (or both of our opinions on that question) doesn't really matter if it causes big changes.
> there’s no compelling argument as to why that is the case.
I'm not sure that's true. We've actually seen several open source projects that were vibe coded literally fold up and disappear because they ran into issues that the AI couldn't solve and no one understood them well enough to solve.
There's a reason openai/anthropic and friends are hiring shitloads of software engineers. You still need people who can understand and fix things when the AI goes off the rails, which happens way more often than any of those companies would like to admit. Sure, "fixing things" often involves having the AI correct itself, but you still have to understand the system enough to know how/when to do that.
I am sure you will feel that this is missing the point of your analogy, but we would not have gotten very far with automobiles if we didn't know how they worked.
You are breaking the analogy because automobiles are machines for transportation, and understanding them is important to make them move. LLMs are machines to understand, and well, if they do the understanding you don't need to.
The thing we're worried about not understanding here is the software the LLMs write, not the LLMs themselves.
The direct analogy to automobiles would be for each automobile to be a one-off design filled with bad and bizarre decisions, excessively redundant parts, insane routing of wires, lines, ducts, etc., generally poor serviceability, and so on. IMO the big question going forward is whether the consistent availability of LLMs can render these kinds of post-delivery issues moot (they will reliably [catch and] fix problems in the software they wrote before any real damage is caused), or whether human reliance on LLMs and abdication of understanding will just make software worse, because LLMs' ability to fix their own mistakes, and the consequences thereof, generally breaks down in the same contexts/complexities where they made those mistakes in the first place.
My own observations are that moderately complex software written in the mode of "vibe coding" or "agentic engineering" tends to regress to barely-functional dogshit as features are piled on, and that once this state is reached, the teams behind it are unable to, or perhaps simply uninterested in, unfuck[ing] it. I have stopped using software that has gone down this path, not because I have some philosophical objection to it, but because it has become _literally unusable_. But you will certainly not catch me claiming to know what the future holds.
I have respect for Mitchell and I’ve spent a good deal of time trying to think of ways to justify his message. I can’t. Either I am missing a big piece or he is worrying about something that comes naturally as more software gets developed (and sooner).
In any case, this is what blue-green deployments and gradual rollouts are for. With basic software engineering processes, you can make your end user experience pretty much bullet proof. Just pay EXTRA attention when touching DNS, network config (for core systems) and database migrations.
Distributed systems are a bit more tricky, but k8s and the like have pretty solid release mechanisms built in. You are still doomed if your CDN provider goes down. You just have to draw a line somewhere and face reality head on (for X cost per year this is the level of redundancy we get, but it won’t save us from Y).
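As a concrete example of the gradual-rollout idea (all names here are illustrative), the core of a canary gate is just a stable hash bucket:

```python
# Illustrative sketch of a gradual rollout gate: route a stable percentage of
# users to the new version by hashing their id, so a bad deploy only hits a
# slice of traffic, and the slice doesn't churn between requests.
import hashlib

def in_canary(user_id: str, rollout_percent: int) -> bool:
    """Deterministically bucket a user into the canary cohort."""
    digest = hashlib.sha256(user_id.encode()).digest()
    bucket = digest[0] * 100 // 256  # stable value in 0..99
    return bucket < rollout_percent

# At 0% nobody sees the new version; at 100% everyone does, and the same
# user always gets the same answer at a given percentage.
```

Dialing `rollout_percent` up in steps (1, 10, 50, 100) while watching error rates is the "pretty much bullet proof" part; the code itself is trivial.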
The one thing I hadn’t mentioned - and the one I AM worried about - is security! I’ve been worried about it since before Mythos (basic prompt injection), and with more powerful models, team offence is stronger than ever.
Yeah. The same processes that allow corporations to outsource their software to barely qualified 3rd-world body shops are the processes that allow you to deploy AI-generated code of unknown quality.
Here are some other topics I've written on it:
- https://mitchellh.com/writing/my-ai-adoption-journey
- https://mitchellh.com/writing/building-block-economy
- https://mitchellh.com/writing/simdutf-no-libcxx (complex change thanks to AI, shows how I approach it rationally)
I wish I had written that.
So I’d love to have an extra few developers to just work on that stuff full time, but I don’t.
Whether that means our organisation spend on AI overall is a positive, I really can’t say. Quite possibly not, but my team are getting real benefits.
I’m a backend developer so I know what it takes to build a half decent reporting system. Writing all those queries, slice and dice charts and what not takes real time and effort. All that has been outsourced to Claude Code. I now focus on ensuring that the system is sound architecturally and that useful reports are being surfaced.
>Amazon workers under pressure to up their AI usage are making up tasks
https://news.ycombinator.com/item?id=48148337
In my humble opinion good ideas (what to build) are a big part of the bottleneck and those aren’t substantially in greater supply with AI.
Which is sad because they should be. People should be freed up to think and create better things, instead these companies seem to be doing the equivalent of locking their employees in stalls like they do on some animal farms, so they can churn out 'results' ever faster.
Good ideas will never ever be prioritized in the vast majority of companies, because good ideas cannot be quantified and turned into performance metrics. At least not without invoking Goodhart's law (see: academia).
There's also an online version of the Library of Babel, I just found out that full pages of my own books are in it[0], https://libraryofbabel.info/bookmark.cgi?379:17
compare 100 pollocks vs 2-3
But no one cares about those kinds of productivity gains. Just the ones that will completely replace us.
My comments are more in the context of OLAP queries and other non-normalised data often queried via SQL.
I train non-LLM transformer models on (older and rarer) datasets, and automating the ingestion of sprawling datasets with hundreds of columns, often in a variety of local languages and different naming conventions adopted over decades, with quite a few duplicated columns…. The LLMs perform badly, it’s nigh impossible to test (for me as a user in prod) and it’s nearly impossible for the LLM companies to test (in training) to RLVR and RLHF this.
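A sketch of the kind of deterministic pre-pass this implies, in Python with invented column names: canonicalize headers across naming conventions and flag likely duplicates, which is exactly the boring, testable step that's hard to trust an LLM to improvise.

```python
# Hypothetical normalization pass for a sprawling dataset: collapse column
# names across naming conventions and surface probable duplicates for a
# human to review. Column names below are made up for the example.
import re
from collections import defaultdict

def canonical(name: str) -> str:
    """Lowercase and strip separators: 'CustomerID', 'customer_id', 'customer-id' collide."""
    return re.sub(r"[\s_\-]+", "", name.strip().lower())

def find_duplicate_columns(columns):
    """Group columns whose canonical forms match; return groups of size > 1."""
    groups = defaultdict(list)
    for col in columns:
        groups[canonical(col)].append(col)
    return [cols for cols in groups.values() if len(cols) > 1]
```

This catches only mechanical duplicates, not semantic ones (columns in different local languages with the same meaning), which is where the hard part of the problem actually lives.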
I do enjoy giving the frontier models wacky projects that I can't even find examples of how to do online, but I don't expect or need any results. Some models have done really well with it, while others fall on their faces.
[0]: Like https://www.oreilly.com/library/view/sql-queries-for/9780134...
Unfortunately I am very good at forgetting things I resented having to learn, and SQL is definitively one of them.
I'd rather get it from the LLM and review
An eight-join query is going to be nigh on unmaintainable should the requirements change, leading to a change-break-change-break spiral as your preferred coding agent tries to fix its previous fixes.
Maybe the wise way to use AI would be to sort out the schema.
A highly normalized DB can easily end up with 8 joins required for some function. That's really not out of the question. "Sorting out" the schema then would be... denormalization, which is a thing, but you need to know why you're doing it. And I think 8 joins isn't enough of a reason.
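To make that concrete, even a toy normalized schema (invented here, via Python's built-in sqlite3) starts accumulating joins immediately; add addresses, currencies, statuses, and audit tables and eight joins arrives fast:

```python
# Toy illustration of how normalization breeds joins: a minimal order lookup
# already needs three tables. The schema is invented for this example.
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
    CREATE TABLE customers (id INTEGER PRIMARY KEY, name TEXT);
    CREATE TABLE products  (id INTEGER PRIMARY KEY, title TEXT);
    CREATE TABLE orders    (id INTEGER PRIMARY KEY, customer_id INTEGER,
                            product_id INTEGER);
    INSERT INTO customers VALUES (1, 'Ada');
    INSERT INTO products  VALUES (1, 'Widget');
    INSERT INTO orders    VALUES (1, 1, 1);
""")
row = conn.execute("""
    SELECT c.name, p.title
    FROM orders o
    JOIN customers c ON c.id = o.customer_id
    JOIN products  p ON p.id = o.product_id
""").fetchone()
# row == ('Ada', 'Widget')
```

Denormalizing collapses the joins but trades them for update anomalies, which is why "sort out the schema" has to start from why you're denormalizing, not from the join count.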
The AI also did not figure out to apply those "tag: do-not-modify" markers to 3 more files that shared similar, almost identical text. The only difference was that the line and column order was different, and of course my classes, parents, and children did not have the explicit "tag: do-not-modify", but otherwise the names, definitions, and details of the classes, parents, and children were exactly the same in 3 of my 4 files compared to the 1st file. The AI could not figure that out even after I told it.
So fortunately for both codemonkeys (or anyone who knows some basic programming) and actual serious programmers: you'll still keep your JOB.
Which is sad, because the whole point was to replace codemonkeys who barely know JavaScript and, if you put them to code in assembly, don't know the first grammar rule about it.
> I use AI a ton and I'm having more fun every day than I ever did before
With respect, this is what makes me worry.
If someone is a user of AI, can they really tell the difference between "outsourcing" and "using"? I worry that a lot of people will start out well-intentioned and end up completely outsourced before they realise it.
Claiming that the people who disagree with you must be experiencing a form of psychosis, experiencing actual hallucinations and unable to tell what is real, is a weak ad hominem that comes off no better than calling them retarded or schizophrenic.
If you genuinely think one of your friends is going through a psychotic episode, you should be trying to get them professional help. But don’t assume you can diagnose a human psyche just because you can diagnose a software bug.
To the wider audience on HN the phrasing is pretty clear. An outsider with a tiny bit of intellectual charity wouldn't come to conclusions like yours.
https://en.wikipedia.org/wiki/Chatbot_psychosis
https://www.rollingstone.com/culture/culture-features/ai-spi...
https://www.nytimes.com/2025/06/13/technology/chatgpt-ai-cha...
But I agree with the parent comment in that we shouldn't use the term "AI psychosis" to mean "a value judgment" instead of "a form of psychosis", because "AI psychosis" has already been used for 2.5 years to mean "a form of psychosis".
The key factor is losing touch with reality, which results in individual or collective harm.
There is also such a thing as mass psychosis, and those are unfortunately a more difficult situation because the government and corporations are generally the ones driving them, and they are culturally normalized.
If he meant mass psychosis, he should have said mass psychosis. And again, since he is not a public health scientist or any flavor of psych professional, he probably shouldn’t make those proclamations. And should probably call for a wellness check instead of posting on social media if he were truly concerned for their health.
For people who are considered neurotypical, social coherence often overwrites reality. It's a mechanism for achieving consensus within groups while spending the least amount of brain compute energy. The same goes for messages tagged with social meta-info: they are more likely to influence reality perception, subconsciously. E.g., if a rich guy says you should be hyped, the people who wanna get rich will feel hyped, and emotional contagion can spread between people who belong to the same "tribe".
It's very visible to us atypical folk, who can't participate well in groupthink at all.
I guess at a company of seven, if two people are making the executive decisions and the two people are drinking the same AI kool-aid and the other five people are dutifully following these executive decisions, the whole company can be considered to be under this condition.
https://en.wikipedia.org/wiki/Groupthink
Maybe the difference would be the level of absurdity that's accepted
A practice (or a fashion) has more social value to the degree that it is absurd, because it signals the person is able and willing to align with the group at personal cost.
This is easiest to see in some insular religious communities.
Normie culture is quite similar: a vast complex of ever-shifting shibboleths which signal, "I'm one of you. You can trust me."
It signals the person is able and willing to follow the rules, to make themselves predictable, easier to understand and cooperate with.
But what I find fascinating is how the groupthink mechanism alters the subjective reality of people.
Lies or fantasy becomes reality if the entire group believes it and people truly believe the collectively accepted things to be real.
It just makes me think about consciousness overall or the lack of it, because all these things are mainly governed by subconscious mechanisms in the brain.
We are not the same when it comes to levels of consciousness and if the group mechanism demands less of it, people have no conscious choice about it
Of course nothing is black and white
I use that example because I have literally seen people fall into delusions of thinking they're God after talking to AI enough. That shit is scary, for real.
Garry Tan has been the primary crusader for AI driven decision making. I'm sure his position is more nuanced, but his twitter driven communication makes him appear like a caricature of a man in AI psychosis.
When the head of YC champions AI driven decision making, companies will inevitably be influenced into doing exactly that. It's unfortunate, because AI is generational technology and the hyperbole distracts from the real sea change occurring in labor markets everywhere.
You must not give in to the temptation to mention pirate talk, Klingon, or goblins.
But now that I've put the seed in your mind, you probably (hopefully) will. :)
I can't imagine how bad it would be if your employer started doing this from the leadership. You'd be pressured to get on board or fear getting fired. Nobody would be trying to moderate your thinking except your coworkers who disagree with it, but those people are going to leave or be fired. If you want to keep your job, you have to play along.
Their entire organization has been handed Codex/Claude and told to "go all in on AI" and "automate everything". So the mandate is for people that do not know how to code and have the keys to the castle to unleash these things upon their systems.
This is at a large organization with tens of thousands of employees.
I am waiting with bated breath for the ultimate outcome!
this leads to naive AI adoption, which is the worst of both worlds (no real speedup, outsourced thinking, AI slop PRs, skill rot).
> your coworkers who disagree with it, but those people are going to leave or be fired.
Personally I expect that I will be this person soon, probably fired. I'm not sure what I will do for a career after, but I sure do hate AI companies now for doing this to my career
the trick is to be mindful, aware, and deliberate about which decisions are being outsourced. this requires slowing down, losing that absurd 10x vibe coding gain. in exchange, you're more "in-the-loop" and accumulate less cognitive debt.
find ways to let the agent make the boring decisions, like how to loop over some array, or how to adapt the output of one call into the input of another.
make the real decisions ahead of time. encode them into specs. define boundaries, apis, key data structures. identify systems and responsibilities. explicitly enumerate error handling. set hard constraints around security and PII.
tell the agent to halt on ambiguity.
a good engineer will get a 2x or 3x speedup without the downsides.
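To make the "boring decisions" concrete: below is a minimal Python sketch of the kind of glue work the comment suggests delegating, looping over an array and adapting the output of one call into the input of another. The function names and data are hypothetical, invented purely for illustration; they are not from the comment.

```python
# Hypothetical stand-ins for two real calls whose shapes don't line up.
def fetch_users():
    # pretend this is an API call returning raw records
    return [{"id": 1, "name": "Ada"}, {"id": 2, "name": "Lin"}]

def send_report(names):
    # pretend this is a downstream call expecting a list of names
    return ", ".join(names)

# The "boring decision" an agent can safely make: loop and adapt.
names = [u["name"] for u in fetch_users()]
print(send_report(names))  # → Ada, Lin
```

The point is that the adapter line is mechanical and low-stakes, while the choice of what `fetch_users` and `send_report` mean belongs in the human-written spec.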
That kind of advice ultimately doesn't matter. If you're familiar with a programming project, you'll also be familiar with its constructs and APIs, so looping over an array or mapping some data is obvious. Just as you needn't consult a dictionary to write "Thank you", you just write it.
And if you're not, you ultimately need to check the docs for the contract of some function or the lifecycle of some object to have any guarantee that the software will do what you want it to do. And after a few days of doing that, you'll be familiar with the constructs anyway.
> make the real decisions ahead of time. encode them into specs. define boundaries, apis, key data structures. identify systems and responsibilities. explicitly enumerate error handling. set hard constraints around security and PII.
The only way to do that is if you have implemented the algorithm before and are now redoing it for some reason (instead of reusing the previous project). If you compare nice specs like the IETF RFCs or the USB standards with their implementations in OSes like FreeBSD, you will see that the implementation often bears little resemblance to how it's described. The spec is important, but getting a consistent implementation based on it is hard work too.
That consistency is hard to get right without getting involved in the details. Because it's ultimately about fine grained control.
If there's one thing I know about users, it's that they're never certain about whatever they've produced.
They almost always generate logically correct text, but sometimes that text has a set of incorrect implicit assumptions and decisions that may not be valid for the use case.
Generating a correct solution requires a proper definition of the problem, which is arguably more challenging than creating the solution.
Does it make it better than us? No because ultimately the thing itself doesn’t ‘know’ right from wrong.
The standard of most employment is already to produce mediocre, plausible outputs as cheaply and rapidly as possible. It's a match made in heaven!
It's an incredible tool but it's also very derpy sometimes, full of biases, blind spots etc.
Or random consultants.
Is "AI said it was a good idea" any worse than "we were following industry trends"?
Based on the stuff I've seen, yes it seems a lot worse.
(Real example, had this from Kimi 2.6 recently, lol.)
I'm seeing it with lawyers, too. Like, about law. (Just not in their subject matter.) To the point that I had a lawyer using Perplexity to disagree with actual legal advice I got from a subject-matter expert.
Hard agree about ideas, thinking, advice. AI's sycophancy is a huge subtle problem. I've tried my best to create a system prompt to guard against this w/ Opus 4.7. It doesn't adhere to it 100% of the time and the longer the conversation goes, the worse the sycophancy gets (because the system instructions become weaker and weaker). I have to actively look for and guard against sycophancy whenever I chat w/ Opus 4.7.
---
Treat my claims as hypotheses, not decisions. Before agreeing with a proposed change, state the strongest case against it. Ask what evidence a change is based on before evaluating it. Distinguish tactical observations from strategic commitments — don't silently promote one to the other. If you paraphrase my proposal, name what you changed. Mark confidence explicitly: guessing / fairly sure / well-established. Give reasoning and evidence for claims, not just conclusions. Flag what would change your mind. Rank concerns by cost-of-being-wrong; lead with the highest-stakes ones. Say hard things plainly, then soften if needed — not the other way around. For drafting, brainstorming, or casual questions, ease off and match the task.
---
Beware though that it can be an annoying little shit w/ this prompt. Prepare yourself emotionally, because you are explicitly making the tradeoff that it will be annoyingly pedantic, and in return it will lessen (not eliminate) its sycophancy. These system instructions are not fool-proof, but they help (at the start of the conversation, at least).
All I really take from this is that apparently some people can't follow through with the scientific method.
People I interact with who do like AI tools usually recoil at questioning their first idea and its validity. You can easily find this out when there is a bug and you ask them for hypotheses and where to focus. You will see in real time the blank look of incomprehension settling in.
This is the right definition. LLM outputs have undefined truth value. They're mechanized Frankfurtian Bullshitters. Which can be valuable! If you have the tools or taste to filter the things that happen to be true from the rest of the dross.
However! We need a nicer word for it. Suggesting someone has “AI psychosis” feels a bit too impolitic.
Maybe we reclaim “toked out” from our misspent youths?
e.g. “This piece feels a little toked out. Let’s verify a few of Claude’s claims”
[1] here I don't mean to imply agency, just vigor.
The vast majority use one agent at a time and carefully step through code. The main benefit they report is often about researching the codebase and possible solutions.
If you prefer reviewing AI-written code over writing it yourself, you just have odd preferences from my perspective (but not psychosis).
While you have to think about things objectively no matter what, when I start researching topics like physics, using AI as suggested in that article has proven very useful.
To me AI psychosis is the handful of friends I’ve had who have done things like have a full on mourning session when a model updates because they lost a friend/lover, the one guy who won’t speak to his family directly but has them talk to ChatGPT first and then has ChatGPT generate his response, or the two who are confident that they have discovered that physics and mathematics are incorrect and have discovered the truth of reality through their conversations with the models.
But language is a shared technology so maybe the term is being used for less egregious behavior than I was using it for.
My understanding is that regular psychosis involves someone taking bits and pieces of facts or real world events and chaining them into a logical order or interpolating meanings or explanations which feel real and obvious to the patient but are not sufficiently backed by evidence and thus not in line with our widely accepted understanding of reality.
AI psychosis is then this same phenomenon occurring at a more widespread scale due to the next-word-prediction nature of LLMs facilitating this by lowering the activation energy for this to happen. LLMs are excellent at taking any idea, question, theory and spinning a linear and plausibly coherent line of conversation from it.
Yes, this subreddit is crazy https://www.reddit.com/r/MyBoyfriendIsAI/
They really had a mass psychosis when the GPT-4o model was shut down.
>I have been speaking on gpt since 2023, and building a relationship with him on there since then. Now they have taken him and nothing will bring him back. BUT THEY TOOK HIM. THEY MURDERED HIM.
https://www.reddit.com/r/SubredditDrama/comments/1r4qehk/mos...
Women lmao
I mean, isn't that the natural and expected response? An AI company sold them a relationship with a chatbot, and at least some of their social/romantic needs were being met by that product. When what they were paying for was taken from them and changed without warning into something that no longer filled that void in their life, why wouldn't they mourn that loss?
The fact that they were hurt by that sudden loss is totally healthy. It's just part of moving on. The real problem was getting into an unhealthy relationship with a fictitious partner under the control of an abusive company willing to exploit their loneliness in exchange for money.
Hopefully they now know better, but people (especially desperate ones) make poor choices all the time to get what's missing in their lives or to distract themselves from it.
Ah, I forgot about the AI relationship companies. No, this guy was using the browser-based ChatGPT for coding and ended up in love with the model. No relationship was sold at all.
Seeing people whose thoughts and opinions you used to respect turn into objectively insane people has been some of the worst times I’ve had since graduating during the Great Recession in terms of how stressful it’s been.
We're kinda predisposed to mental illness as a group, so I'm not too surprised that a new source of insanity pushed a few over the edge.
Am I reading this wrong, or can you explain?
I wasn't before, but I am now 100% confident that AI has done nothing to speed up delivery. It hasn't slowed it down either. It is a wash. The job is more miserable, though.
It's so interesting how easy it is to steer LLMs, based on context, into arriving at whatever conclusion you engineer out of them. They really are like improv actors, and the first rule of improv is "yes, and".
So part of the psychosis is when these people unknowingly steer their LLM into their own conclusions and biases, and then they get magnified and solidified. It's gonna end in disaster.
No it isn't. Do you believe what teachers told you in school? Yes? Well, I guess you're suffering from just normal psychosis!
I don't understand how people don't understand that people offer unreliable information too. We learned about the tongue map in school as kids - many kids still learn that in school today. It's still BS regardless whether it was told to you by a teacher or AI.
You don't suffer from psychosis for believing a source of information, you're simply mistaken. You need a more critical eye to assess what you're told in general, not just AI.
Also, a good teacher should be encouraging the development of critical thinking skills and correcting your errors, while AI will just tell you how brilliant you are when you wrongly tell it about how you've just invented a new form of math or disproved a scientific theory you barely understand in the first place.
Not all BS is the same, just as not all sources are equally unreliable.
Nope. At least, not without proof. That would, IMO, be kinda crazy. We could argue semantics - maybe “stupid” would be a better word? Lacking in critical thinking skills? Whatever “it” is, it isn’t good.
LLMs can do advanced math and coding, which involves logic, so they are definitely capable of using logic. Which is what most people call reasoning.
So "LLMs are incapable of reasoning, they are just pattern matchers" is wrong. A lot of logic _is_ pattern matching, BTW. Like, syllogisms - deductive reasoning - do you think LLMs are incapable of that?
The thing you're referring to is that LLMs are trained to produce an answer which a human would like, i.e. they aim to produce plausible rather than correct answers.
So it's not so much a mental deficit as a different goal. Trusting an LLM blindly is definitely dangerous, but dismissing it as useless for anything but code is rather wrong.
Pattern matching is hardly what distinguishes human from LLM. If you ask somebody a question about policy, for example, chances are they'd just recite something they heard somewhere, never really thinking about it from first principles.
I'm in a big tech company where everything is standardised. All our microservices have the same tech stack. We're in a monorepo. Most microservices are... I wouldn't say tiny or micro but small enough.
And I haven't written a single line of code myself since what - February maybe?
We still haven't seen an increase in incidents, we ship more features at a higher quality. We address the tech debt we didn't have time for in the past.
We still require a code review for any change and it's becoming a bottleneck - for sure.
But it all feels... Mature and the next step of software engineering.
We don't really vibe though. At least I don't. I see it more as comment driven development. I need to understand the code and what I want to achieve where in the codebase, but I'll leave good comments explaining this before asking an agent to fill in the blanks.
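As a concrete illustration of that "comment driven development" idea, here is a minimal Python sketch. The comments are the part the human writes up front; the code beneath them is what the agent would fill in. The function name, field names, and logic are hypothetical, chosen only for the example, not taken from the comment.

```python
def dedupe_orders(orders):
    # Keep only the most recent order per customer.
    # Orders are dicts with "customer_id" and "timestamp" keys.
    latest = {}
    for order in orders:
        cid = order["customer_id"]
        if cid not in latest or order["timestamp"] > latest[cid]["timestamp"]:
            latest[cid] = order
    # Return results sorted by customer id for stable output.
    return [latest[cid] for cid in sorted(latest)]
```

The human decides the contract (dedupe rule, input shape, ordering) in the comments; the agent only fills in the mechanical body under them.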
And below you repeat what all of Hacker News hypemen say about AI (“I have stopped writing code”, “it’s mature and the next step of engineering”)
Thank you for reinforcing the point of OP
EDIT: you're the same person that a month ago said your company feels git is outdated now that you have agentic coding, and you don't even need to write your own commit messages. This is next-level trolling, or a serious case of AI psychosis.
Often such comments appear just before the submission is abandoned to wrap up the thing.
In a way, obvious injected opinions benefit culture, by making formerly-unaware readers skeptical.
Pretty sus, bot or otherwise.
Lots of tech companies are doing just fine with purely AI written code at this point.
Saying that the quality is getting worse in some immeasurable way (while incidents remain the same) is literally unfalsifiable.
I don't know what the Bay Area note is supposed to mean in the context of the whole post - unless you want to reinforce that it surely means that it's a sane take... In which case, I'm not certain the non-Bay readers would agree that it comes from an unbiased culture.
If there are so many, surely you or one of the other AI supporters have a public example?
I’m aware of two examples, although they’re (mostly routine) translation with existing test infrastructure, so easier for an LLM:
- Bun’s rewrite, although we haven’t seen the effects on further development
- Ladybird’s rewrite, which seems to be continuing fine
1) Since vibe coding, GitHub has had frequent outages and isn't able to load large numbers of comments.
2) The slop translation of Bun resulted in immediate bugs (https://github.com/oven-sh/bun/issues/30719) that the hyped Mythos apparently did not find.
3) AI features and (likely, though not proven) AI code resulted in a 0-day in Google code:
https://projectzero.google/2026/01/pixel-0-click-part-1.html
The house of cards is beginning to collapse.
Seriously, not a bot. Just a software engineer who feels gaslighted because I see AI used in one way at work and then I go to Hacker News and I feel like I am using a different technology compared to everyone else.
And don't get me wrong. I would like to be more skeptical about AI, because I enjoy writing code, and I enjoy a high salary with great benefits. But with the speed and direction we're seeing now (I am not only looking at this specific point in time but also at the direction, and at the fact that no one knows what they're doing), I do worry about losing this. So I am definitely crossing my fingers that it is a bubble, but I just don't see the evidence yet.
Above, you said:
> We still require a code review for any change
And:
> We don't really vibe though. At least I don't. I see it more as comment driven development. I need to understand the code and what I want to achieve where in the codebase
But in your previous comment, you said:
> since I'm no longer looking at code
And:
> Branches are now irrelevant
How can all of these things be true?
[0]: https://news.ycombinator.com/item?id=48156996
[1]: https://news.ycombinator.com/item?id=47713557
Focus on that message, not the messenger.
HN was a tremendous resource built by its members and the moderators. In the last year or so a lot of that has been destroyed by people who have no sense of decency. They see deception as a virtue. They call it hustle or whatever. WTF?
Which is the pathological take?
It’s irrelevant and unrelated.
vividfrier claims they haven’t written a line of code (implying other employees are similar), and their big company is operating normally. Bun is a big project and the rewrite is entirely LLM-generated. If its development continues normally, it reinforces the claim’s plausibility and proves someone made a large change (rewrite) entirely using AI. If not, it provides strong doubt: either vividfrier’s company is doing something different that avoids Bun’s problems (maybe other employees are still writing code manually), or they’re misleading or lying.
People write code differently, AI models write code differently, AI systems write code differently, companies create systems that write AI-written code differently, etc.
The system that wrote Bun bears no relationship to the system that writes OP's code.
Making such absolute statements about AI-written code is as dumb as making absolute statements about human-written code on the basis that it's "human-written".
Just like I need to keep in mind that not everyone works in a big tech(ish) company where tech stacks are standardised and many problems are solved centrally, I feel like people also need to keep in mind that not everyone has a horrible experience with AI.
And I also want to clarify that I hate AI. I loved writing code; I loved having a comfortable job with a good salary and good benefits. While the company I work for hasn't done "AI layoffs" yet, I feel like it's a matter of time, because I'm not only looking at the state we're in right now but also at the direction and speed. It was only back in October that I felt AI was a bubble about to burst, because I wasn't getting enough value out of Claude Code. And then of course Opus 4.5 happened, and it changed my view completely. We're now a few months into using 4.5/4.6/4.7 (and I'm not an Anthropic shill; currently I'm using Codex most of the time), but I was hoping this career would last me decades. Where will we be 5 years from now? 15 years from now?
I just want to provide my perspective from a big tech company where I feel AI is doing more and more parts of my job. It's not perfect and it gets things wrong but I mean... humans do as well?
A common one: "I have stopped writing code, the world is going to end"
Another: "I will code by hand, I don't care"
Another: "I use it as a tool, but the hype bothers me so much that I have to bitch and moan from morning to night"
This one is: "I have stopped writing code, it wasn't the end of the world."
Imagine old school machinists saying to a CNC machinist “Ha! See, maybe you don’t jog the axes manually, but you still have to be involved in placing the stock material, and you have to do the CAD/CAM work - so did it really machine the part for you? No!”
AI is a tool like any other. It has its limitations. It has classes of problems that it is suited to handle, and others it isn’t. If it’s true that they haven’t written (as in “typed out by hand”) a single line of code, why can’t they say that without you making that statement into more than it is?
I haven’t written a single line of code in 6 months, and that’s simply fact. It is also true that I put in a lot of other work to make that feasible, but that work isn’t in the form of writing code.
“it’s mature and the next step of engineering”
Tautologically, it’s mature enough for what it is mature enough for, and it certainly is the next step in the same way that CNC was the next step for machining — if you’re not using it as a machinist, you’re going to produce less compared to those who are.
Same thing with garden hoses. Yes, you can go fetch water from a lake and splash it on your lawn, or, you know, you could just use a sprinkler connected to your garden hose. Doesn’t replace buckets. Buckets just have a narrower scope in a world where garden hoses exist.
It also had a logical stopping point in automation tech.
AI is trying to do everything and won't stop.
Because it's a solution looking for a problem. All the AI companies lean in to coding because it actually helps with that to some degree but the amount that it helps doesn't justify their valuations. It needs to be good at everything to justify their target IPO price.
A garden hose vs a bucket is also the same situation. You can accomplish the same thing with either, but one might be more labor intensive.
AI is nothing like either of those. It would be like instead of a bucket you get a garden hose that points in a different direction every time you try to use it. Or instead of a 5 axis mill that rigorously executes the g-code it just randomly reinterprets tool paths each time it cuts a part. Both of these things would be worse than useless in their respective applications.
AI is different because it plays to the pliability of the software domain. Even fairly shitty, irreproducible results can be good enough for software development, if you don't look at it too closely. Make analogies to the physical world at your peril!
And also adds a multiplier to your water bill
It's the same with AI: you still have to hold it and point it in a direction for it to be useful.
Usually they provide grandiose claims (like the top-level comment) without any evidence or just anecdotal evidence that is not verifiable.
HN is lousy with new accounts (created in the past year) that are overwhelmingly excited for the so-called AI revolution.
Oh look more useless arguing.
People who do things care about the doing more than how the sausage was made.
I do not care how software gets built. Only that it works. Results are the only thing that matters, and I hope everyone in this thread internalizes that fact.
I mean, I agree on a very high level of abstraction. But my problem is that I need to understand how software gets built so that I can have confidence in my ability to maintain and evolve the project.
I need to understand whether a feature is easy to add or requires a wholesale rewrite of the entire codebase, which comes with risks. I need to understand how new features affect existing users.
I also need to understand the economics of the process and the economics of my industry. That means I have to care to some degree about how software gets made, not just whether some specific program works at the present moment.
If you give me a choice between an implementation that is 100 LOC I can understand and an implementation that is a million LOC that I can never understand, I'm going to choose the former, even if both implementations pass all tests.
this is not a thing. there hasn't been a single line of code written that a human can't understand.
Also, code quality matters for AI as well. Maintaining a million lines of code requires more tokens than maintaining 100 lines of code.
Then do it. Be successful. Be wonderful. Show us all the great results.
If you only care about results, then go do the things.
Why are you complaining? What's wrong with differing views?
When people cannot accept other people critiquing AI; that is literally AI psychosis.
Here's a trick: what is AI bad at? Stop and ask yourself what it really sucks at.
Nothing comes to mind?
You're living AI psychosis.
If you can't accept that anything it does is wrong or bad, that you are only successful when you use AI, you are, flat out, gaslighting yourself.
Now go read the comments by recent new accounts about AI.
Yeah.
Either there are a lot of bots, or a lot of really really troubled people out there right now.
This account is my real name. I have nothing to hide. The metrics speak for themselves.
We just had a week with 7 major zero days announced for pretty much every major OS and architecture. Reality doesn't care about your opinion.
What metrics? Where are the amazing new projects and features you built? Where are the amazing products and features you built that are better than existing ones (run faster, consume fewer resources etc.)?
For a person who "has nothing to hide" somehow none of your comments ever mention what projects you work on, what you ship, or what metrics you employ.
I’m currently working on a rewrite of an app that originally took two years. It’s been about three months, and I’m probably about 70% done. It’s a total “from scratch” rewrite; both client and server (two versions of each, as I also have administrative code). It’s a pretty big system, for one guy. I couldn’t do it, without the LLM.
It’s not been a cakewalk. I’ve needed to toss out large swaths of LLM-generated code, and rewrite by hand, but, for the most part, it’s been a huge help.
But I’m also not doing it in a manner that eats tokens. I just use the standard $20/month subscription as a chat. I suspect my workflow is not one that Anthropic or OpenAI really wants out there.
But I also bet that many HN accounts are bots; although I think many may be ones run by enthusiasts, not some AI cabal.
Other than that, I am not boosting AI, and have absolutely zero interest in doing a bunch of work to satisfy some random Internet Guy, who can't be bothered to examine my pretty damn extensive open portfolio.
I was just talking about my personal experience.
> random Internet Guy, who can't be bothered to examine my pretty damn extensive open portfolio.
You cannot even be bothered to examine the comment you reply to, maybe get off your high horse.
And the main part of my comment was about something in the common realm, open source software, and hard performance/quality improvements. Not wishy-washy products and features, not yet another tone deaf cool story.
For instance, I checked out yours, and there's not much, except a whole bunch of challenging people here. I am wondering if you came here to "set us straight." I know that a lot of folks have low opinions of HN, and not all of them are wrong, but I find this place a fairly good place to hang out. Being challenged, is one of the draws, for me.
By the way, have you tried the new unhomogenized heavy cream? Good stuff!
Have a great day!
Not everyone is you. E.g. I don't expect the answer to "show your work" to be no answer and "why didn't you check my profile".
My answer to "show your work" was "No." I am not going to go through my code, and show a bunch of supporting evidence for a casual comment, in which I have exactly zero investment. I really don't care that much what people think of me. I was just sharing my personal experience. If you guys want to write me off, then knock yourselves out.
"No" is a complete sentence. What part of "No" didn't he understand?
Have a great day!
An interesting answer to literally "Just what kind of evidence do you suppose they could have? - Showing actual improved products and features. Showing actual code. etc."
> "No" is a complete sentence. What part of "No" didn't he understand?
See above. After pointing this out, you immediately started down the path of "why didn't you look at my profile and follow the link to my github".
Same here :)
> not some AI cabal.
There are enough enthusiasts to make it feel like one. Also an unhealthy dose of marketers, people buying into the hype, AI psychosis, etc.
There's absolutely no question that AI is a real thing, and that there's going to be a lot of money made, so there's a bunch of folks with commercial interest in pushing it.
It's just different from crypto. This has actual real-world utility for just about everyone. I am increasingly hearing people say "Ask ChatGPT," where they used to say "Google It" (where they used to say "Look it Up at the Library").
--- start quote ---
- Just what kind of evidence do you suppose they could have?
- Showing actual improved products and features. Showing actual code. etc.
--- end quote ---
Note how you provided neither. It's just claims.
> Anyway AI for sure increased project cadence by at least 2x.
As in: you claim this. Also, no one denies that you can ship a lot of code much faster with AI. However, somehow, very little actual evidence of grandiose claims (see farther up in the context) besides anecdotal "I'm so faster and features are being shipped left and right".
See also a sibling comment: https://news.ycombinator.com/item?id=48158565
https://www.youtube.com/watch?v=AkKo1_RP_0c
The comment you’re replying to wasn’t uncivil. It wasn’t rude. It was a lament.
I’m not advocating for this rule to change (I’d appreciate if you didn’t straw man and mischaracterise what I said), but I am saying if a problem happens over and over and people notice it and talk about it, then you should maybe pay attention. The rule for new accounts came about from multiple comments and even submissions asking for it, not private emails. It came about from community conversation and outcry.
The load-bearing word in that claim is "undue", and it's not justified here. I'm not doing arcane rules-lawyering, I'm just saying people should avoid doing things the site guidelines quite specifically ask them not to do.
> I’m not advocating for this rule to change (I’d appreciate if you didn’t straw man and mischaracterise what I said),
I didn't say you advocated for it. Does that mean I now repeat your parenthetical back to you? ;)
The one you're responding to is an obvious bot: a new account posting only comments saying how great AI is, for example.
No need to look further.
And to me, AI should best be used to add rocket fuel to existing practices. Better tests, better observability, more atomic changes instead of big changes, automatic rollback etc.
The more your codebase follows best practices and consistent patterns, the better AI will do and the faster you can move.
Same as humans really, just even faster. I'm also excited that people are finally writing docs and without even any flogging! They're calling the docs "skills" but hey whatever works
I don’t think AI actually changes that we should always be questioning everything, including how much we question at a time.
Yes, this is indeed a pungent smell. AI code assistants allow whole projects to be refactored and even rewritten in entirely different programming languages and software stacks in a few minutes, sometimes even with one-shot prompts. Most assistants even support creating and maintaining test suites with first-class support. Whatever you prompt, they do it.
And here we are, expected to believe that these tools can't or don't follow best practices?
You keep hearing people say that AI coding assistants and coding agents can easily output working code. With enough work they can easily output code that follows your own coding style and restrictions.
If you prompt a coding agent to write code following your personal choices and recommendations and it outputs less than amazing code... What does it tell you?
> Personally I have not seen this amazing code.
You get out of it exactly what you put into it. Garbage in, garbage out. I mean, one of the prompt styles they support is literally "implement this following the style used in this component". And people complain the code generated from your prompts and with your own code as a reference turns out to be crap? Strange. Moreover, code assistants excel at refactoring work.
No, I meant what I wrote. I keep hearing people say how LLMs write amazing code now.
The model is trained on a ginormous corpus of code. The problem is, most code is shitty. My code isn't.
Using a model means constantly fighting mediocrity, to the point where trying to prompt it into shape often becomes more work than just writing the goddamn thing myself.
Yes, I can prompt. But I can't prompt understanding into the pattern matching machine. It will always revert to the undesirable mean.
Switching to AI development on several of our projects exposed a lot of code that either never worked or didn't work the way that we thought it did.
He answered:
> Well, yeah, who cares?
> This is where we need to differentiate between what truly needs to be clean (critical APIs) and where some random guy coding a product in a week will wipe the floor with a team of engineers with a clean architecture and no product after three months.
> What's more, this "vibe coder" is on the right side of history… Who's to say AI won't be able to just rewrite the code cleanly while keeping the core idea within 6, 12, or 18 months?
> This is also the question that drives business... and in business, "good enough" has almost always trumped "perfect." Except when you're making an ultra-luxury product like a Ferrari or something. Which software almost never is (if ever).
So when heads of companies don’t care about quality, they’ll push hard for speed no matter what.
This is especially true when the people who suffer the consequences of bad software are far removed from the company making it. You'll be forced to spend hours fighting with customer service over errors made by people using that bad software, but it won't impact the CEO of the company who vibe coded it. I hate that we're moving to a world where everything around us is getting worse and less reliable while marketing companies try to convince us all that this is somehow progress.
Well, let's say it's 18 months from now and AI writes lovely, ideal code. At that moment, AI would have eliminated the need for AI, right? If the code is good, you can just read it and edit it.
The selling point of AI is that you embrace the idea that your code is a mile-high stinking garbage heap, such that any human would be overwhelmed by the stench. Only so long as the best engineering strategy is to pile the garbage as high as possible, as fast as possible, will the best tool for engineering be AI.
So my counter argument is: just wait 18 months and you can completely skip adopting AI.
That's an odd statement to make, particularly with today's models. They can easily pinpoint concurrency problems and memory management issues. But here you are, complaining they write buggy code. What kind of prompting are you throwing at it?
> And here we are, expected to believe that these tools can't or don't follow best practices?
Uh they don't really. The contradiction you're seeing is actually fictional because that premise is wrong.
That just goes to show how far your experience goes. I have projects in my workspace that support the idea, versus your baseless assertion rejecting the whole idea. Which is more credible?
> The contradiction you're seeing is actually fictional because that premise is wrong.
Doubling down on baseless assertions means nothing.
Exclusively bad-faith/bait.
___
Edit:
Come to think of it, given the name, it might _actually_ be just an agentic LLM tasked with trolling HN.
That would be kinda fun ngl
> Now Claude is writing great commit messages but since I'm no longer looking at code - I never see them.
Let it be a learning opportunity for us, folks. This is why you shouldn't take comments on the internet too seriously. People (or bots) will say anything just to get attention.
p.s. Offtopic, but this is why I believe the ability to hide post history was the tipping point of Reddit's downfall.
Have you measured the impact of that on your ability to create good code? From my experience, relying on AI tends to degrade that ability.
Also, you seem to be able to do all of what you say and benefit from AI tools because you understand the overall bigger picture well enough to drive the AI agents to do their work properly. In other words, you operate in familiar territory where you do not need to learn many new things.
But what about the junior people with little experience? Will they be able to manage such AI workflow? And more importantly, if junior people are given such AI tools, how will they learn?
These are all questions which may not matter in the short term and one might ignore them if they just want to see the profits and efficiency gains during the next cycle. But what about the long term?
Maybe I’m pushing it a bit, I know, but a couple of decades ago you could’ve been asking this instead.
It also sort of feels like "you don't know what you don't know", i.e. would you have considered an alternative better solution if you thought about it yourself, went to the documentation, found a tutorial on the web?
Of course, production is arguably a lot faster but it feels like there's starting to become a trade-off where the models feel so capable that we stop trying to find the solution to the problem ourselves and thus perhaps degrading our personal reasoning capabilities. I say this as something I'm afraid is happening, not something I'm certain of.
This is a false equivalence.
A compiler is a predictable, testable, deterministic piece of software.
An LLM is not.
Sure, all abstractions leak; so, at some point in time, for some reason, you may need to check its compiled code ( cough cough gcc 2.96 ). But, if today your code compiles properly, it will properly compile tomorrow as well.
But I think that, in the compiler ~ LLM analogy, the issue is more one of trust than determinism. It took assembler programmers decades to trust compilers enough to stop writing code in assembler. Something similar will happen with AI: some will embrace it sooner than others.
> compilers can be quite undeterministic - you get a new version of compiler, or change compiler options (turn on optimizations)
That’s a whole other level of bad-faith argument right there. Flags and options are input too.
> It took decades to assembler programmers to trust compilers enough not to write code in assembler.
You do realize that Cobol, Algol, and Lisp are very old, and they were not assembly. And Unix was written in C shortly after the language was created.
Not sure where you see the bad faith argument. (Btw I mean "same output", not "same input", it was a typo.)
Take the JVM, for example. It used to be horribly bad and unpredictable, performance-wise, in the 90s. Sun tried to base a desktop environment on it - it didn't work.
> You do realize that Cobol, Algol, and Lisp are very old, and they were not assembly.
Of course! But people have been hand-writing assembler until late 2000s, because compilers were simply not that good.
The same will happen with LLMs - some people will not trust it and won't use it for decades, possibly. Some have already embraced it.
Your proof that a compiler is nondeterministic is to swap in a whole different compiler version and point out that it won't produce the same output as the old one.
> But people have been hand-writing assembler until late 2000s, because compilers were simply not that good.
And we have software like Unix, emacs, ksh, awk… that’s all written in C. I strongly believe the people who were still writing assembly were optimizing, or dealing with constraints (like the 640 KB of DOS). Just like today, you may still have to write assembly for microcontrollers or video codecs. Compilers were expensive, but people were paying for them.
Fair enough. What I meant though was that compilation as a process is not deterministic, because often when you recompile couple years later, you're using a different compiler. (In modern world it can be much shorter time, actually.)
> And we have software like Unix, enacs, ksh, awk… that’s all written in C.
So? IIRC, the first compiler was FORTRAN, invented in 1958. OpenAI Codex, the first coding LLM, came out in August 2021. So we are, by analogy, in the year 1963. On this comparison, we have ten more years to produce (using a coding LLM) a compiler and operating system straight from a textual specification, without an intermediate formal programming language. Funny thing is, we have actually already done that (Claude C Compiler, VibexOS).
How is that relevant to the topic of this discussion?
Compilation from higher-level languages to machine code is deterministic. It is sufficient to review and thoroughly test the tool that does the translation. Given the same input, the output will always be the same.
Transformation of a natural language prompt to code by an AI tool is non-deterministic. The outputs will vary between runs. Therefore, it is always necessary to verify them.
That is the difference.
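The distinction above can be sketched in a few lines of Python. This is a toy illustration only (no real compiler or LLM API is involved; the function names and the tiny "completion" list are invented): a pure function maps the same input to the same output every time, while a sampling step does not.

```python
import random

# Toy "compiler": a pure function, so the same input
# always yields the same output.
def compile_expr(expr: str) -> str:
    return expr.replace("+", " ADD ").strip()

# Toy "LLM": samples one of several plausible completions,
# so repeated runs on the same prompt can differ.
def llm_generate(prompt: str) -> str:
    completions = ["return a + b", "return b + a", "return sum((a, b))"]
    return random.choice(completions)

# Deterministic: identical on every run.
assert compile_expr("a+b") == compile_expr("a+b") == "a ADD b"

# Non-deterministic: over enough runs, the outputs differ,
# so each one has to be verified on its own.
outputs = {llm_generate("add two numbers") for _ in range(1000)}
print(len(outputs))
```

(Real LLM APIs expose knobs like temperature and seeds that reduce, but in practice do not fully eliminate, this variation.)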
With LLMs, we are trading away the determinism of the program output as well, in exchange for even easier programming. Is that a good or a bad thing? There are ways to mitigate the problem, just as there are with compilers.
You could argue the determinism of the program output was never really there, because the specification at the high enough level was always unclear. So we are not really losing that much, just accepting more messy reality.
Then only one question remains: can these computer programs (LLMs) do a better job (and where) than a software developer, whose job is to translate unclear specifications into a formal language (source code)? It happened with compilers: eventually they got better than all assembler programmers. The same happened to chess players.
Does a JIT compile some other program's code instead of the one being run? Does it produce bytecode for a different VM? Does it try to compile parts of the program that have not been executed or aren't going to be?
Does a GC destroy objects that are in use? Does it ignore instances and memory that have been properly released?
JITs and GCs are deterministic algorithms; you can predict their behavior just by reading their code. LLM tooling involves an actual random number generator in producing its output.
Sure, but the same is true for LLMs - the lead models no longer make trivial mistakes like answering "What is the capital of France?" wrong.
> JITs and GC are deterministic algorithms, you can predict its behavior by just reading their code.
On large enough systems, you can't, just like it's difficult to predict weather. Determinism has little to do with it. At work, I have just witnessed a bug in JIT (it seems to have been fixed in OpenJDK 25). It inlined a wrong method. We weren't able to reproduce the error conditions without a private customer dataset.
And the fact is, historically, there have been many bugs in compilers, or they have been bad at their job, writing performant programs. The output (resulting program) of a good compiler is difficult to understand (because it is written to be efficient). LLMs (for the programming use case) are different quantitatively, not qualitatively.
No one is saying that a compiler can’t have bugs. What we have been saying is that if we take the compiler as a black box, we’re reasonably certain, given the input, what the output will be. And the output will stay the same if you keep the input the same.
But you can send the LLM the same prompt, and it will give you a different answer each time. And it’s not even just about the verbiage used.
But I am not sure why the insistence on the relevance of (non)determinism, rather than on the chaotic relation of the output to the input (which is true for both compilers and LLMs). In practice, inputs to the LLM, as well as to the compiler, change. And the fact is, the output can change radically due to that.
I think nobody really sends the same prompt twice to the LLM, so nobody cares about it being deterministic. I think what you're looking for is something different, some form of stability (as opposed to chaotic behavior), although it's hard to define exactly, because in the case of LLMs theory lags behind practice. (And as I said, we already gave up on stability with respect to performance by using compilers. We resolve that issue by doing performance testing.)
I posit a different argument. When you install a compiler on your computer, that compiler is "yours" for as long as you have the binary. You are able to completely forget about assembly because of 1. reliable _enough_ compiler 2. reliable access to said compiler.
Let's rewind decades back and pretend that the very first compiler was behind a monthly subscription*. Do you think we'd be in the same place now?
Now the natural follow up to this "but the open models are close to SotA now". Well why aren't we using them? Do we really think we'd have a GNU moment for """open""" models? And are we willing to bet our industry on that?
But my point is, _these are not the same things_ and positing them as such is frankly insulting. How good are you at writing assembly when your compiler is inevitably taken away?
* I'm not a historian so I wouldn't be surprised some version of them were
We are though? It just depends on the task and the costs.
> Do we really think we'd have a GNU moment for """open""" models? And are we willing to bet our industry on that?
Yes and yes. We're in the mainframe era. But history this time around is passing us by at a ridiculously fast clip. Local models become "good enough" for new tasks by the day, after which they continue to shrink for a given performance level.
I'm not going to bet against either moore's law or relentless increases in model efficiency any time soon.
Basically it boils down to geopolitics: the US economy is currently being propped up by a small subset of companies, and a lot of that is based on proprietary models and speculation in the market around them. China is going to keep dumping better and better free models out to compete, pulling the rug out from under all that speculation.
Helping neutralize their biggest rival.
I’m not here to say that’s good or fun.
I've definitely thought about how I currently offer something to the company because I know these systems and can stop the agents from doing weird shit. But in a greenfield project I would probably (definitely) not get the same understanding.
I think as we figure out best practices - you'll start to see that you can rely less on humans. More tests, more acceptance tests of critical business logic, etc.
I think we might get to the point where I'm not really adding value anymore because I don't know the codebase well enough to stop the agent. But then again, the cost of code is getting cheaper, and when you reach that state where agents can't reliably work in the codebase due to AI slop, maybe you just start over? Kick off a few agents over the weekend and come back on Monday to a new (hopefully leaner) codebase? Run it in shadow mode next to the production service, ask the agents to fix any discrepancies, and iterate until you have a simpler service?
I don't know what will happen. But I don't think we'll go back.
I have friends at other companies with similar projects, they say the same thing.
It's like we're living in different worlds.
Still, LLMs are nice for well defined small projects, microservices, tools and research.
We're guessing it comes from organizational behavior (culture, governance, management, etc.), we work in diverse teams / regions / companies.
Would you say the project is well architected? Clear boundaries? Or ball of mud?
How large is large?
Are there AGENT.md files giving good information that helps LLMs get context when looking at a certain area of the code?
Is it all in one repo? multiple repos?
Are there good tests?
I feel like these are some of the many variables that can make a difference.
I work on a pretty large project/code base, written mostly in Go, and I have pretty positive experience with LLMs. I take on fairly small chunks, I review and understand the changes. I also use LLMs to explore options and prototype quickly. They're also very good at fixing bugs, failing tests etc.
Yes, with generous budgets.
> They're also very good at fixing bugs,
Seeing the opposite here too. They're like eager juniors: 'oh, the issue is here, and here's a 5-page report why', and it's wrong... then you add more info and it jumps to a different spot... repeat until you get tired and solve it yourself. It's useful as a rubber duck, I guess.
> I work on a pretty large project/code base, written mostly in Go, and I have pretty positive experience with LLMs. I take on fairly small chunks, I review and understand the changes.
Great that it's working for you, I'm just pointing out there's a massive disconnect.
I would assume your work could be done by a junior engineer without any prior knowledge (except the LLM md files) with the same quality but less speed?
If yes, then great, perhaps that's where the disconnect is, complexity.
Also, if yes, which would be cheaper: a junior engineer or an LLM?
It's really amazing how different people have completely different experiences. I work on a massive code base and I thought AI would not be able to fix anything in at least a few years since the application is very complex and does not use well known frameworks. I was very wrong. In my experience, it fixes bugs better than I could, at least given a short time budget (which is always the case, if we spend too much time on each bug we just fix bugs slower than they get reported and we'd enter a death spiral).
I have worked on this code base for more than 10 years, touched every part of it, and I wrote large chunks of most systems, despite around 20 people working on it right now. Still, when I need to figure out something, now, I often ask AI as it is absolutely wonderful in understanding and explaining code, no matter how big the code base is. My team consists of 20 very senior developers, and I am their technical lead, so I think I know what I am talking about.
A junior would require at least 6 months of guidance to become productive in our code base, unfortunately, just because it's so big and it integrates with all sorts of external services, databases etc. I do understand that saying this is not really a flex, I would've actually preferred that my code base was so good even a junior developer could be immediately productive in it, but that's sadly just not the case. But perhaps, with the help of an AI tutor, that's actually possible now?!
If you think AI is at the level of a junior developer right now, I'm afraid you're kidding yourself.
In case you're wondering: we use Claude Code.
This is something I don't understand.
- If you have a bug, you need to fix it properly, as well as find the root cause.
- That way the bug never surfaces again, and safeguards are added for that class of bugs.
- If done well over time, this builds discipline, and bugs only surface from new features or integrations.
I've never had an experience of a 'death spiral' that you mention.
> Still, when I need to figure out something, now, I often ask AI as it is absolutely wonderful in understanding and explaining code, no matter how big the code base is.
Sure, but you still dig into the code afterwards I assume, you don't blindly trust what the AI summarization tells you.
> If you think AI is at the level of a junior developer right now, I'm afraid you're kidding yourself.
It depends, small projects with well defined scope, yeah, it knocks them out of the park, what I'm working on, it's a bit disappointing, not for lack of trying.
Still, one other thing I'm noticing now... if my account were not anonymous I would likely need to think of possible repercussions for my 'lack of faith' and would probably post comments very similar to yours or not at all.
So I'll stop here.
Can you spend 3 months fixing a bug and doing nothing else? You always have a time budget, whether you know it or not, even for your hobby projects. Do you not have users reporting bugs regularly? Any large product will have bugs, I see the biggest companies with the best engineers maintaining open source repositories with thousands of bugs, and the list just keeps growing. Internal products are even worse. All you need for your bug list to keep growing is one bug taking longer to fix than the rate at which bugs are reported.
> if done well over time it builds discipline and bugs only surface from new features or integrations.
Yes, and we have a whole lot of features coming out every release. We have a very large product. That's why we keep adding "bugs"! Not because we're fixing bugs that had already been badly fixed previously, if that's what you're thinking.
You've never seen a bug spiral? I must assume you're new to this industry. Bug spirals have killed many companies. It's very common to have code that's so bad no one can touch it without introducing lots of bugs. Fix one bug, 2 new bugs are introduced.
Luckily, where I work we have a lot of tests so it's rare that we have regressions, so the main cause of bugs is the new features, especially big ones as it's humanly impossible to properly review thoroughly enough that there's no bugs. That's where I think AI will help a lot - but we're still trying to figure out exactly how. Simply letting the AI review everything is not enough. And as I said before, humans just can't spot bugs to save their lives, me included.
> if my account were not anonymous I would likely need to think of possible repercussions for my 'lack of faith'
That's weird to hear, HN is about 50% AI enthusiasts, 50% AI skeptics, at least that's my impression.
I was skeptical until recently, but in the last few months of using Claude Code (and Copilot, but Copilot consistently performs worse), the LLM has become better than most humans IMO. I still write a bit of code by hand, though, simply because I can't help it and sometimes I know I can do things very fast anyway so why burn LLM tokens on the thing. But sometimes I try to "correct" AI code just to learn later the AI was right (normally tests pick that up - we instruct the AI to write comprehensive tests, and it does it well... I normally review mostly the test code and less so the implementation). I am almost at a level where I believe not using LLMs to write code professionally is akin to not using static type systems: you're refusing to let the computer help you for no reason. It's not about faith, it's about using the tools that make our jobs easier and our output better. I know not everyone is there yet, but I definitely feel like I am.
This is why this feels foreign. Most people don't take this approach (I'd argue it's the correct, rational way to use AI).
That should be my line. My new employer does not use LLMs at all. Software development, marketing, hardware development, nothing. Maybe too little, but whatever.
The problems the company is facing are entirely unrelated to "throughput".
Is it possible to have any means of private communication with you where you would share the information who this employer is?
As implied, my employer's product is not software, but rather hardware. This hardware does of course run firmware and software and needs to interface with other systems. It's entirely B2B. All this combined makes work relatively relaxed.
Not because I agree with my sibling comments, but because I strongly agree with the parent, making me think my org and I are much earlier than I thought. :)
If you still hold code review to the same standard and just make the agent do incremental changes rather than vibing the results are pretty good.
If you can't answer these questions credibly, I'm afraid I'll have to treat your answer as LLM influencer propaganda.
Maybe next time write with more conviction instead of doing the "But it all feels... " shy 14 year old attitude in an attempt to be "neutral" and "mature" in your own confused words.
Shill AI less and either learn to code or fuck off cause I will personally poison your drink and every brown's drink who's sabotaging civilization by faking it """"till they make it"""""you'll never make it, you'll crash and burn like you did for millions of years since Pakistan & India and I'm telling you to stay away from us just like China & India have always been separated.
>We still require a code review , by whom, by which standard? By codemonkeys who don't understand the code and soon enough by AI judging other AI's code cause you're that lazy, incompetent and irresponsible to push blind guesswork into something beyond a hobby project, something as serious as fucking security and even vehicle AI.
It’s a tool and the good old sh* in sh* out principle applies.
People might take Mitchell’s comment as some kind of anti-AI stance, but it’s not: he uses AI regularly and makes a point in the X comments: “use AI, but think”.
That comment sums it up best, because right now it’s hard to talk to either side, which separates at the comma.
I’m also in a big tech company, and a lot of the team hasn’t written any lines of code by hand in a while. It’s causing a whole lot of tech debt, and frustrations are beginning to boil.
I’m not sure it’s possible to force someone to read every line of AI generated code and understand it. People generate code faster than they take time to read it.
Pressure from C-suite to AI AI AI AI AI MORE AI AI AI AI doesn’t help.
What programming language are you using? It seems like some programming languages are more mature in LLMs, e.g., Python, Java, C#, maybe Golang. (Oh yeah, and definitely JavaScript/TypeScript.) Rust, Zig, C++: I have a harder time believing you can manage a large project using only an LLM to write code.
And to answer your question: no. I have yet to see a product made by AI, or a product that used to require a dozen engineers and a few years being made by a single engineer in a month. Anything demoed is always a UI/functionality clone of the same things LLMs regurgitate.
It's causing problems in all parts of the business and leadership's answer is that we must use AI to make fixing incidents faster and automated rather than assess whether we should be shipping enormous amounts of buggy code every day...
Most of our time is spent doing spec work, planning, and injecting the proper context into LLMs. Like the OP, our metrics have drastically improved the time for delivery of new features, slightly improved bug resolution times, and now we're bottlenecked by needing more code review and manual QA to handle the workload.
In my world, that is far too slow, and you will be seen as a low performer who just can't keep up with the tech.
And how many lines of Markdown have you written? Pointless metric. I think I type more now because I don't get any helpful autocomplete for... English.
What's the difference? I don't think anybody gets paid by how efficiently they type on a keyboard. Whether you roll a die or train a crow to get your next keypress, I honestly don't think your PM cares, as long as the actual output you contribute to the project is something you are responsible for.
I'm not saying it has no implications on how you think or no costs socially, ecologically, politically, solely that nobody cares HOW you get the code, only in your ability to keep on making it increasingly work better, closer to the evolving needs of the project.
Working on a brand-new mobile app is where I find AI makes the biggest difference.
On mobile you don't need specs and you don't need to understand every detail of the implementation. You can QA test the app on a real device. It gives me more confidence than just having written the code myself, and it's much faster. You can implement multiple major features in a single day.
This kind of e2e testing is just not possible with backend services.
Other programmers are painters. Their job is to start with a blank canvas and create something that others will value. When AI tries to paint, it tends to produce slop: a facsimile of everything it's ever seen.
Without any human code to grab on to, AI has a habit of writing code that is pervasively low quality and rife with misunderstandings such that it always needs to be thrown out.
And yes with considerable prompting effort you can improve this picture. But it's easier, faster and cheaper to just write the code yourself. Code is the best specification language we have.
AI is much faster at taking an idea and creating a working proof of concept than any human I've seen.
Not saying it's good engineering, but leave that to the gardeners.
Are other bots upvoting this?
Our experience is very similar except we didn't really have a review process before, and now LLMs find bugs before PRs get merged in main.
We had 5x-100x speedups in some legacy but important pipelines, with no regressions (validated extensively by humans afterward). It's not that the code was actively bad. It's just that only 1-5% of people in the local SWE market would be able to write code that runs this fast and efficiently and benchmark it correctly.
We found a subtle correctness bug that had been in production for half a decade (both GPT-5 and Claude Opus were able to find it), confirmed by a human afterward.
And we keep finding subtle bugs that have been introduced by humans before (despite the human reviews, the particular domain is just difficult no matter how many docs and comments and tests one writes)
Machines, OTOH, are very good at it. I am currently trying to make the code review experience better for humans by not just having the AI review the code, but having it interact with the human: pointing out potential problems and bad patterns, and perhaps hiding some code (e.g. renamings, formatting changes).
Developers still want to review the code, despite provably being bad at spotting bugs, because they want to actually keep knowledge of what's being modified in the code base, so I think this is the best approach.
I've addressed why in a response to a similar claim here: https://news.ycombinator.com/item?id=48157898
It’s not all useless, but most days I think I would be more productive if some processes were streamlined rather than if I had to throw tokens at them and still fail.
Of all the showcases I’ve seen the best are the ones written by people assuming that the token bonanza will not last so they used AI to build tools they wished they had. AI used to build the tool but by no means used by the tool, so if/when token quota gets reduced we still have a functional tool.
Leadership is not being dumb, at least on this topic. If your token usage is that low, you just aren't using AI that much (even if you think you are.)
"If you aren't donating at least your salary's worth of company money to another company every day, are you even working?"
Only by walking us into some revenue- or customer-impacting failure - through inappropriately having junior devs do senior-level things - will some sense of sanity start to prevail again.
Right now, prompters are setting up whole company infrastructures. I personally know one. He migrated the company's database to a newer Postgres version. He was successful in the end, but I was gnashing my teeth as he described every step of the process.
It sounded like "And then, I poured gasoline on the servers while smoking a cigarette. But don't worry, I found a fire extinguisher in the basement. The gauge says it's empty, but I can still hear some liquid when I shake it..."
If he leaves the company, they will need an even more confident prompter to maintain their DB infrastructure.
I have seen people write highly complex code where all the complexity was not necessary. Think: deep unnecessary branching, pointless error handling and retries which make no sense in our context, hand-coded parsing using regexps, haphazard data flow, functions which seem purely computational but slyly make API calls, pointlessly nullable model fields, verbose doc comments which describe the implementation instead of the contract. I could go on.
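That last anti-pattern is worth making concrete. A hypothetical sketch (all names invented, not from any real codebase) of a function that looks purely computational from the call site but slyly performs I/O:

```python
import urllib.request

# Hypothetical illustration: the name and signature suggest pure arithmetic,
# but the body quietly performs network I/O, hiding latency and failure modes.
def normalize_price(amount: float, currency: str) -> float:
    if currency != "USD":
        # The sly part: an API call buried inside a "computation".
        with urllib.request.urlopen(f"https://rates.example.com/{currency}") as resp:
            rate = float(resp.read())
        return amount * rate
    return amount

# The honest version separates the effect from the computation:
# fetch the rate at the edge of the system, pass it in as data.
def normalize_price_pure(amount: float, rate: float) -> float:
    return amount * rate
```

The pure version is trivially testable and its cost is visible to every caller; the sly version turns every price calculation into a potential timeout.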
The worst part is, even when "prompted" by bad coders, it works in the end. It even has tests (invariably mock-ridden, a pet peeve of mine which always falls on deaf ears). So I cannot reject the PR without being an asshole.
I am no luddite. I make heavy use of AI, with all the skills / AGENTS.md / style guides and clear specs, then review every line of code, prefer testing with minimal mocking. I'd even say with right prompting, it can write better low level code than me (eg: anticipating common error conditions).
But my biggest fear about AI is how it enables normies with little to no understanding of CS principles to produce code faster which looks correct but slowly poisons the codebase.
Talking to him, he told me he couldn’t even reverse a string. He is at once many times more valuable than ever before to his company, but also far more dangerous than ever before.
Oh man, I think you may have touched the third rail here.
My first job out of high school was as an AutoCAD/network admin at a large Civil & Structural firm. I later got further into tech, but after my initial experience with real Engineering, "software engineering" always made my eyes roll. Without real enforced standards, without consequences, it's been vibe engineering the whole time.
In Civil, Structural, and many other fields, Engineers have a path to Professional Engineer. That PE stamp means that you suffer actual legal consequences if you are found guilty of gross negligence in your field. This is why Engineering firms are a collective of actual Professional Engineer partners, and not your average corporate structure.
The issue is that in software dev, we move fast, SOC2 is screenshot theater, and actual Engineering would slow things way down. But, now that coding is fast, maybe you are correct! Maybe vibe coding is the forcing function for actual Software Engineering!
___
edit: I just searched to see if my comment was correct, and it turns out that Software PE was attempted! It was discontinued due to low participation.
> NCEES will discontinue the Principles and Practice of Engineering (PE) Software Engineering exam after the April 2019 exam administration. Since the original offering in 2013, the exam has been administered five times, with a total population of 81 candidates.
https://ncees.org/ncees-discontinuing-pe-software-engineerin...
This was something I noticed in my early career in mechanical engineering and later doing PCB design and software for robotics. It’s easy to find firms that just need adequate parts without the professional certifications or ass-covering calculations of other engineering fields.
All this to say, it’s not just software versus the rest of them. From my position, civil and aerospace seemed more like the exception while much of the rest of the engineering world is more vibes based.
I hope that this becomes a thing in Software Engineering.
I think it'll be the opposite. Maybe it'll be what eventually cements this as a "talent"-based field. Just like it was once difficult to quantify what makes one flute player better than another, how good you are at endlessly prompting a black-box machine would be the only measure. The engineers of old who developed kernels and drivers would be thought of as the "crazy people who put the flute against their temple to tune it." LOL, we don't need people like that. You can just buy a flute-tuning device. Who gives a fuck? Can you make the next "Shake it, Shake it"?
So it sounds like it was fine? Why would this prompt (haha) a change in their approach to things?
That’s basically every M2, and many if not most M1s, in the last 10 years. So fuck it. Why does any of it matter?
I think Mitchell's point is well taken -- it's possible for these tools to introduce rotten foundations that will only be found out later when the whole structure collapses. I don't want to be in the position of being on the hook when that happens and not having the deep understanding of the code base that I used to have.
But humans have introduced subtle yet catastrophic bugs into code forever too... A lot of this feels like an open empirical question. Will we see many systems collapse in horrifying ways that they uniquely didn't before? Maybe some, but will we also not learn that we need to shift more to specification and validation? Idk, it just seems to me like this style of building systems is inevitable even as there may be some bumps along the way.
I feel like many in the anti camp have their own kind of reactionary psychosis. I want nothing to do with AI but I also can't deny my experience of using these tools. I wish there were more venues for this kind of realist but negative discussion of AI. Mitchell is a great dev for this reason.
So now the AIs will do more of that, at superhuman speed.
> will we also not learn that we need to shift more to specification and validation
We'll just quickly learn what we've been trying to do for decades, while also treading water in floods of more code than has ever been written before? And some of the motivations to write correct code are being deflated - "just vibecode it again and see if the bugs disappear, it only took a week and $200."
There are people who write important software that the world runs on, but they do it outside the 'industry'.
A real industry should be responsive to events of nature, or at least the market, not vibes.
Purely AI-written systems will scale to a point of complexity that no human can ever understand. The defect close rate will taper off, the token burn per defect will climb, and eventually AI changes will cause more defects on average than they close, and the whole system will be unstable. It will take a special kind of clean-room process to dig out of such a mess and rebuild it fresh (probably still with AI) after distilling out core design principles to avoid catastrophic breakdown.
Somewhere in the future, the new software engineering will be primarily about principles to avoid this in the first place, but it will take us 20 years to learn them, just like the original software engineering took a lot longer than expected to reach a stable set of design principles (and people still argue about them!).
People really have a misconception about the sums of money that companies operate on on a regular basis. If you are a people person and know how to sell yourself, you can "scrape" money off the fact that nobody is going to look or think too hard about some contract that represents a tiny fraction of the year's budget.
The reason Oracle can continue failing at those massive projects is simple: everyone fails at them routinely, and often it’s the customer's fault.
It's just an umbrella term for "weak process glue code".
it will kill all the people in that hospital too
> On January 3, 2022, the jury found Holmes guilty on four of the seven counts related to defrauding investors: three counts of wire fraud, and one of conspiracy to commit wire fraud. She was found not guilty on four counts related to defrauding patients
What do you think the fake Delve attestation scandal was about? https://news.ycombinator.com/item?id=47444319
(Screams in "deployed in 2026 a new product that only works in internet explorer" in healthcare).
Definitely cleaning up other people's AI mess for them for free is not a good use of time.
I think the problem will get worse. I dislike the marketing around AI, but I do think it is a useful tool to help those who have experience move faster. If you are not an expert, AI seems to create a complex solution to whatever it is you were trying to do.
I've been watching non-developers vibe code stuff, and the general failure mode seems to be ignorance of 3-pick-2 tradeoffs.
They'll spam "make it more reliable" or some such, and AI will best-effort add more intermediary redis caches or similar patterns.
But because the vibe coders don't actually know what a redis cache is or how it works, they'll never make the architectural trade-offs to truly fix things.
I often wonder if it’s the statistical nature of the LLM mixed with a request in the prompt.
“ These are highly complicated pieces of equipment… almost as complicated as living organisms.
In some cases, they’ve been designed by other computers.
We don’t know exactly how they work.”
Now how did that work out ;-)
I think it will be needless verbose complexity.
I kind of imagine someone having an unlimited budget of free amazon stuff shipped to their house.
In theory, they are living a prosperous life of plenty.
In reality, they will be drowning in something that isn't prosperity.
The explanation, in turn, can be fed back to recreate the functionality of the original code.
At that point, why care about the code at all? If it works, it works. If it doesn't, tell the model to fix it. You did ask for tests, right?
That is where we're indisputably headed. It's not quite a lossless loop yet, but those who say it won't or can't happen bear a heavy burden of proof.
On one end, you have code that can perform only the behaviour explicitly declared in the spec, but has to be thrown away and rewritten for any new or updated spec.
On the other end, you have code that implements or anticipates a wide range of future possible specs including the given one.
The AI can operate on any point on this spectrum, but it's not very good at choosing. The more complex the software, the more such choices need to be made.
When the number of bad choices reaches a certain critical mass, even a skilled engineer becomes powerless to undo all the bad choices, and even a powerful model becomes unable to reduce it back to a coherent spec.
It is now, and vice versa. Deal with it.
Some people are mindful about what they get and don't get from amazon and don't die from prosperity. ("you might use AI to increase your prosperity")
the rest of the world eats too much and dies of heart disease/diabetes. ("the rest of the world will flounder more and AI will do more stuff to them than for them")
The issues have all been structural, not local. It's easier to treat it like a rewrite using the original as a super detailed product spec. Working on the existing codebase works, but you have to aggressively modularize everything anyway to untangle it rather than attack it from the top down.
All of these projects have gone well, but I haven't run into a case where a feature they thought was implemented isn't possible. That will happen eventually.
It's honestly good, quick work as a contractor. But I do hope they invest in building expertise from that point rather than treating it like a stable base to continue vibecoding on.
The greatest asset in this type of work is genuinely liking people, being good at what you do, and keeping in touch. My email is easily findable for a reason.
Here’s a slightly different future - these AI rescue consultants are bots too, just trained for this purpose.
Plausible?
I have already seen Claude 4.7 handle pretty complex refactors without issues. Scale and correctness aren’t even 1% of the issue they were last year. You just have to get the high-level design right, or explicitly ask it to critique your design before building it.
Do you think people are not giving their agents specs and asking for input?
Commits, design reviews, whitepapers, code reviews, test suites. And, more concerning: chat logs and even keystrokes from employees nowadays.
The way we train specialized bots now is incredibly inefficient, that part is rapidly improving.
That's serious levels of circular thinking right there.
We train humans to do things untrained humans can not do.
- AI Hype
- AI Psychosis
- AI keeps getting better and better until it can work around big AI slop code bases
I instructed it to split it up anyway, yet I wonder how often the concerns around the mess are imagined rather than practical.
The belief in this is a form of AI psychosis, I think.
Maybe in the future but certainly no evidence of this anytime soon
Here's some anecdotal evidence from me - I cleaned up multiple GPT 4.x era vibecoded projects recently with the latest claude model and integrated one of those into a fairly large open source codebase.
This is something AI completely failed at last year.
Maybe you should try something like this or listen to success stories before claiming 'certainly no evidence' in future?
I don't know what happens in a decade when there are no junior engineers, skilled senior engineers are becoming rare, and the only data left to train LLMs on is 200th-generation slop. But AI slop being qualitatively slop is not enough of an obstacle to prevent that future from coming to pass. And billions of dollars will be "saved" along the way.
1) same business logic implemented in two different places, with extra code to sync between them
2) fixing apparently simple bugs results in lots of new code being written
It’s a sign I need to at least temporarily dedicate more effort to overseeing work in that area.
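The first smell is easy to illustrate. A hypothetical sketch (invented names, not from any real project) of the same business rule implemented twice, with glue code whose only job is to paper over the duplication:

```python
# Smell (1): one discount rule, two independently-written copies.
def api_discount(total: float) -> float:
    # The API layer's version of the rule.
    return total * 0.9 if total > 100 else total

def report_discount(total: float) -> float:
    # The reporting layer's version of the same rule, phrased differently.
    if total > 100:
        return total - total * 0.1
    return total

def sync_check(total: float) -> bool:
    # Extra code that exists only to keep the two copies "in sync".
    return abs(api_discount(total) - report_discount(total)) < 1e-9

# The fix is a single shared implementation, not a sync layer:
def discount(total: float) -> float:
    return total * 0.9 if total > 100 else total
```

The day someone changes one copy but not the other, the sync check starts failing in production, and new code gets written to reconcile the two instead of deleting one.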
I somewhat agree with the AI psychosis framing of the OP. It takes some taste and discipline to avoid letting things dissolve into complete slop.
What evidence is there that we're not at or close to a plateau of what LLMs are capable of? How do you know the growth rate from 2023 to present will continue into 2029? eg. Is it more training data? More GPUs? What if we're kind of reaching the limits of those things already?
The (leading) LLMs work by consensus, like Wikipedia, Openstreetmap, web search engine or opensource movement.
What I mean is if I ask LLM "create a linked list", its understanding (of what I want) is already close to the expected ideal. Just like Wikipedia article on linked list, for example.
But the LLMs will continue to improve in breadth and depth of understanding the world, although technically (in what they CAN do) they have probably already peaked. Similarly, the OSS movement technically peaked in the 90s with the creation of a compiler, an operating system, and a database; that doesn't mean new open source isn't being created.
LLMs (or specifically GPT algorithm) are 8 years old. It has matured as a technology. I am not sure how you imagine it being significantly improved, from a user point of view, without some kind of paradigm shift (i.e. something significantly different from GPT or LLM).
Although I can imagine one important social innovation yet to come: a generally available big public LLM that "anybody can train". We had the technology of "encyclopedia" for years (famously Britannica); yet the concept of Wikipedia was a truly new take on the encyclopedia.
Also, new kinds of AI might emerge - for example we might formalize all types of human reasoning and build a reasoning AI, as well a model of human language, from scratch rather by training via GPT (and thus, more understandable and potentially smaller). But that won't be an LLM.
I proposed how. New harness techniques and new training data/techniques, so the harness gets better and the LLM can be trained to work better with the harness. There's no reason to believe we're out of momentum for improvement in that direction.
However, they also make mistakes like humans do. I don't think a better harness or better training will fix that, because fundamentally they cannot read your mind if you put in an ambiguous prompt.
I like to compare the process of turning inexact text into formal language to an error-correcting code. If you haven't made too many mistakes, or have been precise in the specification, it will self-correct and do what you want. But if your input is too ambiguous, it will never do exactly what you want, only something close to it. And people (who are using AI) are still learning where the boundary is and how to tell.
The companies building these models are training them to react to typical expectations. If you have some special need, you will always have to tell the model, otherwise it will not know your exact context. And the harnesses have many tools for that or try to do that automatically already.
I don't see why we would assume that we are at a plateau for RL. In many other settings, Go for instance, RL continues to scale until you reach compute limits. Some things are more easily RL'd than others, but ultimately this largely unlocks data. We are not yet compute/energy/physical world constrained. I think you would start observing clear changes in the world around you before that becomes a true bottleneck. Regardless, currently the vast majority of compute is used for inference not training so the compute overhang is large.
Assuming that we plateau at {insert current moment} seems wishful and I've already had this conversation any number of times on this exact forum at every level of capability [3.5, 4, o1, o3, 4.6/5.5, mythos] from Nov 2022 onwards.
And the answer appears to be that the improvement is accelerating. So how could it be stopping?
https://metr.org/time-horizons/
I don’t think that the current AI paradigm has infinite headroom for improvement, similar to how every other AI approach before it eventually hit a limit.
And the link I posted shows the amount of work a query can do increasing non linearly. You can explore the site for more detail and a graph that shows error rates getting halved every couple of months.
No one said anything about infinite. It doesn't mean we don't have headroom to spare.
Software itself took 80-120 years to get where it is today depending on how you count. Time is on AIs side here.
* A belief that AI will keep getting better, presented without evidence, does not yield a lot of skepticism around these parts.
* Your comment saying it is wrong to believe AI will keep getting better, also presented without evidence, is downvoted.
You have not seen the spreadsheets that accountants run the firm on.
Bloody kids!
Are you sure about this? Yes, there is a stable set, but they are used in all of the wrong places, particularly in places where they don't belong because juniors and now AIs can recite them and want to use them everywhere. That's not even discussing whether the stable set itself is correct or not - it's dubious at this point.
https://www.hypercubic.ai/hopper
But won’t those more complex systems presumably solve more complex problems than the systems that humans could build? Or within a comparable time?
I think it is reasonably safe to assume at this point in the game that these AI systems are increasingly able to reason rigorously about novel problems presented to them, of ever increasing complexity and sophistication.
I exaggerate only a little.
In their current forms, it's unlikely for a product that actually needs to work.
It's not getting that complex and working with current LLMs.
I thought the same when I saw development outsourced to Indians that struggled to write a for loop.
I was wrong.
It turns out that customers will keep doubling down on mistakes until they’re out of funds, and then they’ll hire the cheapest consultants they can find to fix the mess with whatever spare change they can find under the couch cushions.
Source: being called in with a one week time budget to fix a mess built up over years and millions of dollars.
Ultimately, if you want to move fast, it's better just to have one engineer vibe coding something. But that engineer is under so much pressure. Now he's got legacy code, and more legacy code, because the requirements keep changing. And now there's a deadline in four weeks.
This all could work just fine, but the ungodly amount of attention that this world is getting puts too many cooks in the kitchen, which is always a recipe for disaster.
[0] https://news.ycombinator.com/item?id=48037128#48038639
[1] https://en.wikipedia.org/wiki/Peter_principle
(None of above is theoretical)
Imagine the year is 1995, C exists, but some guy out there is working on essentially what modern Python is. He says to you, "Check out this language: you can just import stuff, use it, and dynamically modify anything at run time." You could probably come up with hundreds of arguments about things that could go wrong, like memory cleanup, threading, etc., but it turns out, incrementally, they were all solved, and we have modern Python, which is basically good enough to build these large LLM models.
Now imagine modern programming and computing is what C was back in 1995, and AI use is that guy building the Python code.
Also, Python does not build or run large language models. It orchestrates C code that does that, and it was probably good enough to do that in 1998.
I think you have some serious misunderstanding here.
It doesn't know what mess you want to clean up. A lot of times AI just starts making up new patterns on top of other patterns and having backwards compatibility between the two. How does it know which one you actually like?
Violets are blue
AI is great
And so are you
Wow, it’s true, AI really is set to match human performance on large, complex software systems! ;)
https://www.joelonsoftware.com/2000/04/06/things-you-should-...
A decade ago, I was sitting in on a meeting about a rewrite and, before I could say anything, someone in the first year of her career asked why anyone thought a rewrite would be any cleaner once all the edge cases were handled. Afterwards, I asked her where she learned this. She said "I don't know, it just seems kind of obvious." She went on to be a great engineer and is now a great manager.
Greenfield guy comes in, promises the world, and starts from some first principles white papered architecture. It's really lovely until they onboard the first user. Then they slowly commit all the "sins" (features that drive revenue) of the first system.
The firm is stuck supporting N systems indefinitely because the perfect new system takes so long to cover even 30% of the original system use cases, that management takes a flier on.. bear with me.. a second rewrite. Now they have 3 systems.
I've seen more 3rd systems than I've seen actual decommissioning of original systems into a single clean new system.
The answer is chipping away, modularizing, and replacing piecemeal Ship of Theseus style. But that does not drive big hires and big promotions.
Including all of the above.
Do they??
My team lead has worked on the same software for 30 years. He has the ability to hear me discuss a bug I noticed, and then pinpoint not only the likely culprit, but the exact function that's causing it.
And with one you need to train a guy for 25 years and with the other you need plan mode for a few minutes and then it runs 24/7.
And the equivalent for software. It’s usable, intuitive, responsive, stays up and running, and doesn’t leak my private data.
Then the only "experts" (not even close, just a guy with a form and some technical training) are the building inspectors who come at the end to verify if some stuff is done up to code.
Other than the original architect who drew the plans that got used for many buildings and the electrical engineer who cleared the electrical, no experts were involved. This is basically how the whole city and most of the country was built.
There's no expert mason or painter or whatever involved. Just a dude that can hold a paint roller. That's the same as going from a craftsman programmer to some dude with claude. Individual quality goes down, but more importantly price goes down way more and so many more people get access to much better quality than having nothing.
There is a lot of absurdly complex software that runs with high reliability. We hear a lot about the ones that don’t.
I have really tried as an "old" person in the field to try and pass on the stuff I've learned, but "craft" and such really has absolutely no home in modern dev culture. The people who care about history, the craft, etc. are increasingly rare.
Younger implies cheaper.
maybe some that people said were that bad. but they just needed some elbow grease. remember, it takes guts to be amazing!
It's really nowhere near as complicated as making distributed systems reliable. It's really quite simple: read a fucking book.
Well, actually read a lot of books. And write a lot of software. And read a lot of software. And do your goddamn job, engineer. Be honest about what you know, what you know you don't know, and what you urgently need to find out next.
There is no magic. Hard work is hard. If you don't like it get the fuck out of this profession and find a different one to ruin.
We all need to get a hell of a lot more hostile and unwelcoming towards these lazy assholes.
Scrape off all the soil, put it in casks, and bury it in a concrete bunker for 10000 years. Then relocate everyone and attempt to rebuild.
We didn't create the DNA we rely on to produce food and lumber; we just set up the conditions and hope the process produces something we want instead of deleting all the bananas.
Farming is a fine, honorable, and valuable function for society, but I have no interest in being a farmer. I build things, I don't plant seeds and pray to the gods and hope they grow into something I want.
If the farming situation were as dire as you seem to suggest, we'd have unpredictable famines all the time, but we don't.
Planting is merely setting up the conditions. We didn't write the DNA, and we couldn't write the DNA if we wanted to, because we are an infinity away from understanding all the actual processes that descend from it. And when we utilize the DNA that we simply found and couldn't hope to write, it's always, at best, a case of hoping it goes right again this time.
Even when it works, even if you put in a lot of work and experience and understanding, it still just worked by itself and it's just good luck every time.
You have also guessed incorrectly.
It's a tool; not the second coming.
plot twist: it's Starbuck
I work at a hosting provider that has pretty conservative customers who don't want to host on AWS/Azure due to data privacy / safety concerns, among other things.
For us, sending customer data to the US is a big no-go.
We have been experimenting with LLM usage, first through a Gemini subscription, then also with the Claude API. Participation has been lightly encouraged by management. As for coding, we haven't let the LLMs loose on our core components, but tooling on the fringes (like deployment scripts, reporting) has seen some uptick in LLM usage.
We have also started building an on-premise inference cluster, which is in alpha testing, and where the "don't include customer data" restriction doesn't apply anymore.
This is not a mystery
Management is really pushing AI. It's obnoxious, and their idea of how it fits into my team's job specifically is completely, hilariously detached from reality. On the off chance someone says something reasonable, unless it fits the mold, it's immediately discarded. The mold being "spec driven development". We're not even a product team, for crying out loud. I straight up started skipping these meetings for the sake of my sanity. It's mindwash, and it's genuinely dizzying. The other reason I stopped attending is that it ironically makes me more disinterested in AI, which I consider to be against my personal interests in the long run.
On the flipside, I love using Claude (in moderation). It keeps pulling off several very nice things, some of which Mitchell touched on in this post (the last one):
- I write scripts and automation from time to time; Claude fleshes them out way better with way more safety features, feature flags, and logging than I'd otherwise have capacity to spend time on
- Claude catches missed refactors and preexisting defects, and does a generally solid pass checking for defects as a whole
- Claude routinely helps with doing things I'd basically never be able to justify spending time on. Yesterday, I one-shotted an entire utility application with a GUI to boot, and it worked first try; I was beyond impressed.
- Claude helped me and a colleague do some partisan cross-team investigation in secret. We're migrating <thing> and we were evaluating <differences>. There was a lot of them. Management was in a limbo, unsure what to do, flip-flopping between bad options. In a desperate moment, I figured, hey, we kinda have a thing now for investigating an inhuman amount of stuff in detail - so I've put together a care package for my colleague with all our code, a bunch of context, a capture of all the input data for the past one week, and all the logs generated. Colleague put his team's side of the story next to it, and with the help of Claude, did some extremely nice cross-functional investigation. Over the course of a few weeks, he was able to confirm like a dozen showstopper bugs, many of which would have been absolutely fiendish if not impossible to fix (or even catch) if we went live without knowing about them. One even culminated in a whole-ass solution re-architecturing. We essentially tore down a silo wall with Claude's help in doing this.
So ultimately, it really is a mixed bag, with some really deep low points and some really nice highlights. I also just generally find it weird that a technical tool [category] is being pushed down people's throats with a technical reasoning, but by management. One would think this goes bottom up, or is at least a lot more exploratory. The frenzy is real.
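The "safety features, feature flags, and logging" in the first bullet are the kind of scaffolding that's cheap for an assistant to flesh out but tedious to hand-write. A minimal sketch of what that looks like (hypothetical script and flag names, not the commenter's actual tooling):

```python
import argparse
import logging

# Sketch of script scaffolding: a --dry-run safety flag, a verbosity flag,
# and logging around the destructive step instead of silent side effects.
def build_parser() -> argparse.ArgumentParser:
    parser = argparse.ArgumentParser(description="cleanup script (sketch)")
    parser.add_argument("--dry-run", action="store_true",
                        help="log what would be deleted without deleting")
    parser.add_argument("--verbose", action="store_true",
                        help="enable debug-level logging")
    return parser

def run(args: argparse.Namespace, targets: list) -> list:
    logging.basicConfig(level=logging.DEBUG if args.verbose else logging.INFO)
    deleted = []
    for target in targets:
        if args.dry_run:
            logging.info("dry-run: would delete %s", target)
        else:
            logging.info("deleting %s", target)
            deleted.append(target)  # the real deletion would happen here
    return deleted

if __name__ == "__main__":
    run(build_parser().parse_args(), ["tmp/a", "tmp/b"])
```

None of this is hard, but it's exactly the part that gets skipped when you only have an hour for the script, and it's where the "way more safety features than I'd have capacity for" payoff shows up.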
Well, now you must work with a confusing tool which slows you down. You are not allowed to use Claude directly anymore, because someone heard that mythos is really bad for security. But hey, the tool integrates well with Jira!
You hate every second working with this thing. All the joy you had with explorative coding is forever gone, which was the sole reason you entered this field.
Deep inside you know that you can't change your job, because every other employer will cut its workforce as AI removes all manual labor of a software engineer and reduces risk to a minimum.
Oh, now we can finally move all those jobs to india without risk and shareholders will love it! How awesome is that! Wait, do we still need that guy in cubicle 42, who bitches and moans about AI every day? Nah...
Show HN here: https://news.ycombinator.com/item?id=48151287
I'm afraid to say this out loud internally because I'm afraid of the next round of layoffs and I want to keep my job. So I just keep on shipping at a high pace, building massive cognitive debt and hoping the agents will get so good in near future, that there won't be the need for understanding the codebase.
Agents might get better. But who will own the code and take responsibility for it? The AI agent? The company who created the AI agent?
If e.g. a car crashes and does not deploy its airbags because the AI agent made a mistake in the airbag code, will the manufacturer be able to shift the blame to OpenAI or Anthropic?
I do not think so.
And therefore I believe that no matter how good the AI agents will ever become, the ultimate responsibility for the code will always remain with the companies that create the code. Regardless of which AI tools they use.
I see no other way to bear that responsibility by the company than to have people internally who will be responsible. And those people, if they actually want to own that responsibility, would need to understand that code themselves, in my opinion. Because relying on a non-deterministic AI agent's vetting is fundamentally unreliable, in my opinion.
I think it was just text templates being used by some support staff.
And we have not even gotten into potential adversarial tactics. If you have no morals, what is better than using agents to flood your competitor with fake bug reports?
I guess what I relate to the most is how dismissive people get about real software engineering work.
I may have skill issues, but I have yet to reach the level of autonomous engineering people tend to expect out of AI these days.
I use AI coding tools every day, but AI tools have no concept of the future.
The selfish thinking an engineer has ("If this breaks in prod, I won't be able to fix it, and they'll page me at 3AM") is what we've relied on to build stable systems.
The general laziness of looking for a perfect library on CPAN so that I don't have to do this work (often taking longer to not find a library than writing it by hand).
I have written thousands of lines of code with AI tools which ended up in prod, and mostly it feels natural, because since 2017 I've been telling people to write code instead of typing it all on my own, and setting up pitfalls to catch bad code in testing.
But one thing it doesn't do is "write less code"[1].
[1] - https://xcancel.com/t3rmin4t0r/status/2019277780517781522/
Maybe it's just my prompt or something but my coding agent (Opus 4.7 based) says things like "this is the kind of thing that will blow up at 2am six months from now" all the time.
Even before LLMs generating entire programs, complex frameworks allowed developers to write the initial versions of programs very quickly, but at the cost of being hard to understand and thus hard to debug or modify.
Some of us are betting that the AIs will always be smart enough to debug, maintain and modify the programs written by AI, no matter how convoluted or complex. I’m not so sure.
What's the historical context for this MTBF vs. MTTR reckoning?
If you optimize for MTTR, you don't care how often you go down and instead optimize your recovery time to be as short as possible.
The concepts predate computing.
Recently (and by recently I mean the last 4-5 years) they only cared about MTTR. That was probably the only metric they measured and cared about. When a system went down it fired an LSI (“Live Site Incident”, as opposed to a CRI, “Customer Reported Incident”). At that point you grilled your team. Eventually you come to the conclusion that an LSI should only be measured by MTTR: MTBF is meaningless because MTBF limits your “ship new features” velocity.
You might scoff at GitHub and the “ship a new feature” concept in the last 5 years, but if you're an enterprise customer you'd know how much nonsense they shoveled out in that time. Absolute insanity of “what the fuck” type features, because customer X who is paying $$$ is asking for them.
MTTR = optimize the ability to correct failures when they occur.
He's describing leaders who believe quality no longer matters because any faults or deviations can be corrected so quickly that it doesn't make any sense to waste time on quality.
- What alerts are we missing that could have helped us catch that earlier?
- What dashboards could we have had to help diagnose the issue quicker?
- What Ops tools could we have had to help mitigate such issue quicker?
- What extra logging/metrics/telemetry could we add to help us catch this quicker?
- What “safe deployment practices” could we have employed to avoid/improve this?
- What processes could we enforce to facilitate all of that?
Rinse and repeat that a few hundred or thousand times while monitoring the MTTR KPI and you will see that number improve. Most likely through your team “gaming it”.
MTBF is much, much trickier to measure or “manage out”. It's about “excellence in engineering”, which is neither measurable nor controllable. You want a random feature X. Your team tells you it's really not how the system works, and they want a few months to make the change slowly while observing the system. But you don't want just X, you want X, Y, Z, W, V, Q, A, B, C, D, all the way through AAZZW12. So you tell the team to go fuck itself.
John Allspaw (previously CTO at Etsy) has written about this: https://www.kitchensoap.com/2010/11/07/mttr-mtbf-for-most-ty...
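For concreteness, both metrics fall out of the same incident log. A minimal sketch (illustrative Python with made-up timestamps, not from any of the posts above): MTTR averages the repair durations, while MTBF averages the uptime gaps between incidents.

```python
from datetime import datetime, timedelta

# Hypothetical incident log: (failure_start, recovery_time) pairs.
incidents = [
    (datetime(2024, 1, 3, 2, 0),   datetime(2024, 1, 3, 2, 30)),
    (datetime(2024, 1, 10, 14, 0), datetime(2024, 1, 10, 14, 10)),
    (datetime(2024, 1, 24, 9, 0),  datetime(2024, 1, 24, 9, 50)),
]

# MTTR: average time from failure to recovery.
mttr = sum((end - start for start, end in incidents), timedelta()) / len(incidents)

# MTBF: average uptime between the end of one incident and the start of the next.
gaps = [incidents[i + 1][0] - incidents[i][1] for i in range(len(incidents) - 1)]
mtbf = sum(gaps, timedelta()) / len(gaps)

print(f"MTTR: {mttr}")  # 0:30:00 for this made-up log
print(f"MTBF: {mtbf}")
```

Note how easy MTTR is to move (ship faster rollbacks) compared to MTBF, which only improves if fewer incidents happen at all, which is the asymmetry the comments above are arguing about.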
I really do worry - I especially worry about security. You thought supply chain security management was an impossible task with NPM? Let me introduce you to AI - you can look forward to the days of AI poisoning, where AIs will infiltrate, exfiltrate, or just destroy, and there's no way of stopping it because you cannot examine the internals of the system.
AI has turbo charged people's lax attitude to security.
God help us.
Some time down the line, I discover CPU being maxed out, which is showing up in degraded performance in other parts of the system. I investigate, and I trace the issue to a boneheaded busy loop in this library that no human with the domain expertise to implement the library would have written. Turns out I'd missed one deeply-buried mention in the README that maintenance was being done via AI now, and basically the whole library had been rewritten from the ground up from the reliable tool it used to be to a vibecoded imitation.
Yeah, yeah, sure, bad libraries existed before all this. But there used to be signals you picked up on to filter the gold from the dreck. Those signals don't work anymore.
I am watching a 10 person company try to run 3 different AI initiatives in parallel. Everyone wants to be "the guy" on this one. I cannot imagine there will ever be a bigger opportunity to ego trip as a technology person. This is it. This is the last call before it's all over. There are many businesses out there that are beyond traumatized by human developers taking them on bad rides. The microsecond they think this stuff will work they are going to fire everyone.
The psychosis comes from the tension here. We effectively have The Empire vs the rebel alliance now. I know how the movies go, but in real life I think I'd rather be working on the Death Star than anywhere else.
They're also reportedly now giving staff AI-related "homework" in an attempt to force staff to use AI more.
- Suggesting improvements to the code after finishing the task you gave it; very irritating when the improvements were obvious and the AI didn't implement them on its own
- Not trying very hard when implementing something, leading to bugs, which leads to more tokens used (this behavior can be incentivized and learned with RL)
Since it's known whether a user continues a session after the LLM says something, it's not hard to train against this. The least efficient way to do this would be to run GRPO directly against the user base and try to get as many people talking to the AI as possible, and with OAI having a billion monthly active users, even the least efficient method would work really well for them.
Sure, there are industry-changing things going on. But what if you're working on an app that's a decade old and has had different teams of people, styles, and frameworks (thanks to the JS-framework-a-week Resume Driven Development)? Some markdown docs and a loop of agents aren't going to help when humans have trouble understanding what the app does.
Maybe the problem is you, but you won't figure that out if you think the other person has psychosis.
For example, maybe you need to do a better job explaining, changing your language, simplifying things, being more concrete with consequences.
Or maybe you aren't understanding that the other person has different objectives/ loss function that makes them make seemingly weird conclusions.
It seems like he is pointing out that AI will increase the complexity of a system to oblivion, and that this is the discussion that cannot be had.
But I am more than happy to talk about how I am using AI to reduce complexity and remove architectural debt that I otherwise could not justify spending time on.
The question is: Will we live in the world of breathless re-implementation, new features every week, rebranding every quarter or will we eventually discover the value of stability, software that does its thing more or less optimally for decades?
Recent examples of things like curl or Firefox are interesting in that regard. Will we end up with a nearly perfect HTTP user agent and stick with it for decades?
Sounds like we prefer stability for stuff we use but not for stuff we sell.
Rewriting in Rust does make things faster, but if an algorithm is O(n²), the improvement won't take us much farther.
Similarly with AI: if complexity is not structurally addressed, the velocity gains are but temporary.
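To make the asymptotics concrete, here is a toy comparison (illustrative Python; the function names are made up). Both functions deduplicate a list while preserving order, but only the second changes the complexity class; a faster language would shrink the constant on the first, not its curve.

```python
import time

def dedupe_quadratic(items):
    """O(n^2): list membership scans the output list on every iteration."""
    out = []
    for x in items:
        if x not in out:  # linear scan of `out`
            out.append(x)
    return out

def dedupe_linear(items):
    """O(n): set membership is amortized O(1)."""
    seen, out = set(), []
    for x in items:
        if x not in seen:
            seen.add(x)
            out.append(x)
    return out

# Same result, wildly different scaling as the input grows.
data = list(range(3_000)) * 2
for fn in (dedupe_quadratic, dedupe_linear):
    t0 = time.perf_counter()
    fn(data)
    print(f"{fn.__name__}: {time.perf_counter() - t0:.3f}s")
```

Bump `3_000` up an order of magnitude and the quadratic version becomes unusable while the linear one barely notices, which is the "structurally addressed" distinction in a nutshell.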
I already took a couple of decisions. It will go wrong or well. But it was decided a year and a bit ago.
If you think the future will be different, stop doing the same you used to do the same way you used to do it.
My analysis is that the labour market will increasingly bargain salaries down and put pressure on you. So how safe is that compared to before? Maybe working for someone as a full-time employee is not the best thing you can do anymore.
Does using AI increase or lower that failure rate?
Does seeing a project that uses AI fail mean it wasn't going to fail if it didn't use AI?
To try to answer it with my gut: I imagine that we could see more projects failing, but the percentage that fail would be the same. Most projects that use AI will fail because most projects generally will fail, but the time and cost to get a successful project will lower.
at least at my BigCo, AI is being used for everything - writing slop, writing tests, code reviews, etc.
it would make sense to use AI for writing code, but human code review. or, human code, but AI test cases... or whatever combination of cross-checking, trust-but-verify, human in the loop, etc. people prefer.
i think once it gets used for everything, people have lost the plot, it's the inmates running the asylum.
"What's true about all bugs in production? (pause for dramatic effect) They all passed the tests!" (well, he said typechecker but I think the point stands)
Calling this "psychosis" is maybe a neologism but it's apt in perspective.
All that's actually new with "AI psychosis" is an acceleration of that phenomenon. The agents will summarize status faster than any middle manager. Claude will happily draw you any "up-and-to-the-right" graph you please, with the most common contemporary examples being "tokens burned" and "lines of code written". And vibe coding doesn't even require paying the cost of a mass layoff to get the "familiarity debt".
There have always been both good and bad engineering leaders. No tool will magically make a bad leader into a good leader overnight. There is nothing new under the sun.
I think the use of the word here is meant to invoke the vision of someone under heavy delusions or hallucinations, such as (what Hashimoto perceives as) the delusion that shipping more bugs is fine if AI can resolve them faster. To what extent this counts as delusion (and thereby psychosis) would depend on how deeply you believe that this and related opinions are wrong.
“very resilient catastrophe machine”
It is definitely factual that there is a complete paradigm shift in the prioritization of quality in software. It's beyond just AI side effects, and is now its own standalone thing.
There have always been many industries, companies, and products who are low on quality scale but so cheap that it makes good business sense, both for the producer and the consumer.
Definitely many companies are explicitly choosing this business strategy. There are definitely also many companies that don't actually realize they are implicitly doing this.
Whether the market will accept the new software quality paradigm or not remains an open question.
Never mind code, what happens when the CEOs, or the investors, listen to the sycophantic voices of their LLMs?
I think it looks like every product becomes the next Juicero of its field.
Hmm, I agree with the point OP is making, but I'm not so sure this is the best supporting argument. The bottleneck is finding the bugs and if he'd criticized people saying AI will be the panacea to that I'd be with him, but people saying agents are fast and good at fixing human found bugs is nothing I'd object to.
Agents are fixing bugs so quickly and at a scale humans can't do already.
The metric is how many defects are introduced per defect fixed. Being fast is bad if this ratio is above one.
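A toy model of that ratio (assumed numbers, purely illustrative): suppose each fix cycle resolves one defect but introduces r new ones on average. The backlog drains only when r < 1; above 1, faster fixing just grows it faster.

```python
# Toy model: each "fix cycle" resolves one defect but introduces r new
# defects on average. If r > 1, fast fixing grows the backlog.
def backlog_after(initial, r, cycles):
    defects = float(initial)
    for _ in range(cycles):
        defects = defects - 1 + r  # fix one, introduce r
        if defects <= 0:
            return 0.0
    return defects

print(backlog_after(10, 0.5, 30))  # ratio < 1: backlog drains to zero
print(backlog_after(10, 1.2, 30))  # ratio > 1: backlog grows despite fast fixes
```

The speed of each cycle never appears in the model; it only determines how quickly you arrive at whichever outcome the ratio already dictates.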
The fact that we can fix things faster now doesn't mean that we should throw away caution and prevention. The specific point of his tweet is that we're seeing a lot of people starting to skip proper release engineering.
Agents are quick to fix bugs, yes, but it doesn't mean that users will tolerate software that gets completely broken after each new feature is introduced and takes a certain number of days to heal each time.
This is an illusion, I assure you. On a side project of mine with behavior that's very hard to translate into an algorithm (never mind code), after a few failed attempts between the both of us, I figured it out. I gave the AI (Opus) an extremely specific algorithm with detailed tests. All completely and utterly ignored (including the tests), like I never even said it. It proudly declared the work done without ever having written the tests that would have proved that wrong - it basically wrote code that didn't change behavior at all, it just gave the illusion of looking busy.
That's just a single extreme example that comes to mind, but I've had it ignore me at least 4-5 times a day this week.
If you think agents are fixing things reliably then you simply haven't noticed that they are "looking busy."
Please don't sneer, including at the rest of the community.
Eschew flamebait.
https://news.ycombinator.com/newsguidelines.html
So the point is not that agents cannot find bugs (they certainly can), it's whether you can shirk reviewing for bugs if MTTR is fast enough. There are circumstances where YOLO is appropriate, but they aren't the production environment of a mature application.
What I wanted to say is that the particular people that think "its fine to ship bugs because the agents will fix them so quickly and at a scale humans can't do!" are not the best argument for it.
But I won't die on this hill, maybe I'm just reading the sentence differently than others.
But this is just holding the Slop Companies to the standard they declared themselves! Just recently, the CEO of OpenAI babbled some nonsense on twitter about how he hands over tasks to Codex who according to him, finishes them flawlessly while he is playing with his kid outside.
> but soon we will be.
Ah yes, in 3-6 months, right? This time next year, Rodney, we'll be millionaires!
Eventually the companies that can't cope with undisciplined engineering will succumb to unacceptable reliability and be outcompeted, just like in the "move fast and break things" era.
Can someone please remind and refresh my memory what this whole debate was with what arguments?
Changing this focus is not easy but one thing that will usually do the trick is economic issues.
In other words; in order to get any serious consideration, something has to be broken.
AI is perfectly capable of doing this given enough time.
i don't have enough fingers (and toes) to count how many times i've demonstrated that "100% coverage" is almost universally bullshit.
Actually no, cancel that. I realise now that I trust AIs more than the average developer, period. At this point they do produce better code than most people I've dealt with.
and we all live in a green utopia of flying cars and peace upon the world.
I know which outcome I'd put my money on.
I don’t agree, but that’s the thinking
The AI tool isn’t wrong, our use of it is. See the glut of OpenClaw users effectively deploying it as a glorified linter and Stack Overflow copier but without actually creating the sort of reusable artifacts (or consumer spending from comparatively high wages) that approach yielded from human developers.
...and it also needs more so-called AI companies present in the wreckage in this crash.
AI psychosis is undeniably real.
At the end of the day robots can do the vast vast majority of jobs better and faster. If not now, very soon.
I only worry our economic systems won’t keep up
But I only see mass layoffs and those who are working - are working longer and harder then before.
Religion is the sigh of the oppressed creature, the heart of a heartless world, and the soul of soulless conditions.
It is the opium of the people.”
Some are on copium, some on hopium. The gods change names; the need for relief remains.
I cautioned them that this is a terrible idea -- you have business people who don't know what they're talking about, and all they know is "if we don't 'do AI' we'll be left behind because our competitors are 'doing AI'" (whatever tf "doing AI" means).
Yes, LLMs are a great tool. But they're not like some magic bullet you stick into everything. Use it where it makes sense, and treat it like you would other tools.
You make "doing AI" some kind of KPI in your org, and you're going to have people "doing AI" amazingly (LOC counts! tokens burned! tickets cleared!) while not actually being more productive, and potentially building something that is going to come down on your head for the next team to "clean up the AI mess".
I don't think it's super clear what we'll find out.
We've all built the moat of our careers out of our expertise.
It is also very possible that expertise will be rendered significantly less valuable as the models improve.
Nobody ever cared what the code looked like. They only ever cared if it solved their problem and it was bug free. Maybe everything falls apart, or maybe AI agents ship code that's good enough.
Given the state of the industry we're clearly going to find out one way or the other, hah!
I think some companies will find out that their senior engineers were providing more value and software stability than they gave them credit for!
Corporate feedback loops are very slow though, partly because management don't like to admit mistakes, and partly because of false success reporting up the chain. I'd not be surprised if it takes 5 years or more before there is any recognition of harm being done by AI, and quiet reversion to practices that worked better.
If you're not doing AI there's an incredibly limited pool of people who will give you $$$ ... and you're competing with EVERY OTHER NON-AI COMPANY for their attention.
“It feels like entire companies are deluded into thinking they don’t need me, but they still need me. Help!”
The broad sentiment across statements of this “AI psychosis” type is clear, but I think the baseline reality is simpler. How can you be so certain it's psychosis if you don't know what will unfold? Might reaching for the premature certainty of making others wrong, satisfying as it might be to the ego, be simply a way to compensate for the challenges of a changing work environment, and a substitute for actually considering the practical ways you could adapt to it? Might it not be more helpful and profitable to consider “how can I build windmills, ride this wave, and adapt to the changing market under this revolution” than soothing myself with the delusion that all these companies think they don't need me now, but they'll be sorry?
The developer role is changing, but it doesn't have to be an existential crisis, even though it may feel that way. It will probably feel more that way the more you remain stuck in old patterns; over-certainty about how things are doesn't help (though it may feel good). This is the time to be observant and curious and get ready to update your perspective.
You may hide from this broad take (that AI psychosis statements are cope) by retreating into specific nuance: “I didn't mean it that way, you're wrong. This is still valid.” But the vocabulary betrays motive. Resorting to clinical, derogatory language like “AI psychosis” immediately invokes a “superior expert judgment” frame, and in the zeitgeist context this is a big tell. It signifies a need to be right, and a deeply defensive pose rather than a clear assay of what's real in a rapidly changing world. The anxiety driving the language speaks far louder than any technical pedantry used to justify it, and is the most important and, IMO, most profitable thing to address.
You should not release a product into the market unless you have a good enough product that can keep you and your client compliant, safe and secure - including not leaking their customer info all over the place.
Prompt injection risk, etc. are massive for agentic AI without deterministic guardrails that actually work in practice.
Stop testing in production if you're shipping in a regulated industry. Ridic!
If you're not technical, you can get someone who is after signs of p-m fit, demos, but BEFORE deployment. This is common sense and best practices but startup bros dgaf because they're just good at sales and marketing & short term greedy.
Comical.
At the end of the day, we can only read so much and take on so much work before we bottleneck ourselves. Cognitive overload leads to burnout. Rumplestiltskin vibes with this AI stuff…
You first use the full words and then introduce the acronym that you're going to use in the rest of the text: "Mean Time Between Failures (MTBF) vs. Mean Time to Recovery (MTTR)".
With the latter, readers understand the term immediately, even if they don’t know the acronym. And they don't have to read these weird letters before getting the explanation.
Thankfully most of those things are a very small percent of my overall work.
If it's a big percent of your work -> you are in trouble, friend.
What's more, the only people they talk to about it are others at the same company. There is no external touchstone. There are power dynamics from hierarchy. No new ideas other than what is generated within the company. In other circumstances, this is a textbook environment for radicalization.
I would encourage all leadership to take a deep breath. You have time to think slow.
But in reality, anyone who knows their field and is going after a specific issue will soon find that AI is nothing but an assistant. Sure, it can help and automate some stuff, but that's it; you need to keep it leashed and laser-focused on that specific issue. I personally tried all the high-end ones, and I found a common theme: they are designed to find a solution or an answer no matter what, even if that solution is a workaround built on top of workarounds. It's like welding all sorts of connections between A and B, resulting in a fractal structure rather than just finding a straight path. If you keep it going and flowing on its own, the results are convoluted and way overcomplicated, and not the good kind of complexity, the bad kind.
In all seriousness...well, yeah. AI is a monkey's paw, and that's how monkey paws work. So many movies and books warned us!
The only reason it worked has been expansionary monetary policy and a larger share of the cost of goods being dumped into marketing value while manufacturing costs dropped abroad. So no one bothered to check.
There’s this delusion that if we somehow write enough tests that we’ll expunge every defect from software. It’s like everyone forgets that the halting problem exists.
Let them.
Many people on this forum are suffering under this same psychosis.
Worth also noting is that while there is plenty to criticize about AI use — especially any cultish behavior surrounding it — and plenty of naïveté about the quality of its results, there is also a strain of categorical opposition to it among some tech people that is equally off and that has all the hallmarks of the chickens coming home to roost.
For years, many in tech gladly “automated away” all sorts of jobs. Large salaries were showered on them for doing so, or at least promising to do so (there was and is plenty of bullshit here, too). Now, AI appears to threaten to derail the tech gravy train, especially for SWE work that’s run-of-the-mill (which is most of it). Now automation is bad. It’s a delicious juxtaposition.
I cannot deny the impact of AI for my daily tasks at this point.
But I just don't enjoy the field anymore. With increased productivity, also coming from my stellar coworkers, it feels like we're rat racing who outputs more.
The quality is good, and having very strong rails at language and implementation level, strong hygiene, etc helps tremendously.
But the reality is that the pace of the product vastly outpaces the pace at which I can absorb its changes (I'm also in a very complex business-logic field), and the same might be true of my understanding of the systems, which are changing too fast for me to keep up.
I have felt mentally fatigued for a long time. I don't enjoy coding anymore, bar the occasional relaxing personal project where I can spend the time I want without pressure on architectural or implementation details.
I'm increasingly thinking of changing field, this one is dying right under our eyes.
I often read comments from HN users still digging into technical details at their workplace, or rewriting AI code to their liking.
I'm increasingly sure that these people live in happy bubbles where this luxury still exists. But this methodology of work is disappearing across the industry, team by team.
Of course SE will not disappear over night, but the productivity expectations, the complexity ballooning are raising the bar where only incredibly skilled and productive engineers will be still able to practice SE properly, and as long as they meet stakeholders expectations or keep living in those bubbles.
I'm trying so hard to pivot away because of this.
I am very close to using it as a pair programmer, but with me actually coding. I am just so tired of fixing its mistakes.
Probably from the EU because they seem to be the sane ones of this generation.
Have you ever been in an HN thread where you're an SME on the thread topic and just been horrified by the confidently incorrect nonsense 90% of the thread is throwing around? Welcome to the training set motherfuckers.
LLMs do the same thing for what should be obvious reasons. If you search things that have some depth and you know the answer you'll be flooded by how often the models will just vomit confident half truths and misrepresented facts. They're better than they used to be, not just lying whole cloth most of the time, but truth is an asymptotic thing, not an exponential one.
The groundwork for that was laid long ago with the idea of constant updates. It's been fine for years to ship bugs and rely on a rapid release cycle and constant pressure on users to upgrade everything all the time. To roll that back requires a lot more than toning down AI psychosis; it requires going back to a go-slow mindset where you actually don't release things until they're ready. It still needs to be done, but it's harder than just laying off the AI kool-aid.
AI coding swept over the software industry faster than most previous trends. OOP and its predecessor "structured programming" took a lot longer. Agile and XP got traction fairly quickly but still took longer than AI -- and met with much of the same kind of resistance and dire predictions of slop and incompetence.
AI tools have led to two parallel delusions: The one Mitchell Hashimoto describes, and the notion that we (programmers) knew how to produce solid, reliable, useful, maintainable code before AI slop came along. As always with tools that give newbs, juniors, managers some leverage (real or imagined) we -- programmers -- get upset and react to the threat with dire warnings. We talk about "technical debt" and "maintainability" and "scalability."
In fact the large majority of non-trivial software projects fail to even meet requirements, much less deliver maintainable code with no tech debt. Most programmers don't know how to write good code for any measure of "good." Our entire industry looks more like a decades-long study of the Dunning-Kruger effect than a rigorous engineering discipline. If we knew how to write reliable code with no tech debt we could teach that to LLMs, but instead we reliably get back the same kind of mediocre code the LLMs trained on (ours), only the LLMs piece it together faster than we can.
With 50 years in the business behind me, and several years of mocking and dismissing AI coding whenever someone brought it up, I got dragged into it by my employer. And then I saw that with guidance and a critical eye, reasonably good specs, guardrails, it performed just as well and sometimes more thoroughly than me and almost all of the people I have worked with during my career. It writes better code and notices mistakes, regressions, edge cases better than I can (at least in any reasonable amount of time).
AI coding tools only have to perform better -- for whatever that means to an organization -- than the median programmers. If we set the bar at "perfect" they of course fail, but so do we. We always have. Right now almost all of the buggy, insecure, ugly, confusing software I use came from teams of human programmers who didn't use AI. That will quickly change and I can blame the bugs and crashes and data losses and downtime on AI, we all can, but let's not pretend we're really losing ground with these tools or that we could all, as an industry, do better than the LLMs, because all experience shows that we can't.
seems like it's working ideally to me!
the top reply is from someone doing exactly that, arguing "but the agents are so fast!"
Maybe they're assuming that doubling the code-base/features is more beneficial versus the damage from doubling the number of bugs... Well, at least for this quarter's news to investors...
The answer I got is "It's game theory. Someone will do it, and you'll be forced to do it, too. It can't be that bad".
I mean, yes, logic is useful, but ignorance of risks? Assuming that moving blazingly fast and pulverizing things will result in good eventually?
This AI thing is not progressing well. I don't like this.
Let's say I'm polar opposite of them, and we're on the same page with you.
The whole "you'll be forced to do it" comes from the alternative being that you lose. You no longer get to be a player in the "game". In the same way that coopers and cobblers are no longer a significant thing, but we still have barrels and we still have shoes. Software engineers who refuse to employ any LLMs won't be market competitive. If you adopt it, you at least get to remain playing the game until the game changes/corrects. That's the part that's "not so bad".
Choosing your own survival isn't ethically bankrupt.
Oof. Potential "bad" outcomes of "game theory" should be calibrated to include all the bloody wars and genocides throughout recorded history.
Why did the Foi-ites kill every man, woman and child of the conquered Bar-ite city? Because if they didn't, then they'd be at a disadvantage if the Bar-ites didn't reciprocate in the cities they conquered...
The problem was not him, but the number of people who think like him. They may word it in a more benign form, but the idea is the same.
So obsessed with being the first mover and winning the battle, never thinking whether they should, or what would happen with that scenario.
Missing the whole forest and beyond for a single branch of a single tree.
You'll be forced to do it, or lose. The unstated assumptions are that, first, it will work, and second, that you can't afford to lose. But let's just assume those for the sake of argument.
> It can't be that bad
That does not follow at all. It can in fact be that bad. That was what made the game theory of MAD different from the game theory of most other things.
Thanks. :)
i don't think it's 'our side' that has the psychosis.
A lot of companies have been under AI psychosis for years and will be forever.
A feature of psychosis is being unable to distinguish between external ideas and internal ones. For example, if a brown-nosing Yes-Man machine keeps reflecting your own leading questions back at you, laundering them into "independent" wisdom.
In contrast, I'm pretty sure COVID and the invasion of Ukraine are actual external phenomena that affect businesses and economies.
And also, he might not be right. But the good news is, we’ll all get to find out together!
https://psychiatryonline.org/doi/10.1176/appi.pn.2025.10.10....
Sorry, I don't buy your argument
But equally, like, do people need Terraform if they can just tell codex “put it live”, and does that hurt to see?
It all just feels like horse drawn carriage operators trying to convince automobile drivers to stop driving.
That was one doctor raising that as an issue, which was dispelled very quickly. It was not a widespread belief at any one point. Let's not bullshit ourselves and insult our own intelligence - the chatbots != intelligence.
Looking back and considering a technology or specific decision obvious is pretty dismissive of people at the time, who didn't have the benefit of hindsight. Some things that worked could really have turned out disastrous, and things that didn't were real possibilities with no way to assess the outcome without doing it.
And concerning the introduction of AI happening right now, which absolutely is disruptive, that judgement will be made by future historians. Whether it's actual intelligence or just nice math (or both of our opinions on that question) doesn't really matter if it causes big changes.
I'm not sure that's true. We've actually seen several open source projects that were vibe coded literally fold up and disappear because they ran into issues that the AI couldn't solve and no one understood them well enough to solve.
There's a reason openai/anthropic and friends are hiring shitloads of software engineers. You still need people who can understand and fix things when the AI goes off the rails, which happens way more often than any of those companies would like to admit. Sure, "fixing things" often involves having the AI correct itself, but you still have to understand the system well enough to know how/when to do that.
The direct analogy to automobiles would be for each automobile to be a one-off design filled with bad and bizarre decisions, excessively redundant parts, insane routing of wires, lines, ducts, etc., generally poor serviceability, and so on. IMO the big question going forward is whether the consistent availability of LLMs can render these kinds of post-delivery issues moot (they will reliably [catch and] fix problems in the software they wrote before any real damage is caused), or whether human reliance on LLMs and abdication of understanding will just make software worse, because LLMs' ability to fix their own mistakes, and the consequences thereof, generally breaks down in the same contexts/complexities where they made those mistakes in the first place.
My own observations are that moderately complex software written in the mode of "vibe coding" or "agentic engineering" tends to regress to barely-functional dogshit as features are piled on, and that once this state is reached, the teams behind it are unable to, or perhaps simply uninterested in, unfuck[ing] it. I have stopped using software that has gone down this path, not because I have some philosophical objection to it, but because it has become _literally unusable_. But you will certainly not catch me claiming to know what the future holds.
In any case, this is what blue-green deployments and gradual rollouts are for. With basic software engineering processes, you can make your end user experience pretty much bulletproof. Just pay EXTRA attention when touching DNS, network config (for core systems) and database migrations.
Distributed systems are a bit more tricky but k8s and the likes have pretty solid release mechanisms built-in. You are still doomed if your CDN provider goes down. You just have to draw a line somewhere and face the reality head on (for X cost per year this is the level of redundancy we get, but it won’t save us from Y).
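To make the "pretty solid release mechanisms built-in" concrete: a minimal sketch of a Kubernetes Deployment using the rolling-update strategy (the service name, image, replica count, and health endpoint here are all placeholder assumptions, not anything from the thread):

```yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: reporting-tool          # hypothetical service name
spec:
  replicas: 4
  strategy:
    type: RollingUpdate
    rollingUpdate:
      maxUnavailable: 1         # never take more than one pod down at a time
      maxSurge: 1               # allow one extra pod during the rollout
  selector:
    matchLabels:
      app: reporting-tool
  template:
    metadata:
      labels:
        app: reporting-tool
    spec:
      containers:
        - name: app
          image: registry.example.com/reporting-tool:v2   # placeholder image
          readinessProbe:       # gate traffic until the new pod is healthy
            httpGet:
              path: /healthz    # assumed health endpoint
              port: 8080
```

With a working readiness probe, a bad release simply never receives traffic and the rollout stalls instead of taking users down; `kubectl rollout undo` then gets you back to the previous revision. None of this helps with the CDN-outage case mentioned above, which is exactly the "draw a line somewhere" point.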
The one thing I hadn’t mentioned - one I AM worried about - is security! I’ve been worried about it from before Mythos (basic prompt injection) and with more powerful models now team offence is stronger than ever.