In the end, I think the dream underneath this dream is about being able to manifest things into reality without having to get into the details.
The details are what stop it from working in every form it's been tried.
You cannot escape the details. You must engage with them and solve them directly, meticulously. It's messy, it's extremely complicated and it's just plain hard.
There is no level of abstraction that saves you from this, because the last level is simply things happening in the world in the way you want them to, and it's really really complicated to engineer that to happen.
I think this is evident by looking at the extreme case. There are plenty of companies with software engineers who truly can turn instructions articulated in plain language into software. But you see lots of these not being successful for the simple reason that those providing the instructions are not sufficiently engaged with the detail, or have the detail wrong. Conversely, for the most successful companies the opposite is true.
It's a cliché that the first 90% of a software project takes 90% of the time and the last 10% also takes 90% of the time, but it's a cliché because it's true. So we've managed to invent a giant plausibility engine that automates the 90% of the process people enjoy, leaving just the 90% that people universally hate.
This rings true and reminds me of the classic blog post “Reality Has A Surprising Amount Of Detail”[0] that occasionally gets reposted here.
Going back and forth on the detail in requirements and mapping it to the details of technical implementation (and then dealing with the endless emergent details of actually running the thing in production on real hardware on the real internet with real messy users actually using it) is 90% of what’s hard about professional software engineering.
It’s also what separates professional engineering from things like the toy leetcode problems on a whiteboard that many of us love to hate. Those are hard in a different way, but LLMs can do them on their own better than humans now. Not so for the other stuff.
Every time we make progress, complexity increases and it becomes more difficult to make progress. I'm not sure why this is surprising to many. We always do things to "good enough", not to perfection. Not that perfection even exists... "Good enough" means we tabled some things and triaged, addressing the most important things. But now, to improve, those little things need to be addressed.
This repeats over and over. There are no big problems, there are only a bunch of little problems that accumulate. As engineers, scientists, researchers, etc our literal job is to break down problems into many smaller problems and then solve them one at a time. And again, we only solve them to the good enough level, as perfection doesn't exist. The problems we solve never were a single problem, but many many smaller ones.
I think the problem is we want to avoid depth. It's difficult! It's frustrating. It would be great if depth were never needed. But everything is simple until you actually have to deal with it.
"(...) maybe growing vegetables or using a Haskell package for the first time, and being frustrated by how many annoying snags there were." Haha this is funny. Interesting reading.
I once wrote software that had to manage the traffic coming into a major shipping terminal: OCR, gate arms, signage, cameras for inspecting chassis and containers, SIP audio comms, RFID readers, all of which needed to be reasoned about in a state machine, none of which were reliable. It required a lot of on-the-ground testing, observation, and tweaking, along with human interventions when things went wrong. I'd guess LLMs would have been good at subsets of that project, but the entire thing would still require a team of humans to build again today.
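Roughly the shape of the core logic, heavily simplified; a sketch with made-up states and thresholds, not the actual system:

    # Hypothetical, stripped-down gate-lane state machine. Every sensor
    # (OCR, RFID, gate arm) is unreliable, so each transition needs a
    # timeout and an escape hatch into manual review.
    from enum import Enum, auto
    import time

    class LaneState(Enum):
        IDLE = auto()
        READING = auto()        # waiting on OCR / RFID results
        VERIFYING = auto()      # cross-checking container against booking data
        GATE_OPEN = auto()
        MANUAL_REVIEW = auto()  # human intervention required

    class GateLane:
        READ_TIMEOUT_S = 20

        def __init__(self):
            self.state = LaneState.IDLE
            self.read_started = None

        def on_vehicle_detected(self):
            if self.state is LaneState.IDLE:
                self.state = LaneState.READING
                self.read_started = time.monotonic()

        def on_ocr_result(self, container_id, confidence):
            if self.state is not LaneState.READING:
                return  # late or duplicate sensor event: ignore it
            if container_id is None or confidence < 0.9:
                self.state = LaneState.MANUAL_REVIEW  # unreliable read: escalate
            else:
                self.state = LaneState.VERIFYING

        def tick(self):
            # Called periodically; covers sensors that never answer at all.
            if (self.state is LaneState.READING
                    and time.monotonic() - self.read_started > self.READ_TIMEOUT_S):
                self.state = LaneState.MANUAL_REVIEW

The hard part was never this skeleton; it was discovering, on the ground, which events actually arrive, in what order, and how often they lie.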
Don’t you understand? That’s why all these AI companies are praying for humanoid robots to /just work/ - so we can replace humans mentally and physically ASAP!
I'm sure those will help. But that doesn't solve the problem the parent stated. Those robots can't solve those real-world problems until they can reason, till they can hypothesize, till they can experiment, till they can abstract all on their own. The problem is you can't replace the humans (unilaterally) until you can create AGI. But that has problems of its own, as you now have to contend with having created a slave class of artificial life forms.
Counterpoint: perhaps it's not about escaping all the details, just the irrelevant ones, and the need to have them figured out up front. Making the process more iterative, an exploration of medium under supervision or assistance of domain expert, turns it more into a journey of creation and discovery, in which you learn what you need (and learn what you need to learn) just-in-time.
I see no reason why this wouldn't be achievable. Having lived most of my life in the land of details, country of software development, I'm acutely aware 90% of effort goes into giving precise answers to irrelevant questions. In almost all problems I've worked on, whether at tactical or strategic scale, there's either a single family of answers, or a broad class of different ones. However, no programming language supports the notion of "just do the usual" or "I don't care, pick whatever, we can revisit the topic once the choice matters". Either way, I'm forced to pick and spell out a concrete answer myself, by hand. Fortunately, LLMs are slowly starting to help with that.
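The closest thing most languages give you is a hand-curated bundle of defaults; a rough sketch of what I mean, with names invented purely for illustration:

    # Sketch: "just do the usual" today means someone already spelled out
    # the usual, by hand, as defaults.
    from dataclasses import dataclass, replace

    @dataclass
    class HttpClientConfig:
        timeout_s: float = 30.0    # "I don't care, pick whatever" -> someone picked 30
        retries: int = 3
        verify_tls: bool = True
        user_agent: str = "my-app/0.1"

    def make_client(**overrides) -> HttpClientConfig:
        # Caller only states the details they actually care about.
        return HttpClientConfig(**overrides)

    usual = make_client()                   # "just do the usual"
    tuned = replace(usual, timeout_s=5.0)   # revisit one choice once it matters

Everything not overridden was still a concrete decision somebody made up front; the language just hides where.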
From my experience the issue really is, unfortunately, that it is impossible to tell if a particular detail is irrelevant until after you have analyzed and answered all of them.
In other words, it all looks easy in hindsight only.
I think the most coveted ability of a skilled senior developer is precisely this "uncanny" ability to predict beforehand whether some particular detail is important or irrelevant. This ability can only be obtained through years of experience and hubris.
> perhaps it's not about escaping all the details, just the irrelevant ones
But that's the hard part. You have to explore the details to determine if they need to be included or not.
You can't just know right off the bat. Doing so contradicts the premise. You cannot determine whether a detail is important unless you get detailed. If you only care about a few grains of sand in a bucket, you still have to search through the whole bucket of sand for those few grains.
> no programming language supports the notion of "just do the usual" or "I don't care, pick whatever, we can revisit the topic once the choice matters"
Programming languages already take lots of decisions implicitly and explicitly on one’s behalf. But there are way more details of course, which are then handled by frameworks, libraries, etc. Surely at some point, one has to take a decision? Your underlying point is about avoiding boilerplate, and LLMs definitely help with that already - to a larger extent than cookie cutter repos, but none of them can solve IRL details that are found through rigorous understanding of the problem and exploration via user interviews, business challenges, etc.
Yes! I love this framing and it's spot on. In the successful projects I've been involved in, someone either cared deeply and resolved the details in real time, or we figured out the details before we started. I've seen it outside software as well: someone says "I want a new kitchen", but unless you know exactly where you want your outlets, counter depths, size of fridge, type of cabinets, location of lighting, etc. ad infinitum, your project is going to balloon in time and cost and likely frustration.
Is your kitchen contractor an unthinking robot with no opinions or thoughts of their own that has never used a kitchen? Obviously if you want a specific cabinet to go in a specific place in the room, you're going to have to give the kitchen contractor specifics. But assuming your kitchen contractor isn't an utter moron, they can come up with something reasonable if they know it's supposed to be the kitchen. A sink, a stove, dishwasher, refrigerator. Plumbing and power for the above. Countertops, drawers, cabinets. If you're a control freak (which is your prerogative, it's your kitchen after all), that's not going to work for you. Same too for generated code. If you absolutely must touch every line of code, code generation isn't going to suit you. If you just want a login screen with parameters you define, there are so many login pages the AI can crib from that nondeterminism isn't even a problem.
At least in case of the kitchen contractor, you can trust all the electrical equipment, plumbing etc. is going to be connected in such a way that disasters won't happen. And if it is not, at least you can sue the contractor.
The problem with LLMs is that it is not only the "irrelevant details" that are hallucinated. It is also "very relevant details" which either make the whole system inconsistent or full of security vulnerabilities.
The login page example was actually perfect for illustrating this. Meshing polygons? Centering a div? Go ahead and turn the LLM loose. If you miss any bugs you can just fix them when they get reported.
But if it's security critical? You'd better be touching every single line of code and you'd better fully understand what each one does, what could go wrong in the wild, how the approach taken compares to best practices, and how an attacker might go about trying to exploit what you've authored. Anything less is negligence on your part.
Your kitchen contractor will never cook in your kitchen. If you leave the decisions to them, you'll get something that's quick and easy to build, but it for sure won't have all the details that make a great kitchen. It will be average.
Which seems like an apt analogy for software. I see people all the time who build systems and they don't care about the details. The results are always mediocre.
I think this is a major point people do not mention enough during these debates on "AI vs Developers": The business/stakeholder side is completely fine with average and mediocre solutions as long as those solutions are delivered quickly and priced competitively. They will gladly use a vibecoded solution if the solution kinda sorta mostly works. They don't care about security, performance or completeness... such things are to be handled when/if they reach the user/customer in significant numbers. So while we (the devs) are thinking back to all the instances we used gpt/grok/claude/... and not seeing how the business could possibly arrive at our solutions just with AI and without us in the loop... the business doesn't know any of the details, nor does it care. When it comes to anything IT related, your typical business doesn't know what it doesn't know, which makes it easy to fire employees/contractors for redundancy first (because we have AI now) and ask questions later (uhh... because we have AI now).
That still requires you to evaluate all the details in order to figure out which ones you care about. And if you haven't built a kitchen before, you won't know what the details even are ahead of time. Which means you need to be involved in the process, constantly evaluating what is currently happening and whether you need to care about it.
Maybe they have a kitchen without a dishwasher. So unless asked, they won't include one. Or even make it possible to include one. Seems like a real possibility. Maybe eventually, after building many kitchens, they learn they should ask about that one.
> the dream underneath this dream is about being able to manifest things into reality without having to get into the details.
Yes, it has nothing to do with dev specifically; dev "just" happens to be a way to do so that is text-based, which is the medium of LLMs. What also "just" happens to be convenient is that dev is expensive, so if a new technology might help make something possible and/or make it inexpensive, it's potentially a market.
Now, pesky details like actual implementation, who's got time for that, it's just a few more trillions away.
To put an economic spin on this (that no one asked for), this is also the capitalist nirvana. I don't have an immediate citation, but in my experience software engineer salaries are usually one of the biggest items on a P&L, which keeps the capitalist from approaching the singularity: limitless profit margin. Obviously this is unachievable, but one of the major obstacles to it is in the process of being destabilised and disrupted.
Well said. This dream probably belongs to someone who has experienced the hardship, felt frustrated, and gave up, then watched others do it effortlessly, even have fun doing it. The manifestation of the dream feels like revenge to them.
The argument is empty because it relies on a trope rather than evidence. “We’ve seen this before and it didn’t happen” is not analysis. It’s selective pattern matching used when the conclusion feels safe. History is full of technologies that tried to replace human labor and failed, and just as full of technologies that failed repeatedly and then abruptly succeeded. The existence of earlier failures proves nothing in either direction.
Speech recognition was a joke for half a century until it wasn’t. Machine translation was mocked for decades until it quietly became infrastructure. Autopilot existed forever before it crossed the threshold where it actually mattered. Voice assistants were novelty toys until they weren’t. At the same time, some technologies still haven’t crossed the line. Full self driving. General robotics. Fusion. History does not point one way. It fans out.
That is why invoking history as a veto is lazy. It is a crutch people reach for when it’s convenient. “This happened before, therefore that’s what’s happening now,” while conveniently ignoring that the opposite also happened many times. Either outcome is possible. History alone does not privilege the comforting one.
If you want to argue seriously, you have to start with ground truth. What is happening now. What the trendlines look like. What follows if those trendlines continue. Output per developer is rising. Time from idea to implementation is collapsing. Junior and mid level work is disappearing first. Teams are shipping with fewer people. These are not hypotheticals. The slope matters more than anecdotes. The relevant question is not whether this resembles CASE tools. It’s what the world looks like if this curve runs for five more years. The conclusion is not subtle.
The reason this argument keeps reappearing has little to do with tools and everything to do with identity. People do not merely program. They are programmers. “Software engineer” is a marker of intelligence, competence, and earned status. It is modern social rank. When that rank is threatened, the debate stops being about productivity and becomes about self preservation.
Once identity is on the line, logic degrades fast. Humans are not wired to update beliefs when status is threatened. They are wired to defend narratives. Evidence is filtered. Uncertainty is inflated selectively. Weak counterexamples are treated as decisive. Strong signals are waved away as hype. Arguments that sound empirical are adopted because they function as armor. “This happened before” is appealing precisely because it avoids engaging with present reality.
This is how self delusion works. People do not say “this scares me.” They say “it’s impossible.” They do not say “this threatens my role.” They say “the hard part is still understanding requirements.” They do not say “I don’t want this to be true.” They say “history proves it won’t happen.” Rationality becomes a costume worn by fear. Evolution optimized us for social survival, not for calmly accepting trendlines that imply loss of status.
That psychology leaks straight into the title. Calling this a “recurring dream” is projection. For developers, this is not a dream. It is a nightmare. And nightmares are easier to cope with if you pretend they belong to someone else. Reframe the threat as another person’s delusion, then congratulate yourself for being clear eyed. But the delusion runs the other way. The people insisting nothing fundamental is changing are the ones trying to sleep through the alarm.
The uncomfortable truth is that many people do not stand to benefit from this transition. Pretending otherwise does not make it false. Dismissing it as a dream does not make it disappear. If you want to engage honestly, you stop citing the past and start following the numbers. You accept where the trendlines lead, even when the destination is not one you want to visit.
> What is happening now. What the trendlines look like. What follows if those trendlines continue. Output per developer is rising. Time from idea to implementation is collapsing. Junior and mid level work is disappearing first. Teams are shipping with fewer people. These are not hypotheticals.
My dude, I just want to point out that there is no evidence of any of this, and a lot of evidence of the opposite.
> If you want to engage honestly, you stop citing the past and start following the numbers. You accept where the trendlines lead, even
“There is no evidence” is not skepticism. It’s abdication. It’s what people say when they want the implications to go away without engaging with anything concrete. If there is “a lot of evidence of the opposite,” the minimum requirement is to name one metric, one study, or one observable trend. You didn’t. You just asserted it and moved on, which is not how serious disagreement works.
“You first, lol” isn’t a rebuttal either. It’s an evasion. The claim was not “the labor market has already flipped.” The claim was that AI-assisted coding has changed individual leverage, and that extrapolating that change leads somewhere uncomfortable. Demanding proof that the future has already happened is a category error, not a clever retort.
And yes, the self-delusion paragraph clearly hit, because instead of addressing it, you waved vaguely and disengaged. That’s a tell. When identity is involved, people stop arguing substance and start contesting whether evidence is allowed to count yet.
Now let’s talk about evidence, using sources who are not selling LLMs, not building them, and not financially dependent on hype.
Martin Fowler has explicitly written about AI-assisted development changing how code is produced, reviewed, and maintained, noting that large portions of what used to be hands-on programmer labor are being absorbed by tools. His framing is cautious, but clear: AI is collapsing layers of work, not merely speeding up typing. That is labor substitution at the task level.
Kent Beck, one of the most conservative voices in software engineering, has publicly stated that AI pair-programming fundamentally changes how much code a single developer can responsibly produce, and that this alters team dynamics and staffing assumptions. Beck is not bullish by temperament. When he says the workflow has changed, he means it.
Bjarne Stroustrup has explicitly acknowledged that AI-assisted code generation changes the economics of programming by automating work that previously required skilled human attention, while also warning about misuse. The warning matters, but the admission matters more: the work is being automated.
Microsoft Research, which is structurally separated from product marketing, has published peer-reviewed studies showing that developers using AI coding assistants complete tasks significantly faster and with lower cognitive load. These papers are not written by executives. They are written by researchers whose credibility depends on methodological restraint, not hype.
GitHub Copilot’s controlled studies, authored with external researchers, show measurable increases in task completion speed, reduced time-to-first-solution, and increased throughput. You can argue about long-term quality. You cannot argue “no evidence” without pretending these studies don’t exist.
Then there is plain, boring observation.
AI-assisted coding is directly eliminating discrete units of programmer labor: boilerplate, CRUD endpoints, test scaffolding, migrations, refactors, first drafts, glue code. These were not side chores. They were how junior and mid-level engineers justified headcount. That work is disappearing as a category, which is why junior hiring is down and why backfills quietly don’t happen.
You don’t need mass layoffs to identify a structural shift. Structural change shows up first in roles that stop being hired, positions that don’t get replaced, and how much one person can ship. Waiting for headline employment numbers before acknowledging the trend is mistaking lagging indicators for evidence.
If you want to argue that AI-assisted coding will not compress labor this time, that’s a valid position. But then you need to explain why higher individual leverage won’t reduce team size. Why faster idea-to-code cycles won’t eliminate roles. Why organizations will keep paying for surplus engineering labor when fewer people can deliver the same output.
But “there is no evidence” isn’t a counterargument. It’s denial wearing the aesthetic of rigor.
> “We’ve seen this before and it didn’t happen” is not analysis. It’s selective pattern matching used when the conclusion feels safe.
> If you want to argue seriously, you have to start with ground truth. What is happening now. What the trendlines look like. What follows if those trendlines continue.
Wait, so we can infer the future from "trendlines", but not from past events? Either past events are part of a macro trend, and are valuable data points, or the micro data points you choose to focus on are unreliable as well. Talk about selection bias...
I would argue that data points that are barely a few years old, and obscured by an unprecedented hype cycle and gold rush, are not reliable predictors of anything. The safe approach would be to wait for the market to settle, before placing any bets on the future.
> Time from idea to implementation is collapsing. Junior and mid level work is disappearing first. Teams are shipping with fewer people. These are not hypotheticals.
What is hypothetical is what will happen to all this software and the companies that produced it a few years down the line. How reliable is it? How maintainable is it? How many security issues does it have? What has the company lost because those issues were exploited? Will the same people who produced it using these new tools be able to troubleshoot and fix it? Will the tools get better to allow them to do that?
> The reason this argument keeps reappearing has little to do with tools and everything to do with identity.
Really? Everything? There is no chance that some people are simply pointing out the flaws of this technology, and that the marketing around it is making it out to be far more valuable than it actually is, so that a bunch of tech grifters can add more zeroes to their net worth?
I don't get how anyone can speak about trends and what's currently happening with any degree of confidence. Let alone dismiss the skeptics by making wild claims about their character. Do better.
>Wait, so we can infer the future from “trendlines”, but not from past events? Either past events are part of a macro trend, and are valuable data points, or the micro data points you choose to focus on are unreliable as well. Talk about selection bias…
If past events can be dismissed as “noise,” then so can selectively chosen counterexamples. Either historical outcomes are legitimate inputs into a broader signal, or no isolated datapoint deserves special treatment. You cannot appeal to trendlines while arbitrarily discarding the very history that defines them without committing selection bias.
When large numbers of analogous past events point in contradictory directions, individual anecdotes lose predictive power. Trendlines are not an oracle, but once the noise overwhelms the signal, they are the best approximation we have.
>What is hypothetical is what will happen to all this software and the companies that produced it a few years down the line. How reliable is it? How maintainable is it? How many security issues does it have? What has the company lost because those issues were exploited? Will the same people who produced it using these new tools be able to troubleshoot and fix it? Will the tools get better to allow them to do that?
These are legitimate questions, and they are all speculative. My expectation is that code quality will decline while simultaneously becoming less relevant. As LLMs ingest and reason over ever larger bodies of software, human oriented notions of cleanliness and maintainability matter less. LLMs are far less constrained by disorder than humans are.
>Really? Everything? There is no chance that some people are simply pointing out the flaws of this technology, and that the marketing around it is making it out to be far more valuable than it actually is, so that a bunch of tech grifters can add more zeroes to their net worth?
The flaws are obvious. So obvious that repeatedly pointing them out is like warning that airplanes can crash while ignoring that aviation safety has improved to the point where you are far more likely to die in a car than in a metal tube moving at 500 mph.
Everyone knows LLMs hallucinate. That is not contested. What matters is the direction of travel. The trendline is clear. Just as early aviation was dangerous but steadily improved, this technology is getting better month by month.
That is the real disagreement. Critics focus on present day limitations. Proponents focus on the trajectory. One side freezes the system in time; the other extrapolates forward.
>I don’t get how anyone can speak about trends and what’s currently happening with any degree of confidence. Let alone dismiss the skeptics by making wild claims about their character. Do better.
Because many skeptics are ignoring what is directly observable. You can watch AI generate ultra complex, domain specific systems that have never existed before, in real time, and still hear someone dismiss it entirely because it failed a prompt last Tuesday.
Repeating the limitations is not analysis. Everyone who is not a skeptic already understands them and has factored them in. What skeptics keep doing is reciting known flaws while refusing to reason about what is no longer a limitation.
At that point, the disagreement stops being about evidence and starts looking like bias.
Respectfully, you seem to love the sound of your writing so much you forget what you are arguing about. The topic (at least for the rest of the people in this thread) seems to be whether AI assistance can truly eliminate programmers.
There is one painfully obvious, undeniable historical trend: making programmer work easier increases the number of programmers. I would argue a modern developer is 1000x more effective than one working in the times of punch cards - yet we have roughly 1000x more software developers than back then.
I'm not an AI skeptic by any means, and use it every day at my job, where I am gainfully employed to develop production software used by paying customers. The overwhelming consensus among those similar to me (I've put down all of these qualifiers very intentionally) is that the currently existing modalities of AI tools are a massive productivity boost mostly for the "typing" part of software (yes, I use the latest SOTA tools, Claude Opus 4.5 thinking, blah, blah, and so do most of my colleagues). But the "typing" part hasn't been the hard part for a while already.
You could argue that there is a "step change" coming in the capabilities of AI models, which will entirely replace developers (so software can be "willed into existence", as elegantly put by OP), but we are no closer to that point now than we were in December 2022. All the success of AI tools in actual, real-world software has been in tools specifically designed to assist existing, working, competent developers (e.g. Cursor, Claude Code), while the tools which have positioned themselves to replace them have failed (Devin).
The pattern that gets missed in these discussions: every "no-code will replace developers" wave actually creates more developer jobs, not fewer.
COBOL was supposed to let managers write programs. VB let business users make apps. Squarespace killed the need for web developers. And now AI.
What actually happens: the tooling lowers the barrier to entry, way more people try to build things, and then those same people need actual developers when they hit the edges of what the tool can do. The total surface area of "stuff that needs building" keeps expanding.
The developers who get displaced are the ones doing purely mechanical work that was already well-specified. But the job of understanding what to build in the first place, or debugging why the automated thing isn't doing what you expected - that's still there. Usually there's more of it.
Classic Jevons Paradox - when something gets cheaper the market for it grows. The unit cost shrinks but the number of units bought grows more than this shrinkage.
Of course that is true. The nuance here is that software isn't just getting cheaper; the activity of building it is changing. Instead of writing lines of code you are writing requirements. That shifts who can do the job. The customer might be able to do it themselves. That removes a market rather than growing one. I am not saying the market will collapse, just be careful applying a blunt theory to such a profound technological shift that isn't just lowering cost but changing the entire process.
You say that like someone who has been coding for so long you have forgotten what it's like to not know how to code. The customer will have little idea what is even possible and will ask for a product that doesn't solve their actual problem. AI is amazing at producing answers you previously would have looked up on Stack Overflow, which is very useful. It often can type faster than I can, which is also useful. However, if we were going to see the exponential improvements towards AGI that AI boosters talk about, we would have already seen the start of it.
When LLMs first showed up publicly it was a huge leap forward, and people assumed it would continue improving at the rate they had seen but it hasn't.
Exactly. The customer doesn't know what's possible, but increasingly neither do we unless we're staying current at frontier speed.
AI can type faster and answer Stack Overflow questions. But understanding what's newly possible, what competitors just shipped, what research just dropped... that requires continuous monitoring across arXiv, HN, Reddit, Discord, Twitter.
The gap isn't coding ability anymore. It's information asymmetry. Teams with better intelligence infrastructure will outpace teams with better coding skills.
That's the shift people are missing.
Hey, welcome to HN. I see that you have a few LLM generated comments going here, please don’t do it as it is mostly a place for humans to interact. Thank you.
>The customer will have little idea what is even possible and will ask for a product that doesn't solve their actual problem.
How do you know that? For tech products most of the users are also technically literate and can easily use Claude Code or whatever tool we are using. They easily tell CC specifically what they need. Unless you create social media apps or bank apps, the customers are pretty tech savvy.
One example is programmers who code physics simulations that run on massive amounts of data. You need a decent amount of software engineering skill to maintain software like that, but the programmer maybe has a BS in Physics and doesn't really know the nuances of the actual algorithm being implemented.
With AI, probably you don’t need 95% of the programmers who do that job anyway. Physicists who know the algorithm much better can use AI to implement a majority of the system and maybe you can have a software engineer orchestrate the program in the cloud or supercomputer or something but probably not even that.
Okay, the idea I was trying to get across before I rambled was that many times the customer knows what they want very well and much better than the software engineer.
Yes, I made the same point. Customers are not as dumb as our PMs and Execs think they are. They know their needs more than us, unless it's about social media and banks.
I agree. People forget that people know how to use computers and have good intuition about what they are capable of. It's the programming task that many people can't do. It's unlocking users to solve their own problems again.
Have you ever paid for software? I have, many times, for things I could build myself
Building it yourself as a business means you need to staff people, taking them away from other work. You need to maintain it.
Run even conservative numbers for it and you'll see it's pretty damn expensive if humans need to be involved. It's not the norm that that's going to be good ROI
No matter how good these tools get, they can't read your mind. It takes real work to get something production ready and polished out of them
There are also technical requirements, which, in practice, you will need to make for applications. Technical requirements can be done by people that can't program, but it is very close to programming. You reach a manner of specification where you're designing schemas, formatting specs, high level algorithms, and APIs. Programmers can be, and are, good at this, and the people doing it who aren't programmers would be good programmers.
At my company, we call them technical business analysts. Their director was a developer for 10 years, and then skyrocketed through the ranks in that department.
I think it's kind of insane that people think anyone can just "code" an app with AI and that it can replace actual paid or established open-source software, especially if they are not a programmer or don't know how to think like one. It might seem super obvious if you work in tech, but most people don't even know what an HTTP server is or what Python is, let alone understand best practices or any kind of high-level thinking about applications and code. And if you're willing to spend the time learning all that, you might as well learn programming as well.
AI usage in coding will not stop ofc but normal people vibe coding production-ready apps is a pipedream that has many issues independent of how good the AI/tools are.
The way I would approach writing specs and requirements as code would be to write a set of unit tests against a set of abstract classes used as arguments of such unit tests. Then let someone else, maybe an AI, write the implementation as a set of concrete classes, and then verify that those unit tests pass.
I'm not sure how well that would work in practice, nor why such an approach is not used more often than it is. But yes, the point is that some humans would still have to write such tests as code to pass to the AI to implement. So we would still need human coders to write those unit tests/specs. Only humans can tell the AI what humans want it to do.
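A toy sketch of the shape I have in mind, with a made-up "cart" spec (nothing domain-specific intended):

    # Human writes the spec as tests against an abstract base class;
    # the AI (or anyone else) supplies a concrete subclass that must pass them.
    from abc import ABC, abstractmethod
    import unittest

    class Cart(ABC):
        @abstractmethod
        def add(self, item: str, price: float) -> None: ...

        @abstractmethod
        def total(self) -> float: ...

    class CartSpec(unittest.TestCase):
        impl = None  # concrete implementation under test, wired in below

        def test_empty_cart_totals_zero(self):
            self.assertEqual(self.impl().total(), 0.0)

        def test_total_sums_prices(self):
            cart = self.impl()
            cart.add("apple", 1.5)
            cart.add("bread", 2.0)
            self.assertAlmostEqual(cart.total(), 3.5)

    # --- everything below is what the AI would be asked to generate ---
    class SimpleCart(Cart):
        def __init__(self):
            self._items = []

        def add(self, item, price):
            self._items.append((item, price))

        def total(self):
            return sum(price for _, price in self._items)

    CartSpec.impl = SimpleCart

    if __name__ == "__main__":
        unittest.main()

The spec stays human-owned and stable; the implementation is disposable and regenerable as long as the tests keep passing.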
Anecdote: I have decades of software experience, and am comfortable both writing code myself and using AI tools.
Just today, I needed a basic web application, the sort of which I can easily get off the shelf from several existing vendors.
I started down the path of building my own, because, well, that's just what I do, then after about 30 minutes decided to use an existing product.
I have a hunch that, even with AI making programming so much easier, there is still a market for buying pre-written solutions.
Further, I would speculate that this remains true of other areas of AI content generation. For example, even if it's trivially easy to have AI generate music per your specifications, it's even easier to just play something that someone else already made (be it human-generated or AI).
I've heard that SaaS never really took off in China because the oversupply of STEM people has caused developer salaries to be suppressed so low that companies just hire a team of devs to build out all their needs in house. Why pay for SaaS when devs are so cheap? These are just anecdotes. It's hard for me to figure out what's really going on in China.
What if AI brings the China situation to the entire world? Would the mentality shift? You seem to be basing it on the cost-benefit calculations of companies today. Yes, SaaS makes sense when you have developers (many of whom could be mediocre) who are so expensive that it makes more sense to pay a company that has already gone through the work of finding good developers and spent the capital to build a decent version of what you are looking for. Compare that with a scenario where the cost of a good developer has fallen dramatically, so you can now produce the same results with far less money (a cheap developer, good or mediocre, guiding an AI). That cheap developer does not even have to be in the US.
> I've heard that SaaS never really took off in China because the oversupply of STEM people has caused developer salaries to be suppressed so low that companies just hire a team of devs to build out all their needs in house. Why pay for SaaS when devs are so cheap? These are just anecdotes. It's hard for me to figure out what's really going on in China.
At the high end, China pays SWEs better than South Korea, Japan, Taiwan, India, and much of Europe, so they attract developers from those locations. At the low end, they have a ton of low to mid-tier developers from 3rd tier+ institutions that can hack well enough. It is sort of like India: skilled people with credentials to back it up can do well, but there are tons of lower skilled people with some ability that are relatively cheap and useful.
China is going big into local LLMs, not sure what that means long term, but Alibaba's Qwen is definitely competitive, and it's the main story these days if you want to run a coding model locally.
Thank you for the insight. Those countries you listed are nowhere near US salaries. I wonder what the SaaS market is like in Europe? I hear it's utilized, but the problem is that there is too much reliance on American companies.
I hear those other Asian countries are just like China in terms of adoption.
>China is going big into local LLMs, not sure what that means long term, but Alibaba's Qwen is definitely competitive, and its the main story these days if you want to run a coding model locally.
It seems like China's strategy of applying low-cost LLMs pragmatically to all layers of the country's "stack" is the better approach, at least right now. Here in the US they are spending every last penny to try to build some sort of Skynet god. If it fails, well, I guess the Chinese were right after all. If it succeeds, well, I don't know what will happen then.
When I worked in China for Microsoft China, I was making 60-70% what I would have made back in the US working the same job, but my living expenses actually kind of made up for that. I learned that most of my non-Chinese asian colleagues were in it for the money instead of just the experience (this was basically my dream job, now I have to settle for working in the states for Google).
> It seems like China's strategy of applying low-cost LLMs pragmatically to all layers of the stack is the better approach, at least right now. Here in the US they are spending every last penny to try to build some sort of Skynet god. If it fails, well, I guess the Chinese were right after all. If it succeeds, well, I don't know what will happen then.
China lacks the big NVIDIA GPUs that were sanctioned and are now export-tariffed, so going with smaller models that could run on hardware they could access was the best move for them. This could either work out (local LLM computing is the future, and China is ahead of the game by circumstance) or not (big server-based LLMs are the future and China is behind the curve). I think the Chinese government would actually have preferred centralization, control, and censorship, but the current situation is that the Chinese models are the most uncensored you can get these days (with some fine tuning, they are heavily used in the adult entertainment industry... haha, socialist values).
I wouldn't trust the Chinese government to not do Skynet if they get the chance, but Chinese entrepreneurs are good at getting things done and avoiding government interference. Basically, the world is just getting lucky by a bunch of circumstances ATM.
Fair point! And I wasn't clear: my anecdote was me, personally, needing an instance of some software. Rather than me personally either write it by hand, or even write it using AI, and then host it, I just found an off-the-shelf solution that worked well enough for me. One less thing I have to think about.
I would agree that if the scenario is a business, to either buy an off-the-shelf software solution or pay a small team to develop it, and if the off-the-shelf solution was priced high enough, then having it custom built with AI (maybe still with a tiny number of developers involved) could end up being the better choice. Really all depends on the details.
Does that automatically translate into more openings for the people whose full time job is providing that thing? I’m not sure that it does.
Historically, it would seem that often lowering the amount of people needed to produce a good is precisely what makes it cheaper.
So it’s not hard to imagine a world where AI tools make expert software developers significantly more productive while enabling other workers to use their own little programs and automations on their own jobs.
In such a world, the number of "lines of code" being used would be much greater than today.
But it is not clear to me that the amount of people working full time as “software developers“ would be larger as well.
> Does that automatically translate into more openings for the people whose full time job is providing that thing?
Not automatically, no.
How it affects employment depends on the shapes of the relevant supply/demand curves, and I don't think those are possible to know well for things like this.
For the world as a whole, it should be a very positive thing if creating usable software becomes an order of magnitude cheaper, and millions of smart people become available for other work.
I debate this in my head way too much & from each & every perspective.
Counter argument - if what you say is true, we will have a lot more custom & personalized software and the tech stacks behind those may be even more complicated than they currently are because we're now wanting to add LLMs that can talk to our APIs. We might also be adding multiple LLMs to our back ends to do things as well. Maybe we're replacing 10 but now someone has to manage that LLM infrastructure as well.
My opinion will change by tomorrow, but I could see more people building software who are currently experts in other domains. I can also see software engineers focusing more on keeping the new, more complicated architecture from falling apart & trying to enforce tech standards. Our roles may become more infra & security. Fewer features, more stability & security.
Jevons Paradox does not last forever in a single sector, right? Take the manufacturing business, for example. We can make more and more stuff at increasingly lower prices, yet we ended up outsourcing our manufacturing and the entire sector withered. Manufacturing has also gotten less lucrative over the years, which means there has been less and less demand for labor.
I'm quite convinced that software (and, more broadly, implementing the systems and abstractions) seems to have virtually unlimited demand. AI raises the ceiling and broadens software's reach even further as problems that previously required some level of ingenuity or intelligence can be automated now.
You're right. I updated it to "in a single sector". The context is the future demand for software engineers, hence I was wondering whether it's possible that we won't have enough demand for the profession, even though society as a whole will benefit from the dropping unit cost and will probably invent a lot of new demand in other fields.
Jevons paradox is just stupid here. What happened in the past is not a guarantee for the future. If you look at the economy, you would struggle to find buyers for any slop AI can generate, but execs keep pushing it. Case in point: the whole Microslop saga, where execs start treating paying customers as test subjects to please the shareholders.
I felt like the article had a good argument for why the AI hype will similarly be unsuccessful at erasing developers.
> AI changes how developers work rather than eliminating the need for their judgment. The complexity remains. Someone must understand the business problem, evaluate whether the generated code solves it correctly, consider security implications, ensure it integrates properly with existing systems, and maintain it as requirements evolve.
What is your rebuttal to this argument leading to the idea that developers do need to fear for their job security?
LLM's don't learn on their own mistakes in the same way that real developers and businesses do, at least not in a way that lends itself to RLVR.
Meaningful consequences of mistakes in software don't manifest themselves through compilation errors, but through business impacts which so far are very far outside of the scope of what an AI-assisted coding tool can comprehend.
My argument would be that while some complexity remains, it might not require a large team of developers.
What previously needed five devs might be doable by just two or three.
In the article, he says there are no shortcuts to this part of the job. That does not seem likely to be true. The research and thinking through the solution goes much faster using AI, compared to before where I had to look up everything.
In some cases, agentic AI tools are already able to ask the questions about architecture and edge cases, and you only need to select which option you want the agent to implement.
There are shortcuts.
Then the question becomes how large the productivity boost will be and whether the idea that demand will just scale with productivity is realistic.
> evaluate whether the generated code solves it correctly, consider security implications, ensure it integrates properly with existing systems, and maintain it as requirements evolve
I think you are basing your reasoning on the current generation of models. But if future generations are able to do everything you've listed above, what work will be left for developers? I'm not saying that we will ever get such models, just that when they appear, they will actually displace developers rather than create more jobs for them.
The business problem will be specified by business people, and even if they get it wrong it won't matter because iteration will be quick and cheap.
> What is your rebuttal to this argument leading to the idea that developers do need to fear for their job security?
The entire argument is based on the assumption that models won't get better and will never be able to do the things you've listed! But once they become capable of those things, what work will be left for developers?
It's not obvious at all. Some people believe that once AI can do the things I've listed, the role of developers will change instead of getting replaced (because advances always led to more jobs, not less).
We are actually already at the level of a magic genie or some sci-fi-level device. It can't do everything, obviously, but what it can do is mind-blowing. And the basis of the argument is obviously right: potential possibility is a really low bar to pass, and AGI is clearly possible.
A $3 calculator today is capable of doing arithmetic that would require superhuman intelligence to do 100 years ago.
It's extremely hard to define "human-level intelligence" but I think we can all agree that the definition of it changes with the tools available to humans. Humans seem remarkably suited to adapt to operate at the edges of what the technology of time can do.
Of course in that case it will not happen this time. However, in that case software dev getting automated would concern me less than the risk of getting turned into some manner of office supply.
Imo as long as we do NOT have AGI, being a software-focused professional will stay a viable career path. Someone will have to design software systems at some level of abstraction.
Pre-industrial revolution, something like 80+ percent of the population was involved in agriculture. I question the assertion of more farmers now, especially since an ever-growing percentage of farms are not even owned by corporeal entities, never mind actual farmers.
ooohhh I think I missed the intent of the statement... well done!
80% of the world population back then is less than 50% of the current number of people working in farming, so the assertion isn’t wrong, even if fewer people are working on farming proportionally (as it should be, as more complex, desirable and higher paid options exist)
I don't think you missed it. Perhaps sarcasm, but the main comment is specifically about programming, and it seems so many sub-comments want to say "what about X" where X has nothing to do with programming.
The machinery replaced a lot of low skill labor. But in its wake modern agriculture is now dependent on high skill labor. There are probably more engineers, geologists, climatologists, biologists, chemists, veterinarians, lawyers, and statisticians working in the agriculture sector today than there ever were previously.
Is that farm hands, or farm operators? What about corps, and how do you calibrate that? Is a corp a "person", or does it count for more? My point is that maybe the definition of "farmer" is being pushed too far, as is the notion of "developer". "Prompt engineer"? Are you kidding me about that? Prompts are about as usefully copyrighted / patentable as a white paper. Do you count them as "engineers" because they say so?
I get your point, hope you get mine: we have fewer legal entities operating as "farms". If vibe coding makes you a "developer", working on a farm in an operating capacity makes you a "farmer". You might profess to be a biologist / agronomist, I'm sure some owners are, but it doesn't matter to me whether you're the owner or not.
The numbers of nonsupervisory operators in farming activities have decreased using the traditional definitions.
Key difference being that there is only a certain amount of food that a person can physically eat before they get sick.
I think it’s a reasonable hypothesis that the amount of software written if it was, say, 20% of its present cost to write it, would be at least 5x what we currently produce.
If AI tools make expert developers a lot more productive on large software projects, while empowering non-developers to create their own little programs and automations, I am not sure how that would increase the number of people with “software developer” as their full-time job.
It happened with tools like Excel, for example, which matches your description of empowering non-developers. It happens with non-developers setting up a CMS and then, when hitting the limits of what works out of the box, hiring or commissioning developers to add more complex functions and integrations. Barring AGI, there will always be limitations, and hitting them induces the desire to go beyond.
There’s only so much land and only so much food we need to eat. The bounds on what software we need are much wider. But certainly there is a limit there as well.
Wait, what? There are way fewer farmers than we had in the past. In many parts of the world, every member of the family was working on the farm, and now 1 person can do the work of 5-10 people.
I think the better example is the mechanization of the loom, which created a huge number of factory jobs relative to the hand loom because the demand for clothing could not be met by the hand loom.
The craftsmen who were forced to go to the factory were not paid more or better off.
There is not going to be more software engineers in the future than there is now, at least not in what would be recognizable as software engineering today. I could see there being vastly more startups with founders as agent orchestrators and many more CTO jobs. There is no way there is many more 2026 version of software engineering jobs at S&P 500 companies in the future. That seems borderline delusional to me.
>COBOL was supposed to let managers write programs. VB let business users make apps. Squarespace killed the need for web developers. And now AI.
The first line made me laugh out loud because it made me think of an old boss who I enjoyed working with but could never really do coding. This boss was a rockstar at the business side of things and having worked with ABAP in my career, I couldn't ever imagine said person writing code in COBOL.
However the second line got me thinking. Yes VB let business users make apps(I made so many forms for fun). But it reminded me about how much stuff my boss got done in Excel. Was a total wizard.
You have a good point in that the stuff keeps expanding, because while not all bosses will pick up the new stack, many ambitious ones will. I'm sure it was the case during COBOL, during VB, and was certainly the case when Excel hit the scene, and I suspect that a lot of people will get stuff done with AI that devs used to do.
>But the job of understanding what to build in the first place, or debugging why the automated thing isn't doing what you expected - that's still there. Usually there's more of it.
Honestly this is the million dollar question that is actually being argued back and forth in all these threads. Given a set of requirements, can AI plus a somewhat technically competent business person solve all the things a dev used to take care of? It's possible. I'm wondering whether my boss, who couldn't even tell the difference between React and Flask, could in theory... possibly an AI with a large enough context overcomes these mental-model limitations. It would be an interesting experiment for companies to try out.
Many business people I've worked with are handy with SQL, but couldn't write e.g. go or python, which always surprised me. IMO SQL is way more inconsistent and has a mental model far more distant from real life than common imperative programming (which simply parallels e.g. a cookbook recipe).
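For a trivial, made-up illustration of the mental-model gap, here is "total spent per customer" in both styles (toy table and data, just for comparison):

    # Declarative SQL describes the result set; the imperative version
    # reads like a recipe, one step at a time.
    import sqlite3

    orders = [("alice", 30.0), ("bob", 12.5), ("alice", 7.5)]

    # SQL version
    conn = sqlite3.connect(":memory:")
    conn.execute("CREATE TABLE orders (customer TEXT, amount REAL)")
    conn.executemany("INSERT INTO orders VALUES (?, ?)", orders)
    sql_totals = dict(conn.execute(
        "SELECT customer, SUM(amount) FROM orders GROUP BY customer"))

    # Imperative "cookbook" version
    totals = {}
    for customer, amount in orders:
        totals[customer] = totals.get(customer, 0.0) + amount

    assert totals == sql_totals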
I find SQL becomes a "stepping stone" to level up for people who live and breathe Excel (for obvious reasons).
Now was SQL considered some sort of tool to help business people do more of what coders could do? Not too sure about that. Maybe Access was that tool and it just didn't stick for various reasons.
I knew a guy like that, except his tool of choice was Access. He could code, but it wasn't his strong suit, and when he was out of his element he typically delegated those responsibilities to more technical programmers, including sometimes myself. But with Access he could model a business with tables, and wire it together with VBA business logic, as easily as you and I breathe.
> The developers who get displaced are the ones doing purely mechanical work that was already well-specified.
And that hits the offshoring companies in India and similar countries probably the most, because those can generally only do their jobs well if everything has been specified to the detail.
>every "no-code will replace developers" wave actually creates more developer jobs, not fewer
you mean "created", past tense. You're basically arguing it's impossible for technical improvements to reduce the number of programmers in the world, ever. The idea that only humans will ever be able to debug code or interpret non-technical user needs seems questionable to me.
Also, the percentage of adults working has been dropping for a while. Retirees used to be a tiny fraction of the population; that's no longer the case. People spend more time being educated, or in prison, etc.
Overall people are seeing a higher standard of living while doing less work.
Efficiency is why things continue to work as fewer people work. Social programs, bank accounts, etc. are just an abstraction; you need a surplus, or the only thing that changes is who starves.
Social programs often compensate for massive distortion in the economy. For example, SNAP benefits both the poor and the businesses where SNAP funds are spent, but that's because a lot of unearned income goes to landowners, preventing people from employing laborers and starting businesses. SNAP merely ameliorates a situation that shouldn't have arisen in the first place.
So, yes, reasons other than efficiency explain why people aren't working, as well why there are still poor people.
> The total surface area of "stuff that needs building" keeps expanding.
I certainly hope so, but it depends on whether we will have more demand for such problems. AI can code out a complex project by itself because we humans do not care about many of the details. When we marvel that AI generates a working dashboard for us, we are really accepting that someone else has created a dashboard that meets our expectations. The layout, the colors, the aesthetics, the interactions, the time-series algorithms: we don't care, as it does better than we imagined. This, of course, is inevitable, as many of us spend enormous time implementing what other people have already done. Fortunately or unfortunately, it is very hard for a human to repeat other people's work correctly, but it's a breeze for AI. The corollary is that AI will replace a lot of the demand for software developers if we don't have big enough problems to solve. In the past 20 years we had the internet, cloud, mobile, and machine learning, all big trends that required millions and millions of brilliant minds. Will we have the same luck in the coming years? I'm not so sure.
In the face of productivity increases and lower barriers to entry, other professions move to capture the productivity gains for their own members and erect barriers to prevent others from taking their tasks. In IT, we celebrate how our productivity gains benefited the broader economy and how more people in other roles can now build stuff, with the strong belief that employment of developers and adjacent roles will continue to increase and that we will get those new roles.
I think there's a parallel universe with things like system administration. I remember people not valuing windows sysadmins (as opposed to unix), because all the stuff was gui-based. lol.
This suggests that the latent demand was large, but it still doesn't prove it is unbounded.
At some point the low-hanging automation fruit gets tapped out. What can be put online that isn't there already? Which business processes are obviously going to be made an order of magnitude more efficient?
Moreover, we've never had more developers and we've exited an anomalous period of extraordinarily low interest rates.
Yep, the current crunch experienced by developers falls massively (but not exclusively) on younger, less experienced developers.
I was working in developer training for a while, some 5-10 years back, and even then I was starting to see signs of incoming over-saturation; the low interest rates probably masked much of it, with happy-go-lucky investments sucking up developers.
Low-hanging and cheap automation work is quickly dwindling now, especially as development firms search out new niches while the big "in-IT" customers aren't buying services inside the industry.
Luckily people will retire, and young people probably aren't as bullish about the industry anymore, so we'll probably land in an equilibrium. The question is how long it'll take, because the long tail of things enabled by the mobile/tablet revolution is starting to be claimed.
Look at traditional manufacturing. Automation has made massive inroads. Not as much of the economy is directly supporting (eg, auto) manufacturers as it used to be (stats check needed). Nevertheless, there are plenty of mechanical engineering jobs. Not so many lower skill line worker jobs in the US any more, though. You have to ask yourself which category you are in (by analogy). Don’t be the SWE working on the assembly line.
But sign painting isn't programming? The comment is insightful and talks specifically of low-code and no-code options creating more need for developers. Great point. It has nothing to do with non-programming jobs.
Well, if we’re comparing all jobs to all other jobs - then you may have a valid point. Otherwise, we should probably focus on comparing complexity and supply/demand for the skills and output being spoken about.
This works for small increments in skill or small shifts to adjacent skills.
Imagine an engineer educated in multiple instruction sets: when compilers arrive on the scene it sure makes their job easier, but it does not retroactively change their education to suddenly include all the requisite mathematics and domain knowledge of, say, algorithms and data structures.
What is euphemistically described as a "remaining need for people to design, debug and resolve unexpected behaviors" is basically a lie by omission: the advent of AI does not automatically mean previously representative human workers will suddenly have the higher-level knowledge needed to do that. It takes education to achieve that; no trivial amount of chatbotting will enable displaced human workers to attain that higher level of consciousness. Perhaps it can be attained by designing software that uploads AI skills to humans...
> lowers the barrier to entry, way more people try to build things, and then those same people need actual developers when they hit the edges of what the tool can do
I was imagining companies expanding the features they wanted and was skeptical that would be close to enough, but this makes way more sense
I've watched this pattern play out in systems administration over two decades. The pitch is always the same: higher abstractions will democratise specialist work. SREs are "fundamentally different" from sysadmins, Kubernetes "abstracts away complexity."
In practice, I see expensive reinvention. Developers debug database corruption after pod restarts without understanding filesystem semantics. They recreate monitoring strategies and networking patterns on top of CNI because they never learned the fundamentals these abstractions are built on. They're not learning faster: they're relearning the same operational lessons at orders of magnitude higher cost, now mediated through layers of YAML.
Each wave of "democratisation" doesn't eliminate specialists. It creates new specialists who must learn both the abstraction and what it's abstracting. We've made expertise more expensive to acquire, not unnecessary.
Excel proves the rule. It's objectively terrible: 30% of genomics papers contain gene name errors from autocorrect, JP Morgan lost $6bn from formula errors, Public Health England lost 16,000 COVID cases hitting row limits. Yet it succeeded at democratisation by accepting catastrophic failures no proper system would tolerate.
The pattern repeats because we want Excel's accessibility with engineering reliability. You can't have both. Either accept disasters for democratisation, or accept that expertise remains required.
Where have you worked? I have seen this mentality among the smartest, most accomplished people I've come across, who do things like debug kernel issues at Google Cloud. Yes, those people need to really know the fundamentals.
90% of people building whatever junk their company needs do not. I learned this lesson the hard way after working at both large and tiny companies. It's the people who remain in the bubble of places like AWS and GCP, or people doing hard-core research or engineering, who have this mentality. Everyone else eventually learns.
>Excel proves the rule. It's objectively terrible: 30% of genomics papers contain gene name errors from autocorrect, JP Morgan lost $6bn from formula errors, Public Health England lost 16,000 COVID cases hitting row limits. Yet it succeeded at democratisation by accepting catastrophic failures no proper system would tolerate.
Excel is the largest development language in the world. Nothing (not Python, VB, Java etc.) can even come close. Why? Because it literally glues the world together. Everything from the Mega Company, to every government agency to even mom & pop Bed & Breakfast operations run on Excel. The least technically competent people can fiddle around with Excel and get real stuff done that end up being critical pathways that a business relies on.
It's hard to quantify, but I am putting my stake in the ground: Excel + AI will probably help fix many (but not all) of the issues you talk about.
The issues I’m talking about are: “we can’t debug kernel issues, so we run 40 pods and tune complicated load-balancer health-check procedures in order for the service to work well”.
There is no understanding that anything is actually wrong, for they think that it is just the state of the universe, a physical law that prevents whatever issue it is from being resolved. They aren’t even aware that the kernel is the problem, sometimes they’re not even aware that there is a problem, they just run at linear scale because they think they must.
If Kubernetes didn't in any way reduce labor, then the 95% of large corporations that adopted it must all be idiots? I find that kinda hard to believe. It seems more likely that Kubernetes has been adopted alongside increased scale, such that sysadmin jobs have just moved up to new levels of complexity.
It seems like in the early 2000s every tiny company needed a sysadmin, to manage the physical hardware, manage the DB, custom deployment scripts. That particular job is just gone now.
Kubernetes enabled qualities small companies couldn't dream of before.
I can implement zero-downtime upgrades easily with Kubernetes. No more late-day upgrades and late-night debug sessions because something went wrong; I can commit at any time of the day and be sure the upgrade will work.
My infrastructure is self-healing. No more crashed app servers.
Some engineering tasks are standardized and outsourced to the hosting provider via managed services. I don't need to manage operating system updates and some component updates (including Kubernetes itself).
My infrastructure can be easily scaled horizontally. Both up and down.
I can commit changes to git to apply them or I can easily revert them. I know the whole history perfectly well.
Before, I would have needed to reinvent half of Kubernetes to enable all of that. I guess big companies just did that; I never had the resources for it. So my deployments were not good. They didn't scale, they crashed, they required frequent manual interventions, and downtimes were frequent. Kubernetes and other modern approaches let small companies enjoy things they couldn't do before, at the expense of a slightly higher devops learning curve.
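To make the "scaling is an API call" point concrete, here is a minimal sketch using the official Kubernetes Python client (the Deployment name "web" and the namespace are made-up placeholders, not anything from the comment above):

    # Scale a hypothetical "web" Deployment; the control loop handles the rollout.
    from kubernetes import client, config

    def scale_deployment(name: str, namespace: str, replicas: int) -> None:
        config.load_kube_config()  # reads credentials from ~/.kube/config
        apps = client.AppsV1Api()
        apps.patch_namespaced_deployment_scale(
            name=name,
            namespace=namespace,
            body={"spec": {"replicas": replicas}},
        )

    if __name__ == "__main__":
        scale_deployment("web", "default", replicas=5)

The equivalent with hand-rolled deployment scripts usually means touching init scripts or config management on every host.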
You’re absolutely right that sysadmin jobs moved up to new levels of complexity rather than disappeared. That’s exactly my point.
Kubernetes didn’t democratise operations, it created a new tier of specialists. But what I find interesting is that a lot of that adoption wasn’t driven by necessity. Studies show 60% of hiring managers admit technology trends influence their job postings, whilst 82% of developers believe using trending tech makes them more attractive to employers. This creates a vicious cycle: companies adopt Kubernetes partly because they’re afraid they won’t be able to hire without it, developers learn Kubernetes to stay employable, which reinforces the hiring pressure.
I’ve watched small companies with a few hundred users spin up full K8s clusters when they could run on a handful of VMs. Not because they needed the scale, but because “serious startups use Kubernetes.” Then they spend six months debugging networking instead of shipping features. The abstraction didn’t eliminate expertise, it forced them to learn both Kubernetes and the underlying systems when things inevitably break.
The early 2000s sysadmin managing physical hardware is gone. They’ve been replaced by SREs who need to understand networking, storage, scheduling, plus the Kubernetes control plane, YAML semantics, and operator patterns. We didn’t reduce the expertise required, we added layers on top of it. Which is fine for companies operating at genuine scale, but most of that 95% aren’t Netflix.
All this is driven by numbers. The bigger you are, the more money they give you to burn. No one is really working on solving problems; it's 99% managing complexity driven by shifting goalposts. No one really wants to build to solve a problem. It's a giant financial circle jerk: everybody wants to sell, rinse, and repeat, because the line must go up. No one says stop, because at 400 mph hitting the brakes will get you killed.
People really look through rose-colored glasses when they talk about the late 90s, the early 2000s, or whenever their "back then" happens to be, and how everything was simpler.
Everything was for sure simpler, but also the requirements and expectations were much, much lower. Tech and complexity moved forward with goal posts also moving forward.
Just one example on reliability: I remember popular websites with many thousands if not millions of users putting up an "under maintenance" page whenever a major upgrade came through, sometimes closing shop for hours. If said maintenance went bad, come back tomorrow, because they weren't coming back up.
Proper HA, backups, monitoring were luxuries for many, and the kind of self-healing, dynamically autoscaled, "cattle not pet" infrastructure that is now trivialized by Kubernetes were sci-fi for most. Today people consider all of this and a lot more as table stakes.
It's easy to shit on cloud and kubernetes and yearn for the simpler Linux-on-a-box days, yet unless expectations somehow revert back 20-30 years, that isn't coming back.
> Everything was for sure simpler, but also the requirements and expectations were much, much lower.
This. In the early 2000s, almost every day after school (3PM ET) Facebook.com was basically unusable. The request would either hang for minutes before responding at 1/10th of the broadband speed at that time, or it would just timeout. And that was completely normal. Also...
- MySpace literally let you inject HTML, CSS, and (unofficially) JavaScript into your profile's freeform text fields
- Between 8-11 PM ("prime time" TV) you could pretty much expect to get randomly disconnected when using dial up Internet. And then you'd need to repeat the arduous sign in dance, waiting for that signature screech that tells you you're connected.
- Every day after school the Internet was basically unusable from any school computer. I remember just trying to hit Google using a computer in the library turning into a 2-5 minute ordeal.
But also and perhaps most importantly, let's not forget: MySpace had personality. Was it tacky? Yes. Was it safe? Well, I don't think a modern web browser would even attempt to render it. But you can't replace the anticipation of clicking on someone's profile and not knowing whether you'll be immediately deafened with loud (blaring) background music and no visible way to stop it.
All abstractions are leaky abstractions. E.g. C is a leaky abstraction because what you type isn't actually what gets emitted (try the same code in two different compilers and one might vectorize your loop while the other doesn't).
This all reminds me of one of the most foundational and profound papers ever written about software development: Peter Naur's "Programming as Theory Building". I have seen colleagues get excited about using Claude to write their software for them, and then end up spending at least as much time as if they had written it themselves trying to develop a theory of the code that was produced, and an understanding sufficient to correct the problems and bugs in it.
Every professional software engineer confronts the situation of digging into and dealing with a big wad of legacy code. However, most of us prefer those occasions when we can write some code fresh, and develop a theory and deep understanding from the get-go. Reverse-engineering out a sufficient theory of legacy code to be able to responsibly modify it is hard and at times unsatisfying. I don't relish the prospect of having that be the sum total of all my effort as a software engineer, when the "legacy code" I need to struggle to understand is code generated by an AI tool.
Every abstraction simplifies a bunch of real-world phenomena. The real world is messy, our understanding keeps shifting, and we’re unreliable narrators in the sense that we’re often not even aware of the gaps in our own understanding, let alone good at expressing it.
No matter how much progress we make, as long as reasoning about complex systems is unavoidable, this doesn’t change. We don’t always know what we want, and we can’t always articulate it clearly.
So people building software end up dealing with two problems at once. One is grappling with the intrinsic, irreducible complexity of the system. The other is trying to read the minds of unreliable narrators, including leadership and themselves.
Tools help with the mechanical parts of the job, but they don’t remove the thinking and understanding bottleneck. And since the incentives of leadership, investors, and the people doing the actual work don’t line up, a tug-of-war is the most predictable outcome.
> Which brings us to the question: why does this pattern repeat?
The pattern repeats because the market incentivizes it. AI has been pushed as an omnipotent, all-powerful job-killer by these companies because shareholder value depends on enough people believing in it, not whether the tooling is actually capable. It's telling that folks like Jensen Huang talk about people's negativity towards AI being one of the biggest barriers to advancement, as if they should be immune from scrutiny.
They'd rather try to discredit the naysayers than actually work towards making these products function the way they're being marketed, and once the market wakes up to this reality, it's gonna get really ugly.
"We are allowed to", as in no formal rule forbids it, is one thing. But if all the rules favor oligarchic accumulation with reinforcing loops, it's unlikely that the resulting dynamics will fall into a loophole of equal redistribution of wealth where social harmony thrives.
Yes, very much so. If they could make their product do the things they claim, they would be focused on doing that, not telling people to stop being naysayers.
As I have heard from mid-level managers and C-suite types across a few dev jobs: staff are the largest expense, and the technology department is the largest cost center. I disagree, because Sales couldn't exist without a product, but that's a lost point.
This is why those same mid level managers and C suite people are salivating over AI and mentioning it in every press release.
The reality is that costs are being reduced by replacing US teams with offshore teams. And the layoffs are being spun as a result of AI adoption.
AI tools for software development are here to stay and accelerate in the coming months and years and there will be advances. But cost reductions are largely realized via onshore/offshore replacement.
The remaining onshore teams must absorb much more slack and fixes and in a way end up being more productive.
> The reality is that costs are being reduced by replacing US teams with offshore teams.
Hailing from an outsourcing destination I need to ask: to where specifically? We've been laid off all the same. Me and my team spent the second half of 2025 working half time because that's the proposition we were given.
What is this fabled place with an apparent abundance of highly skilled developers? India? They don't make on average much less than we do here - the good ones make more.
My belief is that spending on staff just went down across the board because every company noticed that all the others were doing layoffs, so pressure to compete in the software space is lower. Also all the investor money was spent on datacentres so in a way AI is taking jobs.
At a very large company at the moment: one of the things I've noticed is that as translation has improved, C-level preferences and political considerations have made a much bigger impact.
So we will reduce headcount in some countries because of things like (perceived) working culture, and increase based on the need to gain goodwill or fulfil contracts from customers.
This can also mean that the type of work outsourced can change pretty quickly. We are getting rid of most of the "developers" in India, because places like Vietnam and eastern Europe are now less limited by language and are much better to work with. At the same time we are inventing and outsourcing other activities to India because of a desire to sell in their market.
It works. But for most it is not sustainable; in most cases it collapses eventually. Still, ideas and words, and now pictures and videos, do sell, as in they get pre-orders or pre-payments.
Many companies aren't selling anything special or are just selling an "idea".
Like Liquid Death selling water for a strangely high amount of money: entirely sales and marketing.
International Star Registry gives you a piece of paper and a row in a database that says you own a star.
Many luxury things are just because it's sold by that luxury brand. They are "worth" that amount of money for the status of other people knowing you paid that much for it.
Science is hated because its mastery requires too much hard work, and, by the same token, its practitioners, the scientists, are hated because of their power they derive from it. - Dijkstra '1989
It's not so much about replacing developers, but rather increasing the level of abstraction developers can work at, to allow them to work on more complex problems.
The first electronic computers were programmed by manually re-wiring their circuits. Going from that to being able to encode machine instructions on punchcards did not replace developers. Nor did going from raw machine instructions to assembly code. Nor did going from hand-written assembly to compiled low-level languages like C/FORTRAN. Nor did going from low-level languages to higher-level languages like Java, C++, or Python. Nor did relying on libraries/frameworks for implementing functionality that previously had to be written from scratch each time. Each of these steps freed developers from having to worry about lower-level problems and instead focus on higher-level problems. Mel's intellect is freed from having to optimize the position of the memory drum [0] to allow him to focus on optimizing the higher-level logic/algorithms of the problem he's solving. As a result, software has become both more complex but also much more capable, and thus much more common.
(The thing that distinguishes gen-AI from all the previous examples of increasing abstraction is that those examples are deterministic and often formally verifiable mappings from higher abstraction -> lower abstraction. Gen-AI is neither.)
> It's not so much about replacing developers, but rather increasing the level of abstraction developers can work at, to allow them to work on more complex problems.
That's not the goal Anthropic's CEO has. Nor any other CEO, for that matter.
> It's not so much about replacing developers, but rather increasing the level of abstraction developers can work at, to allow them to work on more complex problems.
People do and will talk about replacing developers though.
Were many of the aforementioned advancements marketed as "replacing developers"? Absolutely. Did that end up happening? Quite the opposite; each higher-level abstraction only caused the market for software and demand for developers to grow.
That's not to say developers haven't been displaced by abstraction; I suspect many of the people responsible for re-wiring the ENIAC were completely out of a job when punchcards hit the scene. But their absence was filled by a greater number of higher-level punchcard-wielding developers.
The infinite-fountain-of-software machine seems more likely to replace developers than previous innovations did, and the people pushing the button will not be, in any current sense of the word, programming.
You absolutely need to be trying to accomplish these things personally to understand what is, and will be, easy and where the barriers are.
Recognizing the barriers & modes of failure (which will be a moving target) lets you respond competently when you are called. Raise your hourly rate as needed.
One of my clients is an AI startup in the security industry. Their business model is to use AI agents to perform the initial assessment and then cut the security contractors' hours by 50% to complete the job.
I don't think AI will completely replace these jobs, but it could reduce job numbers by a very large amount.
I think one thing I've heard missing from discussions though is that each level of abstraction needs to be introspectable. LLMs get compared to compilers a lot, so I'd like to ask: what is the equivalent of dumping the tokens, AST, SSA, IR, optimization passes, and assembly?
That's where I find the analogy on thin ice, because somebody has to understand the layers and their transformations.
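As a point of comparison, here is a minimal sketch of what that introspection looks like for Python's own pipeline (source to tokens to AST to bytecode). It illustrates the layers the question is about, without claiming LLMs offer an equivalent:

    # Dump each layer of Python's compilation pipeline for one line of source.
    import ast
    import dis
    import io
    import tokenize

    source = "total = sum(x * x for x in range(10))"

    # Layer 1: the token stream.
    for tok in tokenize.generate_tokens(io.StringIO(source).readline):
        print(tok.type, tok.string)

    # Layer 2: the abstract syntax tree.
    print(ast.dump(ast.parse(source), indent=2))

    # Layer 3: the compiled bytecode, Python's rough analogue of assembly.
    dis.dis(compile(source, "<example>", "exec"))

Every one of those dumps was designed for a human to read, which is exactly the property in question.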
“Needs to be” is a strong claim. The skill of debugging complex problems by stepping through disassembly to find a compiler error is very specialized. Few can do it. Most applications don’t need that “introspection”. They need the “encapsulation” and faith that the lower layers work well 99.9+% of the time, and they need to know who to call when it fails.
I’m not saying generative AI meets this standard, but it’s different from what you’re saying.
Sorry, I should clarify: it needs to be introspectable by somebody. Not every programmer needs to be able to introspect the lower layers, but that capability needs to exist.
Now I guess you can read the code an LLM generates, so maybe that layer does exist. But, that's why I don't like the idea of making a programming language for LLMs, by LLMs, that's inscrutable by humans. A lot of those intermediate layers in compilers are designed for humans, with only assembly generation being made for the CPU.
> increasing the level of abstraction developers can work at
Something is lost each step of the abstraction ladder we climb. And the latest rung uses natural language which introduces a lot of imprecision/slop, in a way that prior abstractions did not. And, this new technology providing the new abstraction is non-deterministic on top of that.
There's also the quality issue of the output you do get.
I don't think the analogy of the assembly -> C transition people like to use holds water – there are some similarities but LLMs have a lot of downsides.
I think the thing that's so weird to me is this idea that we all have to somehow internalize the concept of transistor switching as the foundational, unchangeable root of computing, and that therefore anything too far abstracted from it is somehow not real computing, or some mess like that.
That ignores completely that programming vacuum-tube computers involved an entirely different type of abstraction than you use with MOSFETs, for example.
I’m finding myself in the position where I can safely ignore any conversation about engineering with anybody who thinks that there is a “right” way to do it or that there’s any kind of ceremony or thinking pattern that needs to stay stable
Those are all artifacts of humans desiring very little variance and things that they’ve even encoded because it takes real energy to have to reconfigure your own internal state model to a new paradigm
AI won't replace developers. It will replace the bootcamp devs of the last decade. The average expectation is now much higher. AI tools will only elevate the expectations of what a human dev is capable of and how fast it can get done.
The reverse is developers' recurring dream of replacing non-IT people, usually with a 100% online, automated, self-promoting SaaS. AI is also the latest incarnation of that.
The way I learned to write software was years of cutting my teeth on hard problems. I have to wonder what happens when the new developers coming up don’t have that teeth cutting experience because they use language models to assist with every algorithm, etc?
I was skeptical until 3-4 months ago, but my recent experience has been entirely different.
For context: we're the creators of ChatBotKit and have been deploying AI agents since the early days (about 2 years ago). These days, there's no doubt our systems are self-improving. I don't mean to hype this (judge for yourself from my skepticism on Reddit) but we're certainly at a stage where the code is writing the code, and the quality has increased dramatically. It didn't collapse as I was expecting.
What I don't know is why this is happening. Is it our experience, the architecture of our codebase, or just better models? The last one certainly plays a huge role, but there are also layers of foundation that now make everything easier. It's a framework, so adding new plugins is much easier than writing the whole framework from scratch.
What does this mean for hiring? It's painfully obvious to me that we can do more with less, and that's not what I was hoping for just a year ago. As someone who's been tinkering with technology and programming since age 12, I thought developers would morph into something else. But right now, I'm thinking that as systems advance, programming will become less of an issue—unless you want to rebuild things from scratch, but AI models can do that too, arguably faster and better.
I'm seeing it too, but there's a distinction I think matters: AI isn't replacing the thinking, it's shifting where the bottleneck is.
You mention systems are self-improving and code quality has increased dramatically. But the constraint isn't execution anymore. It's judgment at scale.
When AI collapses build time from weeks to hours, the new bottleneck becomes staying current with what's actually changing. You need to know what competitors shipped, what research dropped, what patterns are emerging across 50+ sources continuously.
Generic ChatGPT can't do that. It doesn't know what YOU care about. It starts from scratch every time.
The real question is how do you build personal AI that learns YOUR priorities and filters the noise? That's where the leverage is now.
> You need to know what competitors shipped, what research dropped, what patterns are emerging across 50+ sources continuously. Generic ChatGPT can't do that.
You're saying that a pattern recognition tool that can access the web can't do all of this better than a human? This is quintessentially what they're good at.
> The real question is how do you build personal AI that learns YOUR priorities and filters the noise? That's where the leverage is now.
Sounds like another Markdown document—sorry, "skill"—to me.
It's interesting to see people praising this technology and enjoying this new "high-level" labor, without realizing that the goal of these companies is to replace all cognitive labor. I strongly doubt that they will actually succeed at that, and I don't even think they've managed to replace "low-level" labor, but pretending that some cognitive labor is safe in a world where they do succeed is wishful thinking.
If these agents are so great, why isn't ChatBotKit a highly successful public company worth hundreds of billions rather than a glorified ChatGPT wrapper? If you're able to do so much with so little, why isn't that actually bearing out in a profitable company? What's the excuse?
Do people really need to know that a bunch of code at a company that won't exist in 10 years is something worth caring about?
Because we are not hyping to lure investors into giving us hundreds of millions of dollars. We took the more honest route and work with actual customers. If we do accept hundreds of millions at some point, perhaps we will reach hundreds of billions in valuation ... on paper.
As for the ChatGPT-wrapper comment: honestly, this take is getting old. So what? Are you going to train your own LLM and run it at a huge loss for a while?
And yes, perhaps all of this effort is for nothing, as it may even be possible to recreate everything we have done from scratch in a week, assuming we stay static and do nothing about it. In 10 years the solution will have billions of lines of code. Not that lines of code is any kind of metric for success, but you won't be able to recreate it without significant cost and upfront effort ... even with LLMs.
But still, the hypothesis of the article holds. If you want a new feature, you still have to think it through, explain to the AI how to implement it, and validate the result.
You might be able to do more with less, but that is true of every technological advancement.
Regarding your experience, it sounds like your codebase is such good quality that it acts as a very clear prompt to the AI for it to understand the system and improve it.
But I imagine your codebase didn't get into this state all by itself.
Yeah, the latest wave of Opus 4.5, Codex 5.2, and Gemini Pro 3 rendered a lot of my skepticism redundant as well. While I generally agree with the Jevons paradox line of reasoning, I have to acknowledge it's difficult to make any reasonable prediction about technology that's moving at such immense speed.
I expected the LLM's would have hit a scaling wall by now, and I was wrong. Perhaps that'll still happen. If not, regardless of whether it'll ultimately create or eliminate more jobs, it'll destabilize the job market.
My guess: projects "learn" every time we improve documentation, add static analysis, write tests, make the APIs clearer, and so on. Once newly started agents onboard by reading AGENTS.md, they're a bit "smarter" than before.
Maybe there's a threshold where improvements become easy, depending on the LLM and the project?
As a hobbyist programmer, I feel like I've been promoted to pointy-haired boss.
The pattern I've noticed building tooling for accountants: automation rarely removes jobs, it changes what the job looks like.
The bookkeepers I work with used to spend hours on manual data entry. Now they spend that time on client advisory work. The total workload stayed the same - the composition shifted toward higher-value tasks.
Same dynamic played out with spreadsheets in the 80s. Didn't eliminate accountants - it created new categories of work and raised expectations for what one person could handle.
The interesting question isn't whether developers will be replaced but whether the new tool-augmented developer role will pay less. Early signs suggest it might - if LLMs commoditise the coding part, the premium shifts to understanding problems and systems thinking.
I would add that most of the premium of a modern SWE has always been on understanding problems and systems thinking. LLMs raise the floor and the ceiling, to where the vast majority of the premium will now be on systems and relationships.
Machine learning is nothing like integer programming. It is an emulation of biological learning, it is designed explicitly to tackle the same problems human minds excel at. It is an organism in direct competition with human beings. Nothing can be more dangerous than downplaying this.
This is because the demand for most of what accountants do is driven by government regulations and compliance. Something that always expands to fill the available budget.
We could have replaced tons of developers if only employers were selective in their hiring and invested in training. Instead there are a ton of hardly marginal developers in employment.
Case in point: web frameworks as mentioned in the article. These frameworks do not exist to increase productivity for either the developer or the employer. They exist to mitigate training and lower the bar so the employer has a wider pool of candidates to select from.
I disagree. A good framework makes code more maintainable, and makes it so you can focus on what’s important or unique to your product. It certainly makes you faster.
That depends on what you are comparing against. If a given developer is incapable of writing an application without a framework then they will certainly be more productive with a framework.
It’s like a bulldozer is certainly faster than a wheelchair, but somebody else might find them both slow.
Eh. I’ve written plenty of applications by hand before there were good frameworks— win32 apps, old school web applications, “modern” SPA-like apps before there was a React. I’m more productive with React + Tailwind than I was with anything (other than maybe VB6). Being able to reason about your UI as a (mostly) pure function of state is powerful. It reminds me of the simplicity of game development— with a proper rendering layer, your developers can focus mostly on modeling their problem rather than UI complexities.
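To illustrate the "UI as a (mostly) pure function of state" idea without tying it to React specifically, here is a minimal, framework-free sketch (the TodoState shape is invented for the example):

    # Render markup purely from state: same state in, same markup out, no hidden mutation.
    from dataclasses import dataclass

    @dataclass(frozen=True)
    class TodoState:
        items: tuple[str, ...]
        show_footer: bool

    def render(state: TodoState) -> str:
        rows = "".join(f"<li>{item}</li>" for item in state.items)
        footer = "<p>All caught up.</p>" if state.show_footer else ""
        return f"<ul>{rows}</ul>{footer}"

    # Re-rendering is just calling the function again with new state.
    print(render(TodoState(items=("write docs", "ship"), show_footer=True)))

React adds diffing and event wiring on top, but the mental model is this simple, and that simplicity is what lets developers focus on modeling the problem.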
Sometimes while on an ai thread like this I see posts with obvious and many grammatical mistakes. Many will be "typos" (although some seem conceptual). Maybe some are dictated/transcribed by busy people. Some might be incorrect on purpose, for engament. These are posted by pretty accomplished people sometimes.
And I always think: any of these users could have ran a basic grammar check with an llm or even a spellchecker, but didnt. Maybe software will be the same after all.
P.S. prob I jinxed my own post and did a mistake somewhere
Indeed. To be fair things like didnt and prob are on purpose, and "sometimes" and "obvious and many" are more styling than a mistake. And LLM should be uppercase. We can go on. In any case, that is exactly my point. I could have run it through an LLM, but didnt.
Here what deepseek suggests as fixed:
Sometimes, while on an AI thread like this, I see posts with many obvious grammatical mistakes. Many will be "typos" (although some seem conceptual). Maybe some are dictated or transcribed by busy people. Some might be incorrect on purpose, for engagement. These are sometimes posted by pretty accomplished people.
And I always think: any of these users could have run a basic grammar check with an LLM or even a spellchecker, but didn’t. Maybe software will be the same after all.
P.S. Probably I jinxed my own post and made a mistake somewhere.
Can semi-technical people replace developers if those semi-technical people accept that the price of avoiding developers is a commitment to minimizing total system complexity?
Of course semi-technical people can troubleshoot, it's part of nearly every job. (Some are better at it than others.)
But how many semi-technical people can design a system that facilitates troubleshooting? Even among my engineering acquaintances, there are plenty who cannot.
My guess is no. I’ve seen people talk about understanding the output of their vibe-coding sessions as “nerdy,” implying they’re above that. Refusing to vet AI output is the kiss of death to velocity.
> Refusing to vet AI output is the kiss of death to velocity.
The usual rejoinder I've seen is that AI can just rewrite your whole system when complexity explodes. But I see at least two problems with that.
The first is that while AI is impressively good at extracting intent from a ball of mud full of accidental complexity (and I think we can expect it to keep improving), when a system has a lot of inherent complexity and is poorly specified, the task is much harder.
The second is that small, incremental, reversible changes are the most reliable way to evolve a system, and AI doesn't repeal that principle. The more churn, the more bugs — minor and major.
> The usual rejoinder I've seen is that AI can just rewrite your whole system when complexity explodes.
Live and even offline data transformation and data migration without issues are still difficult problems to solve even for humans. It requires meticulous planning and execution.
A rewrite has to either discard the previous data or transform or keep the data layer intact across versions which means more and more tangled spaghetti accumulated over rewrites.
Managers and business owners shouldn't take it personally that I do as little as possible and minimize the amount of labor I provide for the money I receive.
> Don't take it personal. All business want to reduce costs. As long as people cost money, they'll want to reduce people.
"Don't take it personal" does not feed the starving and does not house the unhoused. An economic system that over-indexes on profit at the expense of the vast majority of its people will eventually fail. If capitalism can't evolve to better provide opportunities for people to live while the capital-owning class continues to capture a disproportionate share of created economic value, the system will eventually break.
You're absolutely correct on that. The technology industry, at least the segment driven by VC (which is a huge portion of it), is funded based on ideas that the capital-owning class thinks is a good idea. Reducing labor costs is always an easy sell when you're trying to raise a round.
Even in boring development jobs. For example, one of my first development jobs was for a large hospital, building an intranet app to make nurse rounds more efficient so they didn't have to hire as many nurses.
Some businesses want to reduce costs. Some want to tackle the challenge of using resources available in the most profitable manner, including making their employees grow to better contribute in tackling tomorrow's challenges.
A business leadership board that only considers people as costs is looking at the world through sociopathic lenses.
I don't think the dream of replacing developers, in particular, exists. Specialization of labour leads to increased costs, due to the value placed on specialized labour. Software development is one form of specialized manufacturing and hence is more costly. Within software development, similar strata exist, placing increased value on increased specialization, hence the pyramid effect. The same is true within any field.
Similarly, one might argue that as increased capital finds its way to a given field, due to increased outcomes, labour in turn faces pricing pressure. Increased "sales" opportunity within said field (i.e. people being skilled enough to be employed, or specialized therein) will similarly lead to pricing pressure, on both ends.
This wave of AI innovation reveals that a lot of activity in coding turns out to be accidental complexity rather than essential. Put another way, a lot of coding tasks are conceptual to humans but procedural to AI. Conceptual tasks require intuitive understanding, rigorous reasoning, and long-term planning; AI is not there yet. Procedural tasks, on the other hand, are low-entropy with high priors: once a prompt is given, what follows is almost certain. For instance, one had to learn many concepts to write "public static void main(String[] args)" when writing Java code in the old days. But for AI, the conditional probability Pr(write "public static void main(String[] args)" | prompt = "write the entry method for a given class") is practically 1. Or if I'd like to use Python to implement linear regression, there is pretty much one way to implement it right, and AI knows it. Nothing magical; it's only because we humans have been doing this for years and the optimal solutions for most cases have converged, so it becomes procedural to AI.
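The linear-regression example is a good illustration: the "converged" solution is a few lines of ordinary least squares, which is exactly the kind of low-entropy completion being described. A minimal sketch (the data here is synthetic, just to show usage):

    # Ordinary least squares, the way it has been written countless times.
    import numpy as np

    def fit_linear_regression(X: np.ndarray, y: np.ndarray) -> np.ndarray:
        """Return coefficients (intercept first) for y ~ X via least squares."""
        X_design = np.column_stack([np.ones(len(X)), X])  # add intercept column
        coeffs, *_ = np.linalg.lstsq(X_design, y, rcond=None)
        return coeffs

    # Recover a known line from noisy synthetic data.
    rng = np.random.default_rng(0)
    X = rng.uniform(0, 10, size=(100, 1))
    y = 2.0 + 3.0 * X[:, 0] + rng.normal(0, 0.1, size=100)
    print(fit_linear_regression(X, y))  # approximately [2.0, 3.0]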
Fortunately or unfortunately, many procedural tasks are extremely hard for humans to master but easy for AI to generate. Meanwhile, we structured our society to support such procedural work. As the wave of innovation spreads, many people will rise, but many will also suffer.
You understate the capabilities of the latest-gen LLMs. I can typically describe a user's bug in a few sentences, or tell Claude to fetch the 500 error from the Cloud Run logs, and it will explain the root cause, propose a fix, and throw in a new unit test in two minutes.
>The tools expanded who could write software, but they didn’t eliminate the expertise required for substantial systems.
The hardest thing about software construction is specification. There's always going to be domain specific knowledge associated with requirements. If you make it possible, as Delphi and Visual Basic 6 did, for a domain expert to hack together something that works, that crude but effective prototype functions as a concrete specification that a professional programmer can use to craft a much better version useful to more people than just the original author.
The expansion of the pool of programmers was the goal. It's possible that AI could eventually make programming (or at least specification) a universal skill, but I doubt it. The complexity embedded in all but the most trivial of programs will keep the software development profession in demand for the foreseeable future.
In the 00s, Rational Rose UML was a mandatory course in my university undergrad program.
At that time I had a chat with the CEO of a small startup who was sure he'd fire all those pesky programmers who think they're "smart" because they can code. He pointed me to the code Rational Rose had generated from his diagram and said that only the method bodies remained to be implemented, which would also be possible soon; the hardest part was to model the system.
This is looking at the wrong end of the telescope. The arc has been to move computing closer to more and more end users. In the 1960's, FORTRAN enabled scientists and engineers to implement solutions without knowing much about the underlying computer. Thompson and Ritchie got a PDP11 by promising to make a text processing system for patent applications. Many years later desktop PC's and programs like VisiCalc and PageMaker opened up computing to many more users. The list goes on and on. With this movement, developer jobs disappeared or changed.
I keep saying the real advancement from LLMs isn't for professional programmers, but for every job that is programming-adjacent. Every biologist writing code to do analysis. Every test engineer interfacing with test results and graphing them (e.g. all the instruments from cold-weather testing). Anyone who's figured out you can glue Jira to a local LLM and then have voice-command Jira. Etc.
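A hypothetical glue script of the kind described, piping a raw results file through an LLM API for a summary (the model name, file name, and prompt are assumptions for illustration, not anything from the comment):

    # Summarize a raw test-results dump with an LLM; crude but typical glue code.
    from pathlib import Path
    from openai import OpenAI

    client = OpenAI()  # reads OPENAI_API_KEY from the environment

    def summarize_test_run(results_path: str) -> str:
        raw = Path(results_path).read_text()
        response = client.chat.completions.create(
            model="gpt-4o-mini",  # placeholder model name
            messages=[
                {"role": "system",
                 "content": "Summarize failures and flag anomalies in these test results."},
                {"role": "user", "content": raw[:20000]},  # crude truncation
            ],
        )
        return response.choices[0].message.content

    if __name__ == "__main__":
        print(summarize_test_run("cold_weather_run_01.txt"))

Nothing here requires a professional programmer, which is the point: the leverage goes to the programming-adjacent jobs.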
> Yet demand for software far exceeds our ability to create it.
In particular the demand for software tools grows faster than our ability to satisfy it. More demand exists than the people who would do the demanding can imagine. Many people who are not software engineers can now write themselves micro software tools using LLMs -- this ranges from home makers to professionals of every kind. But the larger systems that require architecting, designing, building, and maintaining will continue to require some developers -- fewer, perhaps, but perhaps also such systems will proliferate.
The link redirects back to the blog index if your browser is configured in Spanish, because it forces the language to Spanish and the article is not available in Spanish.
I recently did a higher education contract for one semester in a highly coding focused course. I have a few years of teaching experience pre-LLMs so I could evaluate the impact internally, my conclusion is that academic education as we know it is basically broken forever.
If educators use AI to write/update the lectures and the assignments, students use AI to do the assignments, then AI evaluates the student's submissions, what is the point?
I'm worried about some major software engineering fields experiencing the same problem. If design and requirements are written by AI, code is mostly written by AI, and users are mostly AI agents, what is the point?
I agree in higher education you need to be willing to learn and it's easy to weasel through it without actually building any skills. On an individual level that's a tragedy of wasted time and potential. On the teaching side it's just fraud if you let AI correct the work of your students or if you don't penalize people handing in AI-written assignments.
In the US there was a case of a student using religious arguments, with hand-waving references to the will of God, in her coursework. Her work was rejected by the tutor and she raised a big fuss on TV. In the end the university fired the tutor and gave her a passing grade.
These kinds of stories are not an AI issue but a general problem of the USA as a country shifting away from education towards religious fanaticism. If someone can reference their interpretation of God's words without even citing the Bible and still receive a passing grade, the whole institution loses its credibility.
Today, the United States is a post-factual society with a ruling class of Christian fanatics. They have been vulnerable to vaporware for years. LLMs being heralded as artificial intelligence only works on people who have never experienced real intelligence.
Luckily, every year only a handful of people who have motivation, skills and luck are needed to move the needle in science and technology. These people can come from many countries who have better education systems and no religious fanaticism.
It's simple: the more high-minded and snobbish the developer class is (thus extracting the highest salaries in the world), and the longer it keeps up this unreal amount of gatekeeping, the more the non-developer community (especially at the leadership level) will revel at the prospect of eliminating developers from the value chain.
I think you're onto something. Replace "developers" with "doctors" in that statement and you've described healthcare in the mid-1900s. Replace it with "masons" and you've described medieval times. There is always a specialized class.
> Understanding this doesn’t mean rejecting new tools. It means using them with clear expectations about what they can provide and what will always require human judgment.
Speaking of tools, that style of writing rings a bell... Ben Affleck made a similar point about the evolving use of computers and AI in filmmaking, wielded with creativity by humans with lived experiences: https://www.youtube.com/watch?v=O-2OsvVJC0s. Faster visual-effects production enables more creative options.
Consider what the rise of things like shopify, squarespace, etc. did for developers.
In 2001, you needed an entire development team if you wanted to have an online business. Having an online business was a complicated, niche thing.
Now, because it has gotten substantially easier, there are thousands of times as many (probably millions of times) online stores, and many of them employ some sort of developer (usually on a retainer) to do work for them. Those consultants probably make more than the devs of 2001 did, too.
The real reason is that expectations and requirements increased whenever tools boosted productivity or solved problems. This kept complexity growing and the work flowing. Just because you use cars instead of horses doesn't mean you get more free time.
They've already convinced their customers what the value of the product is! Cutting labor costs is profit! Never mind the cost to society! Socialize those costs and privatize those profits!
Then they keep the money for themselves, because capitalism lets a few people own the means of production.
So everything that looks cheaper than paying someone educated and skilled to do a thing is extremely attractive. All labor-saving devices ultimately do that.
Consider what happened to painters after the invention of photography (~1830s). At first the technology was very limited and no threat at all to portrait and landscape painters.
By the 1860s artists were feeling the heat and responded by inventing all the "isms" - starting with impressionism. That's kept them employed so far, but who knows whether they'll be able to co-exist with whatever diffusion models become in 30 years.
But the 18th-century artist who did portraits and wedding paintings is today's (wedding) photographer.
Does it take less money to commission a single wedding photo than a wedding painting? Yes. But many more people commission them, usually in the tens to hundreds, together with videos, etc.
An 18th-century wedding painter wasn't in the business of paintings but in the business of capturing memories, and we do that today on a much larger scale, more often, and in a lot of different ways.
I’d also argue more landscape painters exist today than ever.
Is this a real article or just AI-generated text? This whole text has a lot of very weird phrasing in it, also it's so strange how it just seems to keep trudging on and on without ever getting to the point. Actual human-written articles are not like this.
It might just be companies I have worked for in past 25 years, but engineers were virtually always the ones to make sense of whatever vague idea product and UX were trying to make. It's not just code monkey follow the mockup stuff. AI code tools don't really solve that.
This is very accurate to my experience. Product and management don't understand the basics, and anyone who has ever had a manager/PM knows you have to explain the same thing to them multiple times. Product managers also struggle to align among themselves, and they don't care about future velocity, just current velocity. Then you have programmers who have to connect all the things and make sure nothing breaks too much.
> We’re still in that same fundamental situation. We have better tools—vastly better tools—but the thinking remains essential.
But less thinking is essential, or at least that’s what it’s like using the tools.
I’ve been vibing code almost 100% of the time since Claude 4.5 Opus came out. I use it to review itself multiple times, and my team does the same, then we use AI to review each others’ code.
Previously, we whiteboarded and had discussions more than we do now. We definitely coded and reviewed more ourselves than we do now.
I don’t believe that AI is incapable of making mistakes, nor do I think that multiple AI reviews are yet enough to understand and solve problems. Some incredibly huge problems are probably on the horizon. But for now, the general claim that “AI will not replace developers” is false; our roles have changed. We are managers now, and for how long?
Those whiteboarding sessions and discussions used to serve as useful opportunities for context building. Where will that context be built within the cycle now? During a production incident?
This resonates with what I'm experiencing, but I think the article misses the real shift happening now.
The conversation shouldn't be "will AI replace developers". It should be "how do humans stay competitive as AI gets 10x better every 18 months?"
I watched Claude Code build a feature in 30 minutes that used to take weeks. That moment crystallised something: you don't compete WITH AI. You need YOUR personal AI.
Here's what I mean: Frontier teams at Anthropic/OpenAI have 20-person research teams monitoring everything 24/7. They're 2-4 weeks ahead today. By 2027? 16+ weeks ahead. This "frontier gap" is exponential.
The real problem isn't tools or abstraction. It's information overload at scale. When AI collapses execution time, the bottleneck shifts to judgment. And good judgment requires staying current across 50+ sources (Twitter, Reddit, arXiv, Discord, HN).
Generic ChatGPT is commodity. What matters is: does your AI know YOUR priorities? Does it learn YOUR judgment patterns? Does it filter information through YOUR lens?
The article is right that tools don't eliminate complexity. But personal AI doesn't eliminate complexity. It amplifies YOUR ability to handle complexity at frontier speed.
The question isn't about replacement. It's about levelling the playing field.
And frankly, we are all still figuring out how this will shake out in the future. If you have any solution that can help me level up, please hit me up.
> And good judgment requires staying current across 50+ sources (Twitter, Reddit, arXiv, Discord, HN).
Your mention of the hellhole that is today's twitter as the first item in your list of sources to follow for achieving "good judgement" made it easy for me to recognize that in fact you have very bad judgement.
This isn't that impressive when there are mountains of training data dealing with exactly this... how about something truly unique and not something already available to the masses in hundreds of different forms?
Like, cool, you boiled a few gallons of the ocean, but are you really impressed that you made a basic, extremely limited music app?
So we’re now in a world where this isn’t impressive anymore? How quickly expectations change. Having started with basic and then 6502 assembly over 40 years ago, this still feels like science fiction to me.
But most enterprise software does not need to be innovative; it needs to be customizable enough that enterprises can differentiate their business. This makes existing software ideas so much more configurable: no more need for software to provide everything and the kitchen sink, just exactly what you as a customer want.
Like in my example, I don’t know of any software that has exactly this feature set. Do you?
I’ve seen first hand people talk big about how they used LLMs on a project and it’s clear they’ve only done the first 80%. Yeah they’re good tools. But they also enable laziness.
>>>> Developers feel misunderstood and undervalued.
Really?
Is this reflected in wages and hiring? I work for a company that makes a hardware product with mission-critical support software. The software team dwarfs the hardware team, and is paid quite well. Now they're exempt from "return to office."
I attended a meeting to move a project into development phase, and at one point the leader got up and said: "Now we've been talking about the hardware, but of course we all know that what's most important is the software."
We succeeded each time. We replaced the 60s dev with a 70s dev with an 80s dev... Same title different job description.
I can see the 2030s dev doing more original research with mundane tasks put to LLM. Courses will cover manual coding, assembler etc. for a good foundation. But that'll be like an uber driver putting on a spare tire.
Who remembers Model-Driven Architecture and code generation from UML?
Nothing can replace code, because code is design[1]. Low-code came about as a solution to the insane clickfest of no-code. And what is low-code? It’s code over a boilerplate-free appropriately-high level of abstraction.
This reminds me of the 1st chapter of the Clean Architecture book[2], pages 5 and 6, which shows a chart of engineering staff growing from tens to 1200 and yet the product line count (as a simple estimate of features) asymptotically stops growing, barely growing in lines of code from 300 staff to 1200 staff.
As companies grow and throw more staff at the problem, software architecture is often neglected, dramatically slowing development (due to massive overhead required to implement features).
Some companies decided that the answer is to optimize for hiring lots of junior engineers to write dumbed down code full of boilerplate (e.g. Go).
The hard part is staying on top of the technical (architectural and design) debt to make sure that feature development is efficient. That is the hard job and the true value of a software architect, not writing design documents.
Citizen developers were already there doing Excel. I have seen basically full fledged applications in Excel since I was in high school which was 25 years ago already.
If anything, there were a bunch of low barrier to entry software development options like HyperCard, MS Access, Visual Basic, Delphi, 4GLs etc. around in the 90s, that went away.
It feels like programming then got a lot harder with internet stuff that brought client-server challenges, web frontends, cross platform UI and build challenges, mobile apps, tablets, etc... all bringing in elaborate frameworks and build systems and dependency hell to manage and move complexity around.
With that context, it seems like the AI experience / productivity boost people are having is almost like a regression back to the mean and just cutting through some of the layers of complexity that had built up over the years.
And I would argue spreadsheets still created more developers. Analytics teams need developers to put that data somewhere, to transform it for certain formats, to load that data from a source so they can create spreadsheets from it.
So now instead of one developer lost and one analyst created, you've actually just created an analyst and kept a developer.
Tim Bryce was kind of the anti-Scott Adams: he felt that programmers were people of mediocre intelligence at best who thought they were so damn smart, when really, if they were so smart, they'd move into management or business analysis where they could have a real impact, and not be content with the scutwork of translating business requirements into machine-executable code. As it is, they don't have the people skills or big-picture systems thinking to really pull it off, and that combined with their snobbery made them a burden to an organization unless they were effectively managed—such as with his methodology PRIDE, which you could buy direct from his web site.
Oddly enough, in a weird horseshoe-theory instance of convergent psychological evolution, Adams and Bryce both ended up Trump supporters.
Ultimately, however, "the Bryce was right": the true value in software development lies not in the lines of code but in articulating what needs to be automated and how it can benefit the business. The more precisely you nail this down, the more programming becomes a mechanical task. Your job as a developer is to deliver the most value to the customer with the least possible cost. (Even John Carmack agrees with this.) This requires thinking like a business, in terms of dollars and cents (and people), not bits and bytes.

And as AI becomes a critical component of software development, business thinking will become more necessary and technical thinking much less so. Programmers as a professional class will be drastically reduced or eliminated, and replaced with business analysts with some technical understanding but real strength on the business/people side, where the real value gets added. LLMs meaningfully allow people to issue commands to computers in people language, for the very first time. As they evolve they will be more capable of implementing business requirements expressed directly in business language, without an intermediary to translate those requirements into code (i.e., the programmer). This was always the goal, and it's within reach.
In my experience translating requirements into a formal language (programming language) is where a lot of the important details are actually worked out. The process of taking the "squishy" thoughts/ideas and translating them into code is a forcing function for actually clarifying and correcting those ideas.
What I’m seeing is that seniors need fewer juniors, not because seniors are being replaced, but because managers believe they can get the same output with fewer people. Agentic coding tools reinforce that belief by offloading the most time-consuming but low-complexity work. Tests, boilerplate, CRUD, glue code, migrations, and similar tasks. Work that isn’t conceptually hard, just expensive in hours.
So yes, the market shifts, but mostly at the junior end. Fewer entry-level hires, higher expectations for those who are hired, and more leverage given to experienced developers who can supervise, correct, and integrate what these tools produce.
What these systems cannot replace is senior judgment. You still need humans to make strategic decisions about architecture, business alignment, go or no-go calls, long-term maintenance costs, risk assessment, and deciding what not to build. That is not a coding problem. It is a systems, organizational, and economic problem.
Agentic coding is good at execution within a frame. Seniors are valuable because they define the frame, understand the implications, and are accountable for the outcome. Until these systems can reason about incentives, constraints, and second-order effects across technical and business domains, they are not replacing seniors. They are amplifying them.
The real change is not “AI replaces developers.” It is that the bar for being useful as a developer keeps moving up.
Business quacks keep being bamboozled because, it turns out, implementation is the only thing that matters, and hacker culture has outlived every single promise to eradicate hacker culture.
This is the best explanation of (my take on) this I've seen so far.
On top of the article's excellent breakdown of what is happening, I think it's important to note a couple of driving factors about why (I posit) it is happening:
First, and this is touched upon in the OP but I think could be made more explicit, a lot of people who bemoan the existence of software development as a discipline see it as a morass of incidental complexity. This is significantly an instance of Chesterton's Fence. Yes, there certainly is incidental complexity in software development, or at least complexity that is incidental at the level of abstraction that most corporate software lives at. But as a discipline, we're pretty good at eliminating it when we find it, though it sometimes takes a while — but the speed with which we iterate means we eliminate it a lot faster than most other disciplines. A lot of the complexity that remains is actually irreducible, or at least we don't yet know how to reduce it.

A case in point: programming language syntax. To the uninitiated, the syntax of modern programming languages (where the commas go, whether whitespace means anything, how angle brackets are parsed) looks like a jumble of arcane nonsense that must be memorized in order to start really solving problems, and indeed it's a real barrier to entry that non-developers, budding developers, and sometimes seasoned developers have to contend with. But it's also (a selection of competing frontiers of) the best language we have, after many generations of rationalistic and empirical refinement, for humans to unambiguously specify what they mean at the semantic level of software development as it stands! For a long time now we haven't been constrained in the domain of programming language syntax by the complexity or performance of parser implementations. Instead, modern programming languages tend toward simpler formal grammars because they make it easier for _humans_ to understand what's going on when reading the code.

AI tools promise to (amongst other things; don't come at me AI enthusiasts!) replace programming language syntax with natural language. But actually natural language is a terrible syntax for clearly and unambiguously conveying intent!

If you want a more venerable example, just look at mathematical syntax, a language that has never been constrained by computer implementation but was developed by humans for humans to read and write their meaning in subtle domains efficiently and effectively. Mathematicians started with natural language and, through a long process of iteration, came to modern-day mathematical syntax. There's no push to replace mathematical syntax with natural language because, even though that would definitely make some parts of the mathematical process easier, we've discovered through hard experience that it makes the process as a whole much harder.
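A toy sketch of that ambiguity point (hypothetical names and policy, purely for illustration): the same one-sentence English requirement admits several readings, while the code is forced to commit to exactly one.

    # "Charge inactive users a $5 fee after 30 days" -- ambiguous in English:
    # 30 days since signup or since last login? Calendar days or 24-hour periods?
    # Charged once, or on every billing run? The code below picks one reading
    # and makes it unambiguous (all names here are made up for illustration).
    from datetime import datetime, timedelta

    INACTIVITY_WINDOW = timedelta(days=30)
    FEE_CENTS = 500

    def inactivity_fee(last_login: datetime, now: datetime) -> int:
        """Charge 500 cents on this billing run if the user has not logged in
        during the 30 days preceding `now`; otherwise charge nothing."""
        return FEE_CENTS if now - last_login >= INACTIVITY_WINDOW else 0

Every one of those choices could have gone another way, and the syntax is what forces the choice to be made visibly.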
Second, humans (as a gestalt, not necessarily as individuals) always operate at the maximum feasible level of complexity, because there are benefits to be extracted from the higher complexity levels and if we are operating below our maximum complexity budget we're leaving those benefits on the table. From time to time we really do manage to hop up the ladder of abstraction, at least as far as mainstream development goes. But the complexity budget we save by no longer needing to worry about the details we've abstracted over immediately gets reallocated to the upper abstraction levels, providing things like development velocity, correctness guarantees, or UX sophistication. This implies that the sum total of complexity involved in software development will always remain roughly constant.

This is of course a win, as we can produce more/better software (assuming we really have abstracted over those low-level details and they're not waiting for the right time to leak through into our nice clean abstraction layer and bite us…), but as a process it will never reduce the total amount of ‘software development’ work to be done, whatever kinds of complexity that may come to comprise. In fact, anecdotally it seems to be subject to some kind of Braess' paradox: the more software we build, the more our society runs on software, the higher the demand for software becomes.

If you think about it, this is actually quite a natural consequence of the ‘constant complexity budget’ idea. As we know, software is made of decisions (https://siderea.dreamwidth.org/1219758.html), and the more ‘manual’ labour we free up at the bottom of the stack the more we free up complexity budget to be spent on the high-level decisions at the top. But there's no cap on decision-making! If you ever find yourself with spare complexity budget left over after making all your decisions you can always use it to make decisions about how you make decisions, ad infinitum, and yesterday's high-level decisions become today's menial labour.

The only way out of that cycle is to develop intelligences (software, hardware, wetware…) that can not only reason better at a particular level of abstraction than humans but also climb the ladder faster than humanity as a whole — singularity, to use a slightly out-of-vogue term. If we as a species fall off the bottom of the complexity window then there will no longer be a productivity-driven incentive to ideate, though I rather look forward to a luxury-goods market of all-organic artisanal ideas :)
I don't even think that "singularity-level coding agents" get us there. A big part of engineering is working with PMs, working with management, working across teams, working with users, to help distill their disparate wants and needs down into a coherent and usable system.
Knowing when to push back, when to trim down a requirement, when to replace a requirement with something slightly different, when to expand a requirement because you're aware of multiple distinct use cases to which it could apply, or even a new requirement that's interesting enough that it might warrant updating your "vision" for the product itself: that's the real engineering work that even a "singularity-level coding agent" alone could not replace.
An AI agent almost universally says "yes" to everything. They have to! If OpenAI starts selling tools that refuse to do what you tell them, who would ever buy them? And maybe that's the fundamental distinction. Something that says "yes" to everything isn't a partner, it's a tool, and a tool can't replace a partner by itself.
I think that's exactly an example of climbing the abstraction ladder. An agent that's incapable of reframing the current context, given a bad task, will try its best to complete it. An agent capable of generalizing to an overarching goal can figure out when the current objective is at odds with the more important goal.
You're correct in that these aren't really ‘coding agents’ any more, though. Any more than software developers are!
Not just the abstraction ladder though. Also the situational awareness ladder, the functionality ladder, and most importantly the trust ladder.
I can kind of trust the thing to make code changes because the task is fairly well-defined, and there are compile errors, unit tests, code reviews, and other gating factors to catch mistakes. As you move up the abstraction ladder though, how do I know that this thing is actually making sound decisions versus spitting out well-formatted AIorrhea?
At the very least, they need additional functionality to sit in on and contribute to meetings, write up docs and comment threads, ping relevant people on chat when something changes, and set up meetings to resolve conflicts or uncertainties, and generally understand their role, the people they work with and their roles, levels, and idiosyncrasies, the relative importance and idiosyncrasies of different partners, the exceptions for supposed invariants and why they exist and what it implies and when they shouldn't be used, when to escalate vs when to decide vs when to defer vs when to chew on it for a few days as it's doing other things, etc.
For example, say you have an authz system and you've got three partners requesting three different features, the combination of which would create an easily identifiable and easily attackable authz back door. Unless you specifically ask AI to look for this, it'll happily implement those three features and sink your company. You can't fault it: it did everything you asked. You just trusted it with an implicit requirement that it didn't meet. It wasn't "situationally aware" enough to read between the lines there. What you really want is something that would preemptively identify the conflicts, schedule meetings with the different parties, get a better understanding of what each request is trying to unblock, and ideally distill everything down into a single feature that unblocks them all. You can't just move up the abstraction ladder without moving up all those other ladders as well.
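A minimal sketch of how that plays out (entirely hypothetical rules and names, just to show the shape of the problem): each rule below is a reasonable feature request on its own, and it's only the conjunction that opens the back door.

    # Hypothetical authz sketch: three separately requested rules, one back door.
    def can_read(user: dict, resource: dict) -> bool:
        # Feature 1 (support team): support staff may act as the user they are
        # impersonating, to reproduce bugs.
        if user.get("role") == "support" and user.get("impersonating"):
            user = user["impersonating"]

        # Feature 2 (sales team): external contractors count as members of the
        # org that sponsors them.
        orgs = set(user.get("orgs", []))
        if user.get("sponsor_org"):
            orgs.add(user["sponsor_org"])

        # Feature 3 (product team): anything marked "org-shared" is readable by
        # every member of that org.
        if resource.get("org_shared") and resource.get("org") in orgs:
            return True

        return user.get("id") == resource.get("owner_id")

Chain them: impersonate a sponsored contractor and every "org-shared" document in the sponsoring org becomes readable, an access path nobody requested, reviewed, or logged. Nothing in any single diff looks wrong.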
Maybe that's possible someday, but right now they're still just okay coders with no understanding of anything beyond the task you just gave them to do. That's fine for single-person hobby projects, but it'll be a while before we see them replacing engineers in the business world.
Well probably we'd want a person who really gets the AI, as they'll have a talent for prompting it well.
Meaning: knows how to talk to computers better than other people.
So a programmer then...
I think it's not that people are stupid. I think there's actually a glee behind the claims AI will put devs out of work - like they feel good about the idea of hurting them, rather than being driven by dispassionate logic.
Outside of SV the thought of More Tech being the answer to ever greater things is met with great skepticism these days. It's not that people hate engineers, and most people are content to hold their nose while the mag7 make 401k go up, but people are sick of Big Tech. Like it or not, the Musks, Karps, Thiels, Bezos's have a lot to do with that.
Not imputing that to you, but it seems like there are people out there who believe money is all that matters. The map with the richest details won't save anyone in a territory that was turned into a wasteland unable to produce a single apple on the whole land.
Yes, but that's because Capitalism is mostly built on the idea of fungibility. So yeah, Americans have told themselves for over a century: whatever they need, just substitute money and you can get it eventually. All other things aside, that's a pretty toxic, if not downright psychotic, way to reframe your relationship with society and other people.
No high paid manager wants to learn that their visionary thinking was just the last iteration of the underpants gnome meme.
Some things sound good at first but unfortunately are not that easy to actually do
Devs are where the project meets reality in general, and this is what I always try to explain to people. And it's the same with construction, by the way. Pictures and blueprints are nice but sooner or later you're going to need someone digging around in the dirt.
Some people just see it as a cost. At one "tech" startup I worked at, I got a lengthy pitch from a sales exec that they shouldn't have a software team at all, that we'd never be able to build anything useful without spending millions, and that the money would be better spent on the sales team, even though they'd have nothing to sell lmfao. And the real laugh was that the dev team was heavily subsidized by R&D grants anyway.
Even that is the wrong question. The whole promise of the stock market, of AI is that you can "run companies" by just owning shares and knowing nothing at all. I think that is what "leaders" hope to achieve. It's a slightly more dressed get-rich-quick scheme.
Invest $1000 into AI, have a $1000000 company in a month. That's the dream they're selling, at least until they have enough investment.
It of course becomes "oh, sorry, we happen to have taken the only huge business for ourselves. Is your kidney now for sale?"
If these things can ever actually think and understand a codebase this mindset makes sense, but as of now it's a short-sighted way to work. The quality of the output is usually not great, and in some cases terrible. If you're just blindly accepting code with no review, eventually things are going to implode, and the AI is more limited than you are in understanding why. It's not going to save you in its current form.
The reason those things matter in a traditional project is because the previous developers fucked up, and the product is now crashing and leaking money and clients like a sinking Titanic.
With all these AIs chaining and prompting each other, we're approaching the point where some unlucky person is going to ask an AI something and it will consume all the energy in the universe trying to compute the answer.
The day you successfully implement your solution with a prompt, your solution is valued at the cost of a prompt.
There is no value to anything easily achieved by generative tools anymore.
Now the value is in either:
a. generative technology, but requiring a substantial amount of coordination, curation, and compute power.
b. a substantial amount of data.
c. scarce intellectual human work.
And scarce but non-intellectually-demanding human work was dropped from the list of valuable things.
LLMs are a box where the input has to be generated by someone/something, but also the output has to be verified somehow (because, like humans, it isn't always correct). So you either need a human at "both ends", or some very clever AI filling those roles.
But I think the human doing those things probably needs slightly different skills and experience than the average legacy developer.
A few observations from the current tech + services market:
Service-led companies are doing relatively better right now. Lower costs, smaller teams, and a lot of “good enough” duct-tape solutions are shipping fast.
Fewer developers are needed to deliver the same output. Mature frameworks, cloud, and AI have quietly changed the baseline productivity.
And yet, these companies still struggle to hire and retain people. Not because talent doesn’t exist, but because they want people who are immediately useful, adaptable, and can operate in messy environments.
Retention is hard when work is rushed, ownership is limited, and growth paths are unclear. People leave as soon as they find slightly better clarity or stability.
On the economy: it doesn’t feel like a crash, more like a slow grind. Capital is cautious. Hiring is defensive. Every role needs justification.
In this environment, it’s a good time for “hackers” — not security hackers, but people who can glue systems together, work with constraints, ship fast, and move without perfect information.
Comfort-driven careers are struggling. Leverage-driven careers are compounding.
Curious to see how others are experiencing this shift.
Let’s not forget that we are just now recovering from the market corrections of the pandemic. Pandemic level tech industry hiring was insane and many of those companies who later held layoffs were just sending the growth line back to where it should be.
I think pressure to ship is always there. I don’t know if that’s intensifying or not. I can understand where managers and executives think AI = magical work faster juice, but I imagine those expectations will hit their correction point at some time.
I think that programming as a job has already changed. Because it is hard for most people to tell the difference between someone who actually has programming skills and experience versus someone who has some technical ingenuity but has only ever used AI to program for them.
Now the expectation from some executives or high level managers is that managers and employees will create custom software for their own departments with minimal software development costs. They can do this using AI tools, often with minimal or no help from software engineers.
It's not quite the equivalent of having software developed entirely by software engineers, but it can be a significant step up from what you typically get from Excel.
I have a pretty radical view that the leading edge of this stuff has been moving much faster than most people realize:
2024: AI-enhanced workflows automating specific tasks
2026: the AI Employee emerges -- robust memory, voice interface, multiple tasks, computer and browser use. They manage their own instructions, tools and context
2027: Autonomous AI Companies become viable. AI CEO creates and manages objectives and AI employees
Note that we have had the AI Employee and AI Organization for a while in different, somewhat weak forms. But in the next 18 months or so, as model and tooling abilities continue to improve, they will probably be viable for a growing number of business roles and businesses.
I see no reason why this wouldn't be achievable. Having lived most of my life in the land of details, country of software development, I'm acutely aware 90% of effort goes into giving precise answers to irrelevant questions. In almost all problems I've worked on, whether at tactical or strategic scale, there's either a single family of answers, or a broad class of different ones. However, no programming language supports the notion of "just do the usual" or "I don't care, pick whatever, we can revisit the topic once the choice matters". Either way, I'm forced to pick and spell out a concrete answer myself, by hand. Fortunately, LLMs are slowly starting to help with that.
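A small sketch of what those forced answers look like in practice (hypothetical helper, every choice picked arbitrarily): even a throwaway "save the report" function makes you answer half a dozen questions you may not care about yet.

    # Hypothetical example: every commented line is a decision the language makes
    # you spell out, even where "whatever, pick something sane" would have done.
    import json
    from datetime import datetime, timezone
    from pathlib import Path

    def save_report(report: dict, out_dir: str = "reports") -> Path:
        Path(out_dir).mkdir(parents=True, exist_ok=True)               # create dirs or fail?
        stamp = datetime.now(timezone.utc).strftime("%Y%m%dT%H%M%SZ")  # UTC or local? which format?
        path = Path(out_dir) / f"report-{stamp}.json"                  # naming scheme? JSON or CSV?
        text = json.dumps(report, indent=2, sort_keys=True)            # pretty-print? stable key order?
        path.write_text(text, encoding="utf-8")                        # which encoding? overwrite silently?
        return path

Few of those answers may matter to the problem actually being solved, but each one still has to be written down by hand.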
In other words, it all looks easy in hindsight only.
You can't just know right off the bat. Doing so contradicts the premise. You cannot determine whether a detail is unimportant unless you get detailed. If you only care about a few grains of sand in a bucket, you still have to search through the whole bucket of sand for those few grains.
Programming languages already take lots of decisions implicitly and explicitly on one’s behalf. But there are way more details of course, which are then handled by frameworks, libraries, etc. Surely at some point, one has to take a decision? Your underlying point is about avoiding boilerplate, and LLMs definitely help with that already - to a larger extent than cookie cutter repos, but none of them can solve IRL details that are found through rigorous understanding of the problem and exploration via user interviews, business challenges, etc.
The problem with LLMs is that it is not only the "irrelevant details" that are hallucinated. It is also "very relevant details" which either make the whole system inconsistent or full of security vulnerabilities.
But if it's security critical? You'd better be touching every single line of code and you'd better fully understand what each one does, what could go wrong in the wild, how the approach taken compares to best practices, and how an attacker might go about trying to exploit what you've authored. Anything less is negligence on your part.
Which seems like an apt analogy for software. I see people all the time who build systems and they don't care about the details. The results are always mediocre.
I think this is a major point people do not mention enough during these debates on "AI vs Developers": the business/stakeholder side is completely fine with average and mediocre solutions as long as those solutions are delivered quickly and priced competitively. They will gladly use a vibecoded solution if the solution kinda sorta mostly works. They don't care about security, performance or completeness... such things are to be handled when/if they reach the user/customer in significant numbers.

So while we (the devs) are thinking back to all the instances we used gpt/grok/claude/... and not seeing how the business could possibly arrive at our solutions just with AI and without us in the loop... the business doesn't know any of the details nor does it care. When it comes to anything IT related, your typical business doesn't know what it doesn't know, which makes it easy to fire employees/contractors for redundancy first (because we have AI now) and ask questions later (uhh... because we have AI now).
"Writing is nature's way of letting you know how sloppy your thinking is."
— Richard Guindon
This is certainly true of writing software.
That said, I am assuredly enjoying trying out artificial writing and research assistants.
Yes, it has nothing to do with dev specifically; dev "just" happens to be the way to do so while being text based, which is the medium of LLMs. What also "just" happens to be convenient is that dev is expensive, so if a new technology might help to make something possible and/or make it inexpensive, it's potentially a market.
Now pesky details like actual implementation, who got time for that, it's just few more trillions away.
Speech recognition was a joke for half a century until it wasn’t. Machine translation was mocked for decades until it quietly became infrastructure. Autopilot existed forever before it crossed the threshold where it actually mattered. Voice assistants were novelty toys until they weren’t. At the same time, some technologies still haven’t crossed the line. Full self driving. General robotics. Fusion. History does not point one way. It fans out.
That is why invoking history as a veto is lazy. It is a crutch people reach for when it’s convenient. “This happened before, therefore that’s what’s happening now,” while conveniently ignoring that the opposite also happened many times. Either outcome is possible. History alone does not privilege the comforting one.
If you want to argue seriously, you have to start with ground truth. What is happening now. What the trendlines look like. What follows if those trendlines continue. Output per developer is rising. Time from idea to implementation is collapsing. Junior and mid level work is disappearing first. Teams are shipping with fewer people. These are not hypotheticals. The slope matters more than anecdotes. The relevant question is not whether this resembles CASE tools. It’s what the world looks like if this curve runs for five more years. The conclusion is not subtle.
The reason this argument keeps reappearing has little to do with tools and everything to do with identity. People do not merely program. They are programmers. “Software engineer” is a marker of intelligence, competence, and earned status. It is modern social rank. When that rank is threatened, the debate stops being about productivity and becomes about self preservation.
Once identity is on the line, logic degrades fast. Humans are not wired to update beliefs when status is threatened. They are wired to defend narratives. Evidence is filtered. Uncertainty is inflated selectively. Weak counterexamples are treated as decisive. Strong signals are waved away as hype. Arguments that sound empirical are adopted because they function as armor. “This happened before” is appealing precisely because it avoids engaging with present reality.
This is how self delusion works. People do not say “this scares me.” They say “it’s impossible.” They do not say “this threatens my role.” They say “the hard part is still understanding requirements.” They do not say “I don’t want this to be true.” They say “history proves it won’t happen.” Rationality becomes a costume worn by fear. Evolution optimized us for social survival, not for calmly accepting trendlines that imply loss of status.
That psychology leaks straight into the title. Calling this a “recurring dream” is projection. For developers, this is not a dream. It is a nightmare. And nightmares are easier to cope with if you pretend they belong to someone else. Reframe the threat as another person’s delusion, then congratulate yourself for being clear eyed. But the delusion runs the other way. The people insisting nothing fundamental is changing are the ones trying to sleep through the alarm.
The uncomfortable truth is that many people do not stand to benefit from this transition. Pretending otherwise does not make it false. Dismissing it as a dream does not make it disappear. If you want to engage honestly, you stop citing the past and start following the numbers. You accept where the trendlines lead, even when the destination is not one you want to visit.
My dude, I just want to point out that there is no evidence of any of this, and a lot of evidence of the opposite.
> If you want to engage honestly, you stop citing the past and start following the numbers. You accept where the trendlines lead, even
You first, lol.
> This is how self delusion works
Yeah, about that...
“You first, lol” isn’t a rebuttal either. It’s an evasion. The claim was not “the labor market has already flipped.” The claim was that AI-assisted coding has changed individual leverage, and that extrapolating that change leads somewhere uncomfortable. Demanding proof that the future has already happened is a category error, not a clever retort.
And yes, the self-delusion paragraph clearly hit, because instead of addressing it, you waved vaguely and disengaged. That’s a tell. When identity is involved, people stop arguing substance and start contesting whether evidence is allowed to count yet.
Now let’s talk about evidence, using sources who are not selling LLMs, not building them, and not financially dependent on hype.
Martin Fowler has explicitly written about AI-assisted development changing how code is produced, reviewed, and maintained, noting that large portions of what used to be hands-on programmer labor are being absorbed by tools. His framing is cautious, but clear: AI is collapsing layers of work, not merely speeding up typing. That is labor substitution at the task level.
Kent Beck, one of the most conservative voices in software engineering, has publicly stated that AI pair-programming fundamentally changes how much code a single developer can responsibly produce, and that this alters team dynamics and staffing assumptions. Beck is not bullish by temperament. When he says the workflow has changed, he means it.
Bjarne Stroustrup has explicitly acknowledged that AI-assisted code generation changes the economics of programming by automating work that previously required skilled human attention, while also warning about misuse. The warning matters, but the admission matters more: the work is being automated.
Microsoft Research, which is structurally separated from product marketing, has published peer-reviewed studies showing that developers using AI coding assistants complete tasks significantly faster and with lower cognitive load. These papers are not written by executives. They are written by researchers whose credibility depends on methodological restraint, not hype.
GitHub Copilot’s controlled studies, authored with external researchers, show measurable increases in task completion speed, reduced time-to-first-solution, and increased throughput. You can argue about long-term quality. You cannot argue “no evidence” without pretending these studies don’t exist.
Then there is plain, boring observation.
AI-assisted coding is directly eliminating discrete units of programmer labor: boilerplate, CRUD endpoints, test scaffolding, migrations, refactors, first drafts, glue code. These were not side chores. They were how junior and mid-level engineers justified headcount. That work is disappearing as a category, which is why junior hiring is down and why backfills quietly don’t happen.
You don’t need mass layoffs to identify a structural shift. Structural change shows up first in roles that stop being hired, positions that don’t get replaced, and how much one person can ship. Waiting for headline employment numbers before acknowledging the trend is mistaking lagging indicators for evidence.
If you want to argue that AI-assisted coding will not compress labor this time, that’s a valid position. But then you need to explain why higher individual leverage won’t reduce team size. Why faster idea-to-code cycles won’t eliminate roles. Why organizations will keep paying for surplus engineering labor when fewer people can deliver the same output.
But “there is no evidence” isn’t a counterargument. It’s denial wearing the aesthetic of rigor.
> If you want to argue seriously, you have to start with ground truth. What is happening now. What the trendlines look like. What follows if those trendlines continue.
Wait, so we can infer the future from "trendlines", but not from past events? Either past events are part of a macro trend, and are valuable data points, or the micro data points you choose to focus on are unreliable as well. Talk about selection bias...
I would argue that data points that are barely a few years old, and obscured by an unprecedented hype cycle and gold rush, are not reliable predictors of anything. The safe approach would be to wait for the market to settle, before placing any bets on the future.
> Time from idea to implementation is collapsing. Junior and mid level work is disappearing first. Teams are shipping with fewer people. These are not hypotheticals.
What is hypothetical is what will happen to all this software and the companies that produced it a few years down the line. How reliable is it? How maintainable is it? How many security issues does it have? What has the company lost because those issues were exploited? Will the same people who produced it using these new tools be able to troubleshoot and fix it? Will the tools get better to allow them to do that?
> The reason this argument keeps reappearing has little to do with tools and everything to do with identity.
Really? Everything? There is no chance that some people are simply pointing out the flaws of this technology, and that the marketing around it is making it out to be far more valuable than it actually is, so that a bunch of tech grifters can add more zeroes to their net worth?
I don't get how anyone can speak about trends and what's currently happening with any degree of confidence. Let alone dismiss the skeptics by making wild claims about their character. Do better.
If past events can be dismissed as “noise,” then so can selectively chosen counterexamples. Either historical outcomes are legitimate inputs into a broader signal, or no isolated datapoint deserves special treatment. You cannot appeal to trendlines while arbitrarily discarding the very history that defines them without committing selection bias.
When large numbers of analogous past events point in contradictory directions, individual anecdotes lose predictive power. Trendlines are not an oracle, but once the noise overwhelms the signal, they are the best approximation we have.
>What is hypothetical is what will happen to all this software and the companies that produced it a few years down the line. How reliable is it? How maintainable is it? How many security issues does it have? What has the company lost because those issues were exploited? Will the same people who produced it using these new tools be able to troubleshoot and fix it? Will the tools get better to allow them to do that?
These are legitimate questions, and they are all speculative. My expectation is that code quality will decline while simultaneously becoming less relevant. As LLMs ingest and reason over ever larger bodies of software, human oriented notions of cleanliness and maintainability matter less. LLMs are far less constrained by disorder than humans are.
>Really? Everything? There is no chance that some people are simply pointing out the flaws of this technology, and that the marketing around it is making it out to be far more valuable than it actually is, so that a bunch of tech grifters can add more zeroes to their net worth?
The flaws are obvious. So obvious that repeatedly pointing them out is like warning that airplanes can crash while ignoring that aviation safety has improved to the point where you are far more likely to die in a car than in a metal tube moving at 500 mph.
Everyone knows LLMs hallucinate. That is not contested. What matters is the direction of travel. The trendline is clear. Just as early aviation was dangerous but steadily improved, this technology is getting better month by month.
That is the real disagreement. Critics focus on present day limitations. Proponents focus on the trajectory. One side freezes the system in time; the other extrapolates forward.
>I don’t get how anyone can speak about trends and what’s currently happening with any degree of confidence. Let alone dismiss the skeptics by making wild claims about their character. Do better.
Because many skeptics are ignoring what is directly observable. You can watch AI generate ultra complex, domain specific systems that have never existed before, in real time, and still hear someone dismiss it entirely because it failed a prompt last Tuesday.
Repeating the limitations is not analysis. Everyone who is not a skeptic already understands them and has factored them in. What skeptics keep doing is reciting known flaws while refusing to reason about what is no longer a limitation.
At that point, the disagreement stops being about evidence and starts looking like bias.
There is one painfully obvious, undeniable historical trend: making programmer work easier increases the number of programmers. I would argue a modern developer is 1000x more effective than one working in the times of punch cards - yet we have roughly 1000x more software developers than back then.
I'm not an AI skeptic by any means, and use it everyday at my job where I am gainfully employed to develop production software used by paying customers. The overwhelming consensus among those similar to me (I've put down all of these qualifiers very intentionally) is that the currently existing modalities of AI tools are a massive productivity boost mostly for the "typing" part of software (yes, I use the latest SOTA tools, Claude Opus 4.5 thinking, blah, blah, so do most of my colleagues). But the "typing" part hasn't been the hard part for a while already.
You could argue that there is a "step change" coming in the capabilities of AI models, which will entirely replace developers (so software can be "willed into existence", as elegantly put by OP), but we are no closer to that point now than we were in December 2022. All the success of AI tools in actual, real-world software has been in tools specifically designed to assist existing, working, competent developers (e.g. Cursor, Claude Code), and the tools which have positioned themselves to replace them have failed (Devin).
I'm yet to be convinced of this. I keep hearing it, but every time I look at the results they're basically garbage.
I think LLMs are useful tools, but I haven't seen anything convincing that they will be able to replace even junior developers any time soon.
COBOL was supposed to let managers write programs. VB let business users make apps. Squarespace killed the need for web developers. And now AI.
What actually happens: the tooling lowers the barrier to entry, way more people try to build things, and then those same people need actual developers when they hit the edges of what the tool can do. The total surface area of "stuff that needs building" keeps expanding.
The developers who get displaced are the ones doing purely mechanical work that was already well-specified. But the job of understanding what to build in the first place, or debugging why the automated thing isn't doing what you expected - that's still there. Usually there's more of it.
When LLMs first showed up publicly it was a huge leap forward, and people assumed it would continue improving at the rate they had seen but it hasn't.
How do you know that? For tech products most of the users are also technically literate and can easily use Claude Code or whatever tool we are using. They easily tell CC specifically what they need. Unless you create social media apps or bank apps, the customers are pretty tech savvy.
With AI, probably you don’t need 95% of the programmers who do that job anyway. Physicists who know the algorithm much better can use AI to implement a majority of the system and maybe you can have a software engineer orchestrate the program in the cloud or supercomputer or something but probably not even that.
Okay, the idea I was trying to get across before I rambled was that many times the customer knows what they want very well and much better than the software engineer.
Maybe you already understood this, but many of the "AI boosters" you refer to genuinely believe we have "seen the start of it".
Or at least they appear to believe it.
But where are the S curves for programmers at?
Have you ever paid for software? I have, many times, for things I could build myself
Building it yourself as a business means you need to staff people, taking them away from other work. You need to maintain it.
Run even conservative numbers for it and you'll see it's pretty damn expensive if humans need to be involved. It's not the norm that that's going to be good ROI
No matter how good these tools get, they can't read your mind. It takes real work to get something production ready and polished out of them
At my company, we call them technical business analysts. Their director was a developer for 10 years, and then skyrocketed through the ranks in that department.
AI usage in coding will not stop ofc but normal people vibe coding production-ready apps is a pipedream that has many issues independent of how good the AI/tools are.
https://www.commitstrip.com/en/2016/08/25/a-very-comprehensi...
I'm not sure how well that would work in practice, nor why such an approach is not used more often than it is. But yes the point is that then some humans would have to write such tests as code to pass to the AI to implement. So we would still need human coders to write those unit-tests/specs. Only humans can tell AI what humans want it to do.
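As a rough sketch of what "humans write the spec as tests, the AI writes the implementation" could look like (the myapp.text module and slugify function below are hypothetical; they're the artifact the agent would be asked to produce):

    # Hypothetical spec-as-tests: a human pins down the behaviour, and the agent's
    # job is to generate myapp/text.py with a slugify() that makes these pass.
    import pytest
    from myapp.text import slugify  # hypothetical module, to be generated

    def test_lowercases_and_hyphenates():
        assert slugify("Hello World") == "hello-world"

    def test_strips_punctuation():
        assert slugify("Rock & Roll!") == "rock-roll"

    def test_collapses_repeated_separators():
        assert slugify("a  --  b") == "a-b"

    def test_blank_input_is_rejected():
        with pytest.raises(ValueError):
            slugify("   ")

Writing tests like these is still programming: someone has to decide that "&" disappears rather than becoming "and", which is exactly the kind of detail this thread keeps coming back to.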
AI can code because the user of AI can code.
Debbie from accounting doesn't have a clue what an int is
( variation of .. "Ours is not to reason why, ours is but to do and die" )
Just today, I needed a basic web application, the sort of which I can easily get off the shelf from several existing vendors.
I started down the path of building my own, because, well, that's just what I do, then after about 30 minutes decided to use an existing product.
I have hunch that, even with AI making programming so much easier, there is still a market for buying pre-written solutions.
Further, I would speculate that this remains true of other areas of AI content generation. For example, even if it's trivially easy to have AI generate music per your specifications, it's even easier to just play something that someone else already made (be it human-generated or AI).
What if AI brings the China situation to the entire world? Would the mentality shift? You seem to be basing it on the cost-benefit calculations of companies today. Yes, SaaS makes sense when you have developers (many of whom could be mediocre) who are so expensive that it makes more sense to just pay a company that has already gone through the work of finding good developers and spent the capital to build a decent version of what you are looking for. Compare that with a scenario where the cost of a good developer has fallen dramatically, so now you can produce the same results with far less money (a cheap developer, good or mediocre, guiding an AI). That cheap developer does not even have to be in the US.
At the high end, China pays SWEs better than South Korea, Japan, Taiwan, India, and much of Europe, so they attract developers from those locations. At the low end, they have a ton of low- to mid-tier developers from 3rd tier+ institutions who can hack well enough. It is sort of like India: skilled people with credentials to back it up can do well, but there are tons of lower skilled people with some ability that are relatively cheap and useful.
China is going big into local LLMs, not sure what that means long term, but Alibaba's Qwen is definitely competitive, and its the main story these days if you want to run a coding model locally.
I hear those other Asian countries are just like China in terms of adoption.
>China is going big into local LLMs, not sure what that means long term, but Alibaba's Qwen is definitely competitive, and its the main story these days if you want to run a coding model locally.
It seems like China's strategy of low-cost LLMs applied pragmatically to all layers of the country's "stack" is the better approach, at least right now. Here in the US they are spending every last penny to try and build some sort of Skynet god. If it fails, well, I guess the Chinese were right after all. If it succeeds, well, I don't know what will happen then.
> It seems like China's strategy of low-cost LLMs applied pragmatically to all layers of the stack is the better approach, at least right now. Here in the US they are spending every last penny to try and build some sort of Skynet god. If it fails, well, I guess the Chinese were right after all. If it succeeds, well, I don't know what will happen then.
China lacks those big NVIDIA GPUs that were sanctioned and are now export-tariffed, so going with smaller models that could run on hardware they could access was the best move for them. This could either work out (local LLM computing is the future, and China is ahead of the game by circumstance) or maybe it doesn't work out (big server-based LLMs are the future and China is behind the curve). I think the Chinese government would have actually preferred centralization, control, and censorship, but the current situation is that the Chinese models are the most uncensored you can get these days (with some fine tuning, they are heavily used in the adult entertainment industry... haha, socialist values).
I wouldn't trust the Chinese government to not do Skynet if they get the chance, but Chinese entrepreneurs are good at getting things done and avoiding government interference. Basically, the world is just getting lucky by a bunch of circumstances ATM.
I would agree that if the scenario is a business, to either buy an off-the-shelf software solution or pay a small team to develop it, and if the off-the-shelf solution was priced high enough, then having it custom built with AI (maybe still with a tiny number of developers involved) could end up being the better choice. Really all depends on the details.
Historically, it would seem that often lowering the amount of people needed to produce a good is precisely what makes it cheaper.
So it’s not hard to imagine a world where AI tools make expert software developers significantly more productive while enabling other workers to use their own little programs and automations on their own jobs.
In such a world, the number of “lines of code” being used would be much greater than today.
But it is not clear to me that the number of people working full time as “software developers” would be larger as well.
Not automatically, no.
How it affects employment depends on the shapes of the relevant supply/demand curves, and I don't think those are possible to know well for things like this.
For the world as a whole, it should be a very positive thing if creating usable software becomes an order of magnitude cheaper, and millions of smart people become available for other work.
Counter argument - if what you say is true, we will have a lot more custom & personalized software and the tech stacks behind those may be even more complicated than they currently are because we're now wanting to add LLMs that can talk to our APIs. We might also be adding multiple LLMs to our back ends to do things as well. Maybe we're replacing 10 but now someone has to manage that LLM infrastructure as well.
My opinion will change by tomorrow but I could see more people building software that are currently experts in other domains. I can also see software engineers focusing more on keeping the new more complicated architecture being built from falling apart & trying to enforce tech standards. Our roles may become more infra & security. Less features, more stability & security.
Hmm, outsourcing doesn't contradict Jevons' paradox?
Doesn't mean it will happen this time (i.e. if AI truly becomes what was promised) and actually it's not likely it will!
> AI changes how developers work rather than eliminating the need for their judgment. The complexity remains. Someone must understand the business problem, evaluate whether the generated code solves it correctly, consider security implications, ensure it integrates properly with existing systems, and maintain it as requirements evolve.
What is your rebuttal to this argument leading to the idea that developers do need to fear for their job security?
It might be not enough by itself, but it shows that something has changed in comparison with the 70-odd previous years.
Meaningful consequences of mistakes in software don't manifest themselves through compilation errors, but through business impacts which so far are very far outside of the scope of what an AI-assisted coding tool can comprehend.
What previously needed five devs, might be doable by just two or three.
In the article, he says there are no shortcuts to this part of the job. That does not seem likely to be true. The research and thinking through the solution goes much faster using AI, compared to before where I had to look up everything.
In some cases, agentic AI tools are already able to ask the questions about architecture and edge cases, and you only need to select which option you want the agent to implement.
There are shortcuts.
Then the question becomes how large the productivity boost will be and whether the idea that demand will just scale with productivity is realistic.
I think you are basing your reasoning on the current generation of models. But if future generations are able to do everything you've listed above, what work will be left for developers? I'm not saying that we will ever get such models, just that when they appear, they will actually displace developers and not create more jobs for them. The business problem will be specified by business people, and even if they get it wrong it won't matter, because iteration will be quick and cheap.
> What is your rebuttal to this argument leading to the idea that developers do need to fear for their job security?
The entire argument is based on the assumption that models won't get better and will never be able to do the things you've listed! But once they become capable of these things, what work will be left for developers?
It's extremely hard to define "human-level intelligence", but I think we can all agree that the definition of it changes with the tools available to humans. Humans seem remarkably suited to adapting to operate at the edges of what the technology of the time can do.
I mean they are promising AGI.
Of course in that case it will not happen this time. However, in that case software dev getting automated would concern me less than the risk of getting turned into some manner of office supply.
Imo, as long as we do NOT have AGI, software-focused professions will stay a viable career path. Someone will have to design software systems at some level of abstraction.
ooohhh I think I missed the intent of the statement... well done!
And overall fewer farmers with more technological skill sets than back in the dustbowl days.
Here (Western Australia) the increase in average farm size by product can be plotted over time along with the fall in numbers working that land.
I get your point, hope you get mine: we have less legal entities operating as "farms". If vibe coding makes you a "developer", working on a farm in an operating capacity makes you a "farmer". You might profess to be a biologist / agronomist, I'm sure some owners are, but doesn't matter to me whether you're the owner or not.
The numbers of nonsupervisory operators in farming activities have decreased using the traditional definitions.
I think it's a reasonable hypothesis that if software cost, say, 20% of what it currently costs to write, the amount written would be at least 5x what we currently produce.
That's not the case for IT, where the entry barrier has been reduced to nothing.
The craftsmen who were forced to go to the factory were not paid more, nor were they better off.
There are not going to be more software engineers in the future than there are now, at least not in what would be recognizable as software engineering today. I could see there being vastly more startups with founders as agent orchestrators and many more CTO jobs. There is no way there are many more 2026-style software engineering jobs at S&P 500 companies in the future. That seems borderline delusional to me.
The first line made me laugh out loud because it made me think of an old boss who I enjoyed working with but could never really do coding. This boss was a rockstar at the business side of things and having worked with ABAP in my career, I couldn't ever imagine said person writing code in COBOL.
However the second line got me thinking. Yes, VB let business users make apps (I made so many forms for fun). But it reminded me how much stuff my boss got done in Excel. Was a total wizard.
You have a good point in that the stuff keeps expanding, because while not all bosses will pick up the new stack, many ambitious ones will. I'm sure it was the case during COBOL, during VB, and was certainly the case when Excel hit the scene, and I suspect that a lot of people will get stuff done with AI that devs used to do.
>But the job of understanding what to build in the first place, or debugging why the automated thing isn't doing what you expected - that's still there. Usually there's more of it.
Honestly this is the million dollar question that is actually being argued back and forth in all these threads. Given a set of requirements, can AI + a somewhat technically competent business person solve all the things a dev used to take care of? It's possible. I'm wondering whether my boss, who couldn't even tell the difference between React and Flask, could in theory... possibly an AI with a large enough context window overcomes these mental model limitations. Would be an interesting experiment for companies to try out.
I find SQL becomes a "stepping stone" to level up for people who live and breathe Excel (for obvious reasons).
Now was SQL considered some sort of tool to help business people do more of what coders could do? Not too sure about that. Maybe Access was that tool and it just didn't stick for various reasons.
And that hits the offshoring companies in India and similar countries probably the most, because those can generally only do their jobs well if everything has been specified to the detail.
you mean "created", past tense. You're basically arguing it's impossible for technical improvements to reduce the number of programmers in the world, ever. The idea that only humans will ever be able to debug code or interpret non-technical user needs seems questionable to me.
Also the percentage of adults working has been dropping for a while. Retirees used to be a tiny fraction of the population; that's no longer the case. People spend more time being educated, or in prison, etc.
Overall people are seeing a higher standard of living while doing less work.
There are lots of negative reasons for this that aren’t efficiency. Aging demographics. Poor education. Increasing complexity leaves people behind.
So, yes, reasons other than efficiency explain why people aren't working, as well as why there are still poor people.
Now we can set arbitrary thresholds for what standard of living every American should have but even knowing people on SNAP it’s not that low.
I certainly hope so, but it depends on whether we will have more demand for such problems. AI can code a complex project by itself because we humans do not care about many details. When we marvel that AI generates a working dashboard for us, we are really accepting that someone else has created a dashboard that meets our expectations. The layout, the color, the aesthetics, the way it interacts, the time series algorithms, etc. We don't care, as it does better than we imagined. This, of course, is inevitable, as many of us do spend enormous amounts of time implementing what other people have already done. Fortunately or unfortunately, it is very hard for a human to repeat other people's work correctly, but it's a breeze for AI. The corollary is that AI will replace a lot of the demand for software developers if we don't have big enough problems to solve -- in the past 20 years we had the internet, cloud, mobile, and machine learning, all big trends that required millions and millions of brilliant minds. Are we going to have the same luck in the coming years? I'm not so sure.
At some point the low hanging automation fruit gets tapped out. What can be put online that isn't there already? Which business processes are obviously going to be made an order of magnitude more efficient?
Moreover, we've never had more developers and we've exited an anomalous period of extraordinarily low interest rates.
The party might be over.
I was working in developer training for a while some 5-10 years back, and already then I was starting to see signs of an incoming over-saturation; the low interest rates probably masked much of it, with happy-go-lucky investments sucking up developers.
Low-hanging and cheap automation work is quickly dwindling now, especially as development firms are searching out new niches when the big "in-IT" customers aren't buying services inside the industry.
Luckily people will retire and young people probably aren't as bullish about the industry anymore, so we'll probably land in an equilibrium; the question is how long it'll take, because the long tail of things enabled by the mobile/tablet revolution is starting to be claimed.
The job is literally building automation.
There is no equivalent to "working on the assembly line" as an SWE.
>Not so many lower skill line worker jobs in the US any more, though
Because Globalization.
but the actual work of constructing reliable systems from vague user requirements with an essentially unbounded resource (software) will exist
imagine being an engineer educated in multiple instruction sets: when compilers arrive on the scene it sure makes their job easier, but that does not retroactively change their education to suddenly have all the requisite mathematics and domain knowledge of say algorithms and data structures.
what is euphemistically described as a "remaining need for people to design, debug and resolve unexpected behaviors" is basically a lie by omission: the advent of AI does not automatically mean previously representative human workers suddenly will know higher level knowledge in order to do that. it takes education to achieve that, no trivial amount of chatbotting will enable displaced human workers to attain that higher level of consciousness. perhaps it can be attained by designing software that uploads AI skills to humans...
I was imagining companies expanding the features they wanted and was skeptical that would be close to enough, but this makes way more sense
In practice, I see expensive reinvention. Developers debug database corruption after pod restarts without understanding filesystem semantics. They recreate monitoring strategies and networking patterns on top of CNI because they never learned the fundamentals these abstractions are built on. They're not learning faster: they're relearning the same operational lessons at orders of magnitude higher cost, now mediated through layers of YAML.
Each wave of "democratisation" doesn't eliminate specialists. It creates new specialists who must learn both the abstraction and what it's abstracting. We've made expertise more expensive to acquire, not unnecessary.
Excel proves the rule. It's objectively terrible: 30% of genomics papers contain gene name errors from autocorrect, JP Morgan lost $6bn from formula errors, Public Health England lost 16,000 COVID cases hitting row limits. Yet it succeeded at democratisation by accepting catastrophic failures no proper system would tolerate.
The pattern repeats because we want Excel's accessibility with engineering reliability. You can't have both. Either accept disasters for democratisation, or accept that expertise remains required.
90% of people building whatever junk their company needs do not. I learned this lesson the hard way after working at both large and tiny companies. It's the people who remain in the bubble of places like AWS, GCP, or people doing hard-core research or engineering that have this mentality. Everyone else eventually learns.
>Excel proves the rule. It's objectively terrible: 30% of genomics papers contain gene name errors from autocorrect, JP Morgan lost $6bn from formula errors, Public Health England lost 16,000 COVID cases hitting row limits. Yet it succeeded at democratisation by accepting catastrophic failures no proper system would tolerate.
Excel is the largest development language in the world. Nothing (not Python, VB, Java etc.) can even come close. Why? Because it literally glues the world together. Everything from the Mega Company, to every government agency to even mom & pop Bed & Breakfast operations run on Excel. The least technically competent people can fiddle around with Excel and get real stuff done that end up being critical pathways that a business relies on.
It's hard to quantify, but I am putting my stake in the ground: Excel + AI will probably help fix many (but not all) of those issues you talk about.
The issues I'm talking about are: "we can't debug kernel issues, so we run 40 pods and tune complicated load balancer health-check procedures in order for the service to work well".
There is no understanding that anything is actually wrong, for they think that it is just the state of the universe, a physical law that prevents whatever issue it is from being resolved. They aren’t even aware that the kernel is the problem, sometimes they’re not even aware that there is a problem, they just run at linear scale because they think they must.
I think you’re just seeing popularity.
The extreme popular and scale of these solutions means more opportunity for problems.
It’s easy to say X is terrible or Y is terrible but the real question is always: compared to what?
If you’re comparing to some hypothetical perfect system that only exists in theory, that’s not useful.
Will insurance policy coverage and premiums change when using non-deterministic software?
It seems like in the early 2000s every tiny company needed a sysadmin to manage the physical hardware, the DB, and custom deployment scripts. That particular job is just gone now.
I can implement zero downtime upgrades easily with Kubernetes. No more late-day upgrades and late-night debug sessions because something went wrong, I can commit any time of the day and I can be sure that upgrade will work.
My infrastructure is self-healing. No more crashed app server.
Some engineering tasks are standardized and outsourced to the professional hoster by using managed services. I don't need to manage operating system updates and some component updates (including Kubernetes).
My infrastructure can be easily scaled horizontally. Both up and down.
I can commit changes to git to apply them or I can easily revert them. I know the whole history perfectly well.
I would need to reinvent half of Kubernetes before, to enable all of that. I guess big companies just did that. I never had resources for that. So my deployments were not good. They didn't scale, they crashed, they required frequent manual interventions, downtimes were frequent. Kubernetes and other modern approaches allowed small companies to enjoy things they couldn't do before. At the expense of slightly higher devops learning curve.
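To make the zero-downtime mechanism concrete, here is a minimal sketch of the kind of rolling update being described, not the commenter's actual setup. Assumptions: a Deployment named "web" in the default namespace, a made-up image tag, and the official kubernetes Python client; in a git-driven workflow the same change would be a commit that a controller applies rather than a direct API call.

    # Sketch only: bump a Deployment's image and let the RollingUpdate
    # strategy replace pods gradually behind the Service.
    from kubernetes import client, config

    config.load_kube_config()        # or load_incluster_config() when running in-cluster
    apps = client.AppsV1Api()

    patch = {"spec": {"template": {"spec": {"containers": [
        {"name": "web", "image": "registry.example.com/web:1.2.4"},  # hypothetical names
    ]}}}}

    # Kubernetes waits for new pods to pass their readiness probes before
    # retiring old replicas, which is where the "zero downtime" comes from.
    apps.patch_namespaced_deployment(name="web", namespace="default", body=patch)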
Kubernetes didn’t democratise operations, it created a new tier of specialists. But what I find interesting is that a lot of that adoption wasn’t driven by necessity. Studies show 60% of hiring managers admit technology trends influence their job postings, whilst 82% of developers believe using trending tech makes them more attractive to employers. This creates a vicious cycle: companies adopt Kubernetes partly because they’re afraid they won’t be able to hire without it, developers learn Kubernetes to stay employable, which reinforces the hiring pressure.
I’ve watched small companies with a few hundred users spin up full K8s clusters when they could run on a handful of VMs. Not because they needed the scale, but because “serious startups use Kubernetes.” Then they spend six months debugging networking instead of shipping features. The abstraction didn’t eliminate expertise, it forced them to learn both Kubernetes and the underlying systems when things inevitably break.
The early 2000s sysadmin managing physical hardware is gone. They’ve been replaced by SREs who need to understand networking, storage, scheduling, plus the Kubernetes control plane, YAML semantics, and operator patterns. We didn’t reduce the expertise required, we added layers on top of it. Which is fine for companies operating at genuine scale, but most of that 95% aren’t Netflix.
Everything was for sure simpler, but also the requirements and expectations were much, much lower. Tech and complexity moved forward with goal posts also moving forward.
Just one example on reliability, I remember popular websites with many thousands if not millions of users would put an "under maintenance" page whenever a major upgrade comes through and sometimes close shop for hours. If the said maintenance goes bad, come tomorrow because they aren't coming up.
Proper HA, backups, monitoring were luxuries for many, and the kind of self-healing, dynamically autoscaled, "cattle not pet" infrastructure that is now trivialized by Kubernetes were sci-fi for most. Today people consider all of this and a lot more as table stakes.
It's easy to shit on cloud and kubernetes and yearn for the simpler Linux-on-a-box days, yet unless expectations somehow revert back 20-30 years, that isn't coming back.
This. In the early 2000s, almost every day after school (3PM ET) Facebook.com was basically unusable. The request would either hang for minutes before responding at 1/10th of the broadband speed at that time, or it would just timeout. And that was completely normal. Also...
- MySpace literally let you inject HTML, CSS, and (unofficially) JavaScript into your profile's freeform text fields
- Between 8-11 PM ("prime time" TV) you could pretty much expect to get randomly disconnected when using dial up Internet. And then you'd need to repeat the arduous sign in dance, waiting for that signature screech that tells you you're connected.
- Every day after school the Internet was basically unusable from any school computer. I remember just trying to hit Google using a computer in the library turning into a 2-5 minute ordeal.
But also and perhaps most importantly, let's not forget: MySpace had personality. Was it tacky? Yes. Was it safe? Well, I don't think a modern web browser would even attempt to render it. But you can't replace the anticipation of clicking on someone's profile and not knowing whether you'll be immediately deafened with loud (blaring) background music and no visible way to stop it.
No matter how much progress we make, as long as reasoning about complex systems is unavoidable, this doesn’t change. We don’t always know what we want, and we can’t always articulate it clearly.
So people building software end up dealing with two problems at once. One is grappling with the intrinsic, irreducible complexity of the system. The other is trying to read the minds of unreliable narrators, including leadership and themselves.
Tools help with the mechanical parts of the job, but they don’t remove the thinking and understanding bottleneck. And since the incentives of leadership, investors, and the people doing the actual work don’t line up, a tug-of-war is the most predictable outcome.
The pattern repeats because the market incentivizes it. AI has been pushed as an omnipotent, all-powerful job-killer by these companies because shareholder value depends on enough people believing in it, not whether the tooling is actually capable. It's telling that folks like Jensen Huang talk about people's negativity towards AI being one of the biggest barriers to advancement, as if they should be immune from scrutiny.
They'd rather try to discredit the naysayers than actually work towards making these products function the way they're being marketed, and once the market wakes up to this reality, it's gonna get really ugly.
Market is not universal gravity, it's just a storefront for social policy.
No political order, no market, no market incentives.
This is why those same mid level managers and C suite people are salivating over AI and mentioning it in every press release.
The reality is that costs are being reduced by replacing US teams with offshore teams. And the layoffs are being spun as a result of AI adoption.
AI tools for software development are here to stay and accelerate in the coming months and years and there will be advances. But cost reductions are largely realized via onshore/offshore replacement.
The remaining onshore teams must absorb much more slack and fixes and in a way end up being more productive.
Hailing from an outsourcing destination I need to ask: to where specifically? We've been laid off all the same. Me and my team spent the second half of 2025 working half time because that's the proposition we were given.
What is this fabled place with an apparent abundance of highly skilled developers? India? They don't make on average much less than we do here - the good ones make more.
My belief is that spending on staff just went down across the board because every company noticed that all the others were doing layoffs, so pressure to compete in the software space is lower. Also all the investor money was spent on datacentres so in a way AI is taking jobs.
So we will reduce headcount in some countries because of things like (perceived) working culture, and increase based on the need to gain goodwill or fulfil contracts from customers.
This can also mean that the type of work outsourced can change pretty quickly. We are getting rid of most of the "developers" in India, because places like Vietnam and eastern Europe are now less limited by language, and are much better to work with. At the same time we are inventing and outsourcing other activities to India because of a desire to sell in their market.
There are a lot of counterexamples throughout history.
Like Liquid Death sells water for a strangely high amount of money - entirely sales / marketing.
International Star Registry gives you a piece of paper and a row in a database that says you own a star.
Many luxury things are just because it's sold by that luxury brand. They are "worth" that amount of money for the status of other people knowing you paid that much for it.
> Many companies aren't selling anything special or are just selling an "idea".
https://www.cs.utexas.edu/~EWD/transcriptions/EWD10xx/EWD104...
The first electronic computers were programmed by manually re-wiring their circuits. Going from that to being able to encode machine instructions on punchcards did not replace developers. Nor did going from raw machine instructions to assembly code. Nor did going from hand-written assembly to compiled low-level languages like C/FORTRAN. Nor did going from low-level languages to higher-level languages like Java, C++, or Python. Nor did relying on libraries/frameworks for implementing functionality that previously had to be written from scratch each time. Each of these steps freed developers from having to worry about lower-level problems and instead focus on higher-level problems. Mel's intellect is freed from having to optimize the position of the memory drum [0] to allow him to focus on optimizing the higher-level logic/algorithms of the problem he's solving. As a result, software has become both more complex but also much more capable, and thus much more common.
(The thing that distinguishes gen-AI from all the previous examples of increasing abstraction is that those examples are deterministic and often formally verifiable mappings from higher abstraction -> lower abstraction. Gen-AI is neither.)
[0] http://catb.org/jargon/html/story-of-mel.html
That's not the goal Anthropic's CEO has. Nor does any other CEO, for that matter.
It is what he can deliver.
People do and will talk about replacing developers though.
That's not to say developers haven't been displaced by abstraction; I suspect many of the people responsible for re-wiring the ENIAC were completely out of a job when punchcards hit the scene. But their absence was filled by a greater number of higher-level punchcard-wielding developers.
Recognizing the barriers & modes of failure (which will be a moving target) lets you respond competently when you are called. Raise your hourly rate as needed.
I don't think AI will completely replace these jobs, but it could reduce job numbers by a very large amount.
That's where I find the analogy on thin ice, because somebody has to understand the layers and their transformations.
I’m not saying generative AI meets this standard, but it’s different from what you’re saying.
Now I guess you can read the code an LLM generates, so maybe that layer does exist. But, that's why I don't like the idea of making a programming language for LLMs, by LLMs, that's inscrutable by humans. A lot of those intermediate layers in compilers are designed for humans, with only assembly generation being made for the CPU.
'Decompilers' work in the machine code direction for human consumption; they can be improved by LLMs.
Militarily, you will want machine code and JS capable systems.
Machine code capabilities cover both memory leaks and firmware dumps and negate the requirement of "source" comprehension.
I wanted to +1 you but I don't think I have the karma required.
Something is lost each step of the abstraction ladder we climb. And the latest rung uses natural language which introduces a lot of imprecision/slop, in a way that prior abstractions did not. And, this new technology providing the new abstraction is non-deterministic on top of that.
There's also the quality issue of the output you do get.
I don't think the analogy of the assembly -> C transition people like to use holds water – there are some similarities but LLMs have a lot of downsides.
Again, this completely ignores that programming vacuum tube computers involved an entirely different type of abstraction than you use with MOSFETs, for example.
I'm finding myself in the position where I can safely ignore any conversation about engineering with anybody who thinks that there is a "right" way to do it, or that there's any kind of ceremony or thinking pattern that needs to stay stable.
Those are all artifacts of humans desiring very little variance, and things that they've encoded because it takes real energy to have to reconfigure your own internal state model to a new paradigm.
For context: we're the creators of ChatBotKit and have been deploying AI agents since the early days (about 2 years ago). These days, there's no doubt our systems are self-improving. I don't mean to hype this (judge for yourself from my skepticism on Reddit) but we're certainly at a stage where the code is writing the code, and the quality has increased dramatically. It didn't collapse as I was expecting.
What I don't know is why this is happening. Is it our experience, the architecture of our codebase, or just better models? The last one certainly plays a huge role, but there are also layers of foundation that now make everything easier. It's a framework, so adding new plugins is much easier than writing the whole framework from scratch.
What does this mean for hiring? It's painfully obvious to me that we can do more with less, and that's not what I was hoping for just a year ago. As someone who's been tinkering with technology and programming since age 12, I thought developers would morph into something else. But right now, I'm thinking that as systems advance, programming will become less of an issue—unless you want to rebuild things from scratch, but AI models can do that too, arguably faster and better.
It is hard to convey that kind of experience.
I am wondering if others are seeing it too.
Excited for the future :)
You're saying that a pattern recognition tool that can access the web can't do all of this better than a human? This is quintessentially what they're good at.
> The real question is how do you build personal AI that learns YOUR priorities and filters the noise? That's where the leverage is now.
Sounds like another Markdown document—sorry, "skill"—to me.
It's interesting to see people praising this technology and enjoying this new "high-level" labor, without realizing that the goal of these companies is to replace all cognitive labor. I strongly doubt that they will actually succeed at that, and I don't even think they've managed to replace "low-level" labor, but pretending that some cognitive labor is safe in a world where they do succeed is wishful thinking.
Do people really need to know that a bunch of code at a company that won't exist in 10 years is something worth caring about?
As for the chatgpt wrapper comment - honestly this take is getting old. So what? Are you going to train your own LLM and run it at a huge loss for a while?
And yes, perhaps all of this effort is for nothing, as it may even be possible to recreate everything we have done from scratch in a week, assuming that we are static and do nothing about it. In 10 years the solution would have billions of lines of code. Not that lines of code is any kind of metric for success, but you won't be able to recreate it without significant cost and upfront effort... even with LLMs.
You might be able to do more with less, but that is with every technological advancement.
Regarding your experience, it sounds like your codebase is such good quality that it acts as a very clear prompt to the AI for it to understand the system and improve it.
But I imagine your codebase didn't get into this state all by itself.
Over the last two months, calling LLMs even an internet-level invention has started to undersell them.
You can see the sentiment shift happening over recent months among all the prominent, experienced devs too.
I expected the LLM's would have hit a scaling wall by now, and I was wrong. Perhaps that'll still happen. If not, regardless of whether it'll ultimately create or eliminate more jobs, it'll destabilize the job market.
Maybe there's a threshold where improvements become easy, depending on the LLM and the project?
As a hobbyist programmer, I feel like I've been promoted to pointy-haired boss.
The bookkeepers I work with used to spend hours on manual data entry. Now they spend that time on client advisory work. The total workload stayed the same - the composition shifted toward higher-value tasks.
Same dynamic played out with spreadsheets in the 80s. Didn't eliminate accountants - it created new categories of work and raised expectations for what one person could handle.
The interesting question isn't whether developers will be replaced but whether the new tool-augmented developer role will pay less. Early signs suggest it might - if LLMs commoditise the coding part, the premium shifts to understanding problems and systems thinking.
Case in point: web frameworks, as mentioned in the article. These frameworks do not exist to increase productivity for either the developer or the employer. They exist to reduce training needs and lower the bar so the employer has a wider pool of candidates to select from.
It’s like a bulldozer is certainly faster than a wheelchair, but somebody else might find them both slow.
And I always think: any of these users could have ran a basic grammar check with an llm or even a spellchecker, but didnt. Maybe software will be the same after all.
P.S. prob I jinxed my own post and did a mistake somewhere
ai -> AI
didnt -> didn't
obvious and many -> many obvious
These are posted by ... sometimes -> Sometimes these are posted by...
prob --> Prob(ably)
did a mistake -> made a mistake
somewhere -> somewhere.
Here's what DeepSeek suggests as the fixed version:
Sometimes, while on an AI thread like this, I see posts with many obvious grammatical mistakes. Many will be "typos" (although some seem conceptual). Maybe some are dictated or transcribed by busy people. Some might be incorrect on purpose, for engagement. These are sometimes posted by pretty accomplished people.
And I always think: any of these users could have run a basic grammar check with an LLM or even a spellchecker, but didn’t. Maybe software will be the same after all.
P.S. Probably I jinxed my own post and made a mistake somewhere.
Of course semi-technical people can troubleshoot, it's part of nearly every job. (Some are better at it than others.)
But how many semi-technical people can design a system that facilitates troubleshooting? Even among my engineering acquaintances, there are plenty who cannot.
My guess is no. I've seen people talk about understanding the output of their vibe coding sessions as "nerdy," implying they're above that. Refusing to vet AI output is the kiss of death to velocity.
The usual rejoinder I've seen is that AI can just rewrite your whole system when complexity explodes. But I see at least two problems with that.
The first is that AI is impressively good at extracting intent from a ball of mud with tons of accidental complexity, and I think we can expect it to continue improving - but when a system has a lot of inherent complexity and is poorly specified, the task is much harder.
The second is that small, incremental, reversible changes are the most reliable way to evolve a system, and AI doesn't repeal that principle. The more churn, the more bugs — minor and major.
Live and even offline data transformation and data migration without issues are still difficult problems to solve even for humans. It requires meticulous planning and execution.
A rewrite has to either discard the previous data or transform or keep the data layer intact across versions which means more and more tangled spaghetti accumulated over rewrites.
Managers and business owners shouldn't take it personally that I do as little as possible and minimize the amount of labor I provide for the money I receive.
Hey, it's just business.
Equally nihilistic are owners, managers, and leaders who think they will replace developers with LLMs.
Why care about, support, defend, or help such people? Why would I do that?
Do I want to lead a business filled with losers?
"Don't take it personal" does not feed the starving and does not house the unhoused. An economic system that over-indexes on profit at the expense of the vast majority of its people will eventually fail. If capitalism can't evolve to better provide opportunities for people to live while the capital-owning class continues to capture a disproportionate share of created economic value, the system will eventually break.
A business leadership board that only considers people as costs is looking at the world through sociopath lenses.
Similarly, one might argue that as increased capital finds its way to a given field, due to increased outcomes, labour in turn helps pressure pricing. Increased "sales" opportunity within said field (i.e. people being skilled enough to be employed, or specialized therein) will similarly lead to pricing pressure, on both ends.
Fortunately or unfortunately, many procedural tasks are extremely hard for humans to master, but easy for AI to generate. In the meantime, we have structured our society to support such procedural work. As the wave of innovation spreads, many people will rise but many will also suffer.
- Me, the last time it wasn't different
The hardest thing about software construction is specification. There's always going to be domain specific knowledge associated with requirements. If you make it possible, as Delphi and Visual Basic 6 did, for a domain expert to hack together something that works, that crude but effective prototype functions as a concrete specification that a professional programmer can use to craft a much better version useful to more people than just the original author.
The expansion of the pool of programmers was the goal. It's possible that AI could eventually make programming (or at least specification) a universal skill, but I doubt it. The complexity embedded in all but the most trivial of programs will keep the software development profession in demand for the foreseeable future.
At that time I had a chat with a small startup CEO who was sure that he'd fire all those pesky programmers who think they are "smart" because they can code. He pointed me to code generated by Rational Rose from his diagram, and told me that only the methods still had to be implemented, which would also be possible soon; the hardest part is to model the system.
In particular the demand for software tools grows faster than our ability to satisfy it. More demand exists than the people who would do the demanding can imagine. Many people who are not software engineers can now write themselves micro software tools using LLMs -- this ranges from home makers to professionals of every kind. But the larger systems that require architecting, designing, building, and maintaining will continue to require some developers -- fewer, perhaps, but perhaps also such systems will proliferate.
Here's an archived link: https://archive.is/y9SyQ
If educators use AI to write/update the lectures and the assignments, students use AI to do the assignments, then AI evaluates the student's submissions, what is the point?
I'm worried about some major software engineering fields experiencing the same problem. If design and requirements are written by AI, code is mostly written by AI, and users are mostly AI agents, what is the point?
To replace humans permanently from the work force so they can focus on the things which matter like being good pets?
Or good techno-serfs...
In the US there was this case of a student using religious arguments with hand-waving references to the will of god for her coursework. Her work was rejected by the tutor and she raised a big fuss on TV. In the end this US university fired the tutor and gave her a passing grade.
These kinds of stories are not an AI issue but a general problem of the USA as a country shifting away from education towards religious fanaticism. If someone can reference their interpretation of god's words without even actually citing the bible and they receive a passing grade, the whole institution loses its credibility.
Today, the United States is a post-factual society with a ruling class of Christian fanatics. They have been vulnerable to vaporware for years. LLMs being heralded as artificial intelligence only works with people who never experienced real intelligence.
Luckily, every year only a handful of people who have motivation, skills and luck are needed to move the needle in science and technology. These people can come from many countries who have better education systems and no religious fanaticism.
Speaking of tools, that style of writing rings a bell.. Ben Affleck made a similar point about the evolving use of computers and AI in filmmaking, wielded with creativity by humans with lived experiences, https://www.youtube.com/watch?v=O-2OsvVJC0s. Faster visual effects production enables more creative options.
In 2001, you needed an entire development team if you wanted to have an online business. Having an online business was a complicated, niche thing.
Now, because it has gotten substantially easier, there are thousands of times as many (probably millions of times) online stores, and many of them employ some sort of developer (usually on a retainer) to do work for them. Those consultants probably make more than the devs of 2001 did, too.
It's the dream of replacing labor.
They've already convinced their customers what the value of the product is! Cutting labor costs is profit! Never mind the cost to society! Socialize those costs and privatize those profits!
Then they keep the money for themselves, because capitalism lets a few people own the means of production.
So everything that looks cheaper than paying someone educated and skilled to do a thing is extremely attractive. All labor-saving devices ultimately do that.
By the 1860s artists were feeling the heat and responded by inventing all the "isms" - starting with impressionism. That's kept them employed so far, but who knows whether they'll be able to co-exist with whatever diffusion models become in 30 years.
Does it take less money to commission a single wedding photo rather than a wedding painting? Yes. But many more people commission them and usually in tens to hundreds, together with videos, etc.
An 18th century wedding painter wasn’t in the business of paintings, but in the business of capturing memories and we do that today on much larger scale, more often and in a lot of different ways.
I’d also argue more landscape painters exist today than ever.
You should ask the business owners. They are hiring fewer developers and looking to cut more.
I Built A Team of AI Agents To Perform Business Analysis
https://bettersoftware.uk/2026/01/17/i-built-a-team-of-ai-ag...
"But this agent knows my wants and needs better than most people in my life. And it doesn’t ever get tired of me."
That comment says everything about how you view yourself and your fellow humans.
But less thinking is essential, or at least that’s what it’s like using the tools.
I’ve been vibing code almost 100% of the time since Claude 4.5 Opus came out. I use it to review itself multiple times, and my team does the same, then we use AI to review each others’ code.
Previously, we whiteboarded and had discussions more than we do now. We definitely coded and reviewed more ourselves than we do now.
I don’t believe that AI is incapable of making mistakes, nor do I think that multiple AI reviews are enough to understand and solve problems, yet. Some incredibly huge problems are probably on the horizon. But for now, the general “AI will not replace developers” is false; our roles have changed- we are managers now, and for how long?
If it’s working for you, then great. But don’t pretend like it is some natural law and must be true everywhere.
The conversation shouldn't be "will AI replace developers". It should be "how do humans stay competitive as AI gets 10x better every 18 months?"
I watched Claude Code build a feature in 30 minutes that used to take weeks. That moment crystallised something: you don't compete WITH AI. You need YOUR personal AI.
Here's what I mean: Frontier teams at Anthropic/OpenAI have 20-person research teams monitoring everything 24/7. They're 2-4 weeks ahead today. By 2027? 16+ weeks ahead. This "frontier gap" is exponential.
The real problem isn't tools or abstraction. It's information overload at scale. When AI collapses execution time, the bottleneck shifts to judgment. And good judgment requires staying current across 50+ sources (Twitter, Reddit, arXiv, Discord, HN).
Generic ChatGPT is commodity. What matters is: does your AI know YOUR priorities? Does it learn YOUR judgment patterns? Does it filter information through YOUR lens?
The article is right that tools don't eliminate complexity. But personal AI doesn't eliminate complexity. It amplifies YOUR ability to handle complexity at frontier speed.
The question isn't about replacement. It's about levelling the playing field. And frankly we are all still figuring out how this will shake out in the future. And if you have any solution that can help me level up, please hit me up.
Your mention of the hellhole that is today's twitter as the first item in your list of sources to follow for achieving "good judgement" made it easy for me to recognize that in fact you have very bad judgement.
Like cool, you boiled a few gallons of the ocean, but are you really impressed that you made a basic music app that is extremely limited?
But most enterprise software does not need to be innovative; it needs to be customizable enough that enterprises can differentiate their business. This makes existing software ideas so much more configurable. No more need for software to provide everything and the kitchen sink, but exactly what you as a customer want.
Like in my example, I don’t know of any software that has exactly this feature set. Do you?
Really?
Is this reflected in wages and hiring? I work for a company that makes a hardware product with mission-critical support software. The software team dwarfs the hardware team, and is paid quite well. Now they're exempt from "return to office."
I attended a meeting to move a project into development phase, and at one point the leader got up and said: "Now we've been talking about the hardware, but of course we all know that what's most important is the software."
I can see the 2030s dev doing more original research with mundane tasks put to LLM. Courses will cover manual coding, assembler etc. for a good foundation. But that'll be like an uber driver putting on a spare tire.
Nothing can replace code, because code is design[1]. Low-code came about as a solution to the insane clickfest of no-code. And what is low-code? It’s code over a boilerplate-free appropriately-high level of abstraction.
This reminds me of the 1st chapter of the Clean Architecture book[2], pages 5 and 6, which shows a chart of engineering staff growing from tens to 1200 and yet the product line count (as a simple estimate of features) asymptotically stops growing, barely growing in lines of code from 300 staff to 1200 staff.
As companies grow and throw more staff at the problem, software architecture is often neglected, dramatically slowing development (due to massive overhead required to implement features).
Some companies decided that the answer is to optimize for hiring lots of junior engineers to write dumbed down code full of boilerplate (e.g. Go).
The hard part is staying on top of the technical (architectural and design) debt to make sure that feature development is efficient. That is the hard job and the true value of a software architect, not writing design documents.
[1] https://www.developerdotstar.com/mag/articles/reeves_origina... A timeless article from 1992, pre-UML, but references precursors like Booch and object diagrams, as well as CASE tools [2] You can read it here in Amazon sample chapter: https://read.amazon.com/sample/0134494164?clientId=share
Citizen developers were already there doing Excel. I have seen basically full fledged applications in Excel since I was in high school which was 25 years ago already.
It feels like programming then got a lot harder with internet stuff that brought client-server challenges, web frontends, cross platform UI and build challenges, mobile apps, tablets, etc... all bringing in elaborate frameworks and build systems and dependency hell to manage and move complexity around.
With that context, it seems like the AI experience / productivity boost people are having is almost like a regression back to the mean and just cutting through some of the layers of complexity that had built up over the years.
So now instead of one developer lost and one analyst created, you've actually just created an analyst and kept a developer.
Tim Bryce was kind of the anti-Scott Adams: he felt that programmers were people of mediocre intelligence at best who thought they were so damn smart, when really, if they were so smart, they'd move into management or business analysis where they could have a real impact, and not be content with the scutwork of translating business requirements into machine-executable code. As it is, they don't have the people skills or big-picture systems thinking to really pull it off, and that combined with their snobbery made them a burden to an organization unless they were effectively managed—such as with his methodology PRIDE, which you could buy direct from his web site.
Oddly enough, in a weird horseshoe-theory instance of convergent psychological evolution, Adams and Bryce both ended up Trump supporters.
Ultimately, however, "the Bryce was right": the true value in software development lies not in the lines of code but in articulating what needs to be automated and how it can benefit the business. The more precisely you nail this down, the more programming becomes a mechanical task. Your job as a developer is to deliver the most value to the customer with the least possible cost. (Even John Carmack agrees with this.) This requires thinking like a business, in terms of dollars and cents (and people), not bits and bytes. And as AI becomes a critical component of software development, business thinking will become more necessary and technical thinking, much less so. Programmers as a professional class will be drastically reduced or eliminated, and replaced with business analysts with some technical understanding but real strength on the business/people side, where the real value gets added. LLMs meaningfully allow people to issue commands to computers in people language, for the very first time. As they evolve they will be more capable of implementing business requirements expressed directly in business language, without an intermediator to translate those requirements into code (i.e., the programmer). This was always the goal, and it's within reach.
So yes, the market shifts, but mostly at the junior end. Fewer entry-level hires, higher expectations for those who are hired, and more leverage given to experienced developers who can supervise, correct, and integrate what these tools produce.
What these systems cannot replace is senior judgment. You still need humans to make strategic decisions about architecture, business alignment, go or no-go calls, long-term maintenance costs, risk assessment, and deciding what not to build. That is not a coding problem. It is a systems, organizational, and economic problem.
Agentic coding is good at execution within a frame. Seniors are valuable because they define the frame, understand the implications, and are accountable for the outcome. Until these systems can reason about incentives, constraints, and second-order effects across technical and business domains, they are not replacing seniors. They are amplifying them.
The real change is not “AI replaces developers.” It is that the bar for being useful as a developer keeps moving up.
On top of the article's excellent breakdown of what is happening, I think it's important to note a couple of driving factors about why (I posit) it is happening:
First, and this is touched upon in the OP but I think could be made more explicit, a lot of people who bemoan the existence of software development as a discipline see it as a morass of incidental complexity. This is significantly an instance of Chesterton's Fence. Yes, there certainly is incidental complexity in software development, or at least complexity that is incidental at the level of abstraction that most corporate software lives at. But as a discipline, we're pretty good at eliminating it when we find it, though it sometimes takes a while; still, the speed with which we iterate means we eliminate it a lot faster than most other disciplines. A lot of the complexity that remains is actually irreducible, or at least we don't yet know how to reduce it.
A case in point: programming language syntax. To the outsider, the syntax of modern programming languages (where the commas go, whether whitespace means anything, how angle brackets are parsed) looks like a jumble of arcane nonsense that must be memorized in order to start really solving problems, and indeed it's a real barrier to entry that non-developers, budding developers, and sometimes seasoned developers have to contend with. But it's also (a selection of competing frontiers of) the best language we have, after many generations of rationalistic and empirical refinement, for humans to unambiguously specify what they mean at the semantic level of software development as it stands! For a long time now we haven't been constrained in the domain of programming language syntax by the complexity or performance of parser implementations. Instead, modern programming languages tend toward simpler formal grammars because they make it easier for _humans_ to understand what's going on when reading the code.
AI tools promise to (amongst other things; don't come at me, AI enthusiasts!) replace programming language syntax with natural language. But natural language is actually a terrible syntax for clearly and unambiguously conveying intent! If you want a more venerable example, just look at mathematical syntax, a language that has never been constrained by computer implementation but was developed by humans for humans to read and write their meaning in subtle domains efficiently and effectively. Mathematicians started with natural language and, through a long process of iteration, came to modern-day mathematical syntax. There's no push to replace mathematical syntax with natural language because, even though that would definitely make some parts of the mathematical process easier, we've discovered through hard experience that it makes the process as a whole much harder.
Second, humans (as a gestalt, not necessarily as individuals) always operate at the maximum feasible level of complexity, because there are benefits to be extracted from the higher complexity levels and if we are operating below our maximum complexity budget we're leaving those benefits on the table. From time to time we really do manage to hop up the ladder of abstraction, at least as far as mainstream development goes. But the complexity budget we save by no longer needing to worry about the details we've abstracted over immediately gets reallocated to the upper abstraction levels, providing things like development velocity, correctness guarantees, or UX sophistication.
This implies that the sum total of complexity involved in software development will always remain roughly constant. This is of course a win, as we can produce more/better software (assuming we really have abstracted over those low-level details and they're not waiting for the right time to leak through into our nice clean abstraction layer and bite us…), but as a process it will never reduce the total amount of ‘software development’ work to be done, whatever kinds of complexity that may come to comprise. In fact, anecdotally it seems to be subject to some kind of Braess' paradox: the more software we build, the more our society runs on software, the higher the demand for software becomes.
If you think about it, this is actually quite a natural consequence of the ‘constant complexity budget’ idea. As we know, software is made of decisions (https://siderea.dreamwidth.org/1219758.html), and the more ‘manual’ labour we free up at the bottom of the stack the more we free up complexity budget to be spent on the high-level decisions at the top. But there's no cap on decision-making! If you ever find yourself with spare complexity budget left over after making all your decisions you can always use it to make decisions about how you make decisions, ad infinitum, and yesterday's high-level decisions become today's menial labour. The only way out of that cycle is to develop intelligences (software, hardware, wetware…) that can not only reason better at a particular level of abstraction than humans but also climb the ladder faster than humanity as a whole — singularity, to use a slightly out-of-vogue term. If we as a species fall off the bottom of the complexity window then there will no longer be a productivity-driven incentive to ideate, though I rather look forward to a luxury-goods market of all-organic artisanal ideas :)
Knowing when to push back, when to trim down a requirement, when to replace a requirement with something slightly different, when to expand a requirement because you're aware of multiple distinct use cases to which it could apply, or even a new requirement that's interesting enough that it might warrant updating your "vision" for the product itself: that's the real engineering work that even a "singularity-level coding agent" alone could not replace.
An AI agent almost universally says "yes" to everything. They have to! If OpenAI starts selling tools that refuse to do what you tell them, who would ever buy them? And maybe that's the fundamental distinction. Something that says "yes" to everything isn't a partner, it's a tool, and a tool can't replace a partner by itself.
You're correct in that these aren't really ‘coding agents’ any more, though. Any more than software developers are!
I can kind of trust the thing to make code changes because the task is fairly well-defined, and there are compile errors, unit tests, code reviews, and other gating factors to catch mistakes. As you move up the abstraction ladder though, how do I know that this thing is actually making sound decisions versus spitting out well-formatted AIorrhea?
At the very least, they need additional functionality to sit in on and contribute to meetings, write up docs and comment threads, ping relevant people on chat when something changes, and set up meetings to resolve conflicts or uncertainties, and generally understand their role, the people they work with and their roles, levels, and idiosyncrasies, the relative importance and idiosyncrasies of different partners, the exceptions for supposed invariants and why they exist and what it implies and when they shouldn't be used, when to escalate vs when to decide vs when to defer vs when to chew on it for a few days as it's doing other things, etc.
For example, say you have an authz system and you've got three partners requesting three different features, the combination of which would create an easily identifiable and easily attackable authz back door. Unless you specifically ask AI to look for this, it'll happily implement those three features and sink your company. You can't fault it: it did everything you asked. You just trusted it with an implicit requirement that it didn't meet. It wasn't "situationally aware" enough to read between the lines there. What you really want is something that would preemptively identify the conflicts, schedule meetings with the different parties, get a better understanding of what each request is trying to unblock, and ideally distill everything down into a single feature that unblocks them all. You can't just move up the abstraction ladder without moving up all those other ladders as well.
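The "easily identifiable" part can be shown with a toy sketch: model each feature request as a set of permission grants and check whether the combination opens a path that no single request opens on its own. The roles and permissions below are invented purely for illustration.

    # Toy check: does combining otherwise-harmless grants create a path
    # from a low-privilege role to a sensitive permission?
    feature_a = {("guest", "create_api_token")}
    feature_b = {("create_api_token", "impersonate_service")}
    feature_c = {("impersonate_service", "admin_write")}

    def reachable(edges, start):
        # Simple graph traversal over (source, target) grant edges.
        seen, frontier = {start}, [start]
        while frontier:
            node = frontier.pop()
            for src, dst in edges:
                if src == node and dst not in seen:
                    seen.add(dst)
                    frontier.append(dst)
        return seen

    combined = feature_a | feature_b | feature_c
    assert "admin_write" not in reachable(feature_a, "guest")  # each feature looks fine alone
    assert "admin_write" in reachable(combined, "guest")       # together they open the back door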
Maybe that's possible someday, but right now they're still just okay coders with no understanding of anything beyond the task you just gave them to do. That's fine for single-person hobby projects, but it'll be a while before we see them replacing engineers in the business world.
no need to worry; none of them know how to read well enough to make it this far into your comment
Well probably we'd want a person who really gets the AI, as they'll have a talent for prompting it well.
Meaning: knows how to talk to computers better than other people.
So a programmer then...
I think it's not that people are stupid. I think there's actually a glee behind the claims AI will put devs out of work - like they feel good about the idea of hurting them, rather than being driven by dispassionate logic.
Maybe it's the ancient jocks vs nerds thing.
Invest $1000 into AI, have a $1000000 company in a month. That's the dream they're selling, at least until they have enough investment.
It of course becomes "oh, sorry, we happen to have taken the only huge business for ourselves. Is your kidney now for sale?"
But you need to buy my AI engineer course for that first.
The Vibe Coder? The AI?
Take a guess who fixes it.
The reason those things matter in a traditional project is because a person needs to be able to read and understand the code.
If you're vibe coding, that's no longer true. So maybe it doesn't matter. Maybe the things we used to consider maintenance headaches are irrelevant.
a. generative technology, but requiring a substantial amount of coordination, curation, and compute power; b. a substantial amount of data; c. scarce intellectual human work.
And scarce but non-intellectually-demanding human work was dropped from the list of valuable things.
LLMs are a box where the input has to be generated by someone/something, but also the output has to be verified somehow (because, like humans, it isn't always correct). So you either need a human at "both ends", or some very clever AI filling those roles.
But I think the human doing those things probably needs slightly different skills and experience than the average legacy developer.
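As a rough sketch of that "verifier at both ends" shape, assuming a hypothetical generate_patch function standing in for the model call and pytest as the mechanical check on the output side:

    import subprocess

    def verified_change(task, generate_patch, max_attempts=3):
        # Input side: the task description is written by a human.
        # Output side: the test suite (plus later human review) plays the verifier.
        for _ in range(max_attempts):
            generate_patch(task)                        # hypothetical LLM call that edits the tree
            result = subprocess.run(["pytest", "-q"])   # mechanical check of the output
            if result.returncode == 0:
                return True                             # passes the gate; still needs human review
        return False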
While a single LLM won't replace you, a well-designed system of flows for software engineering using LLMs will.
That's the goal.
(or rather, Business People)
Service-led companies are doing relatively better right now. Lower costs, smaller teams, and a lot of “good enough” duct-tape solutions are shipping fast.
Fewer developers are needed to deliver the same output. Mature frameworks, cloud, and AI have quietly changed the baseline productivity.
And yet, these companies still struggle to hire and retain people. Not because talent doesn’t exist, but because they want people who are immediately useful, adaptable, and can operate in messy environments.
Retention is hard when work is rushed, ownership is limited, and growth paths are unclear. People leave as soon as they find slightly better clarity or stability.
On the economy: it doesn’t feel like a crash, more like a slow grind. Capital is cautious. Hiring is defensive. Every role needs justification.
In this environment, it’s a good time for “hackers” — not security hackers, but people who can glue systems together, work with constraints, ship fast, and move without perfect information.
Comfort-driven careers are struggling. Leverage-driven careers are compounding.
Curious to see how others are experiencing this shift.
I think pressure to ship is always there. I don’t know if that’s intensifying or not. I can understand where managers and executives think AI = magical work faster juice, but I imagine those expectations will hit their correction point at some time.
who
Now the expectation from some executives or high level managers is that managers and employees will create custom software for their own departments with minimal software development costs. They can do this using AI tools, often with minimal or no help from software engineers.
It's not quite the equivalent of having software developed entirely by software engineers, but it can be a significant step up from what you typically get from Excel.
I have a pretty radical view that the leading edge of this stuff has been moving much faster than most people realize:
2024: AI-enhanced workflows automating specific tasks
2025: manually designed/instructed tool calling agents completing complex tasks
2026: the AI Employee emerges -- robust memory, voice interface, multiple tasks, computer and browser use. They manage their own instructions, tools and context
2027: Autonomous AI Companies become viable. AI CEO creates and manages objectives and AI employees
Note that we have had the AI Employee and AI Organization for a while in different, somewhat weak forms. But in the next 18 months or so, as the model and tooling abilities continue to improve, they will probably be viable for a growing number of business roles and businesses.