A lot of people are down on AI in this thread, but I'm watching the industry slip over the line of trust with these latest frontier models. GPT 5.5 is the first model good enough for me to just let rip.
Every Jira ticket I see now has acceptance criteria, reproduction steps, and detailed information about why the ticket exists.
Every commit message now matches the repo style, and has detailed information about what's contained in the commit.
Every MR now has detailed information about what's being merged.
Every code base in the teams around me now has 70 to 90%+ code coverage.
Every line of code now comes with best practices baked in, helpful comments, and optimized hot paths.
I regularly ship four features at a time now across multiple projects.
The MCP has now automated away all of the drudgery of programming, from summarizing emails, to generating Confluence documentation, to generating slide decks.
People keep screaming that tech debt is going to pile up, but I think it's going to be exactly the opposite. Software is going to pile up because developing it is now cheap.
Most code before LLMs sucked. Most projects I onboarded onto were a massive ball of undocumented spaghetti, written by humans. The floor has been raised significantly as to what bad code can even look like, and fixing issues is now basically free if your company is willing to shell out for tokens.
The ticket has subtle errors in its description that are only caught by someone experienced with the codebase.
The code hides an exception behind an if-then-else that defaults to the most common state, which isn't caught until it breaks things for the 1% of users who don't have that state (sketched in the snippet below).
The new feature quietly breaks a feature not covered by the acceptance tests.
The documentation is four times as long and nobody who relies on it can read it.
And I'm stuck spending my time going over tickets with a fine-toothed comb, reviewing PRs, and mentoring contributors to prevent all of this garbage from ending up in the live code.
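To make that second failure mode concrete, here's a minimal hypothetical sketch (all names invented for illustration) of an exception hidden behind a default branch:

```ts
// Hypothetical sketch of the exception-hiding antipattern.
type Plan = "standard" | "legacy";

// Stand-in for any fallible lookup; assume it throws for legacy accounts.
declare function fetchUserPlan(userId: string): Promise<Plan>;

async function getUserPlan(userId: string): Promise<Plan> {
  try {
    return await fetchUserPlan(userId);
  } catch {
    // BUG: the error is swallowed and we default to the common case.
    // 99% of users are "standard", so every test passes; the 1% on
    // "legacy" silently get the wrong plan until something downstream breaks.
    return "standard";
  }
}
```

Nothing in the diff looks wrong and the tests stay green; the failure only surfaces in production, for the minority state.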
People have noted similar issues ever since LLMs came out, but the rate at which the models have been improving on all of them is significant. Documentation being 4x too long could probably be fixed with a rule instructing the agent to keep it concise and no longer than 2-3 paragraphs.
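As a hypothetical example, such a rule in whatever instruction file your agent reads (CLAUDE.md, AGENTS.md, or similar) could be as simple as:

```
## Documentation style
- Keep generated docs concise: at most 2-3 paragraphs per section.
- Prefer one short example over a long explanation.
- Do not restate what the code or the ticket already says.
```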
> Software is going to pile up because developing it is now cheap.
Software to do what, though ?!
Coding, maybe 10% of a developer's job (Brooks' "No Silver Bullet" estimates 1/6), was never the bottleneck, and even if you automated that away entirely you've only reduced development time by 10% (assuming you are not doing human code review, etc.).
I would also argue that software development as a whole (not just the coding part) was also typically never the bottleneck to companies shipping product faster, maybe also not for automating their business faster (internal IT systems), since the rest of the company is not moving that fast, business needs are not changing that fast, and external factors that might drive change are not moving that fast either.
I think that when the dust settles we'll find that LLM-assisted coding has had far less impact than those trying to sell it to us are forecasting. There will be exceptions of course, especially in terms of what a lone developer can do, or how fast a software startup can get going, but in terms of impact to larger established companies I expect not so much.
+1 for any mention of Fred Brooks. I like your point about software as a whole not being a bottleneck. In the 1970s the hardware was co-evolving with business uses (it still is, but constraints were much more severe), leading to large headcounts on software projects that _absolutely_ had to work and _absolutely_ required uncommon expertise. Most people had no concept of a computer's capabilities, and computer science was not as widely distributed.
One thing that I would point to today to show that the landscape is different - the average programmer/engineer/developer today has no actual admin staff. Fred Brooks' example team setup of "The Surgical Team" has more support staff than programmers. Anyone who responds to the questions like "who manages the calendar" and "who manages the documentation" will state that the engineers doing it themselves offer the best results. Same goes for designing test cases, performing rollbacks, etc.
The fact of the matter is that any self-respecting engineer today works in an environment where proactivity and self-sufficiency are prerequisites. Managing your calendar and workload, communicating with leadership and users: these are all common tasks that would have been another person's job a generation ago.
So when discussing writing code more efficiently and aiding in software development, what I am essentially seeing is more people trying everything they can to offload work that used to be another person's job anyway. If you care about communication - you offload coding standards. If you care about security - you offload feature refactors, and so on.
I think that at some point we'll either realize that we need highly competent people _and also_ regular people to help us ensure the work gets done to a good standard. Or we will each eventually survive by working alone in a room with a suite of AI tools, and wonder why we're still making software in the first place.
As I recall, “No Silver Bullet” fundamentally rested on the assumption that the subroutine was the last word in abstractions to make programming more efficient, which probably wasn’t even defensible at the time because Lisp had already been invented, and is even less defensible after the past several decades of programming language research. Brooks was still onto something when it came to irreducible complexity, but offloading complexity an LLM can tackle to the LLM still saves time.
One of the lesser discussed Brooks essays is actually the best description of AI-first development: the “surgical team”. It just turns out the surgeon is the only human, and like many modern surgeries, the surgeon is controlling a robot instead of operating by hand.
It would be interesting to reread The Mythical Man-Month and see how each essay applies to AI-first development.
What you are describing is the role of a manager, not a software engineer. Software engineering has very little to do with writing code and much more to do with architecting, at a higher level, what needs to be done. The code is just the executional part. LLMs can code? OK, good. Without a clear architectural pathway / direction, that code is just useless. It's not tech debt. It's just a bunch of random strings. You can argue that Claude Code and others do create a plan of attack, but still, it's not at the architectural level, rather the executional level.
To me, architecture starts all the way from the top. Even before you write a single line of code, you do the DDD (Domain-Driven Design), then create a set of rulesets (e.g., use the domain name as the table prefix) and contexts, and then define the functionality w.r.t. that architecture. LLMs can do all this, but only if you ask them to explicitly. So they are pretty useful to brainstorm with, but they can't autonomously design reliably, push to production with your eyes closed, and support a 100,000-user base. It's a far cry from that.
But sure, you can upsell management on vanity metrics like lines of code and get that promotion with an LLM. It's still not software engineering.
That's just not accurate. I haven't studied SWE Bench Pro in detail, so I can't tell you exactly what the flaw is, but SOTA models routinely make bad architectural choices I have to intervene to fix.
TL;DR: it's very effective as it directly tests models on REAL codebases: "The benchmark is constructed from GPL-style copyleft repositories and private proprietary codebases". The use case is very real.
It doesn't sound to me like this benchmark is attempting to measure architecture design. As far as I see in the paper, they do not evaluate the architectural quality of a task completion, only whether the model is capable of completing it at all.
It's "not software engineering" but neither was what most people writing code did before LLMs.
> Without a clear architectural pathway / direction, that code is just useless. It's not tech debt. It's just a bunch of random strings
This is pretty clearly false. It's a bunch of random strings that you can compile and run to do what you want. It's more akin to a black box. A compiled closed source dependency.
Agreed. I never considered myself an "engineer". Honestly just a regular code monkey. Software Engineer was just my job title. Folks higher up the ladder did engineer software. You know what? It sucked. Was always broken, we were always patching, we never saw around corners. But hey - they software engineered it.
> I regularly ship four features at a time now across multiple projects.
Many people are missing the fact that LLMs allow ICs to start operating like managers.
You can manage 4 streams now. Within a couple years, you may be able to manage 10 streams like a typical manager does today.
IME, LLMs don't speed you up that much if 1) you're already an expert at what you're doing (inherently not scalable), 2) you're only working on one thing (doesn't make sense when you can manage multiple streams), or 3) you're doing something LLMs are particularly bad at (not many remaining coding tasks, but definitely still some).
A manager doesn't have to look at the code that's being shipped. An IC will still need to do that, and this will eventually take up much of their work. It can be addressed by moving up the stack to higher level and more strictly checked languages, where there's overall less stuff to review manually.
People typically think it's not a new person's fault if they come into a team and bring down production.
That's a failure of the existing infrastructure to allow someone to do this.
LLM coding will work like this.
If you're letting LLMs go wild with no system in place to automatically know they're moving in the right direction and "shipping" things up to your standards, the failure is you, not the LLM.
Just like a manager, you don't need to look at the code. You need to set up quality systems that provide evidence the code does what it is supposed to do.
Code review has a number of important purposes beyond merely verifying functionality. It's true that some managers don't recognize this, fail to allocate time for anything but feature work, and then wonder a few years later why the software is so buggy and new feature development is so hard.
Software engineers were always creating, maintaining, and updating automated business processes. In olden days we would have "computers", that is, rooms of people computing things. That room of people has been replaced with code running on von Neumann machines.
The economic tension has always been a resistance to granting programmers the status and class of management. Instead, management wants to treat programmers like labor.
For people who like to tick boxes, which is essentially most of the above, AI is welcome. That includes managers.
It still has nothing to do with software engineering. All good code was written by humans. AI takes it, plagiarizes it, launders it, and repackages it in a bloated form.
Whenever I look deeply at an AI plagiarized mess, it looks like it is 90% there but in reality it is only 50%. Fixing the mess takes longer than writing it oneself.
"Writing code" as a task of its own is called cowboy coding. It's neat that AI can do this now, but that has nothing to do with proper software engineering which always starts from a careful, human-led design.
You call someone an author when they use a ghostwriter. They're giving inputs that are core to the output, even though they aren't doing all the writing. Same thing.
Yes and every AI-first development workflow worth its salt does exactly this, and it does it much more thoroughly than I’ve ever seen a team of meatbags do it.
My workflow, at a high level, is:
1. I write a high level spec. Not as high level as a single-sentence prompt, but high level enough to capture my top requirements.
2. I prompt the AI to interview me about the spec to clear up any ambiguity or open questions; when I’m satisfied, the AI writes a longer spec, which I then review. (An example interview prompt is sketched after this list.)
3. Then I prompt the AI to write an implementation plan based on the spec. I might just skim this, and by this point I might be asking the LLM more questions than it’s asking me.
4. Now I hand it off to the implementer agent.
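For step 2, a hypothetical interview prompt might look something like:

```
Here is my draft spec. Interview me, one question at a time, about any
ambiguity, missing requirement, or open design decision. When you have
no material questions left, rewrite the spec in full, incorporating my
answers, and flag anything you had to assume.
```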
This isn’t cowboy coding; it’s not even agile. It’s waterfall. The problem with waterfall was that it was too slow, especially with the deserialization/serialization cost of routing all of this documentation through meatbrains. The LLM is doing just as much work, true, but faster.
The thing I found surprising was that, while LLMs are still pretty awful at writing as an art form, they are better technical writers than I have the time to be, especially when writing for an audience of other LLMs.
The hard part of software engineering is turning a vague problem description into a set of box-ticking exercises. If ticking boxes became genuinely easier, the software engineering part is now a lot more valuable.
You’re reminding me a lot of those old assembly hackers who thought compilers were bullshit because they could hand-write better assembly. And I don’t mean that as an insult; those guys were probably right about their assembly code, just like an Amish craftsman will make better furniture than a factory in China. The problem is that the world needs more furniture and more software than skilled craftsmen can produce, and the skill gap between the craftsman and the mass production process is diminishing fast.
We’re still going to have handwritten software, just like we still have handwritten assembly. It just won’t be the norm.
> Your linter should identify all issues - including architectural
If a linter could deterministically identify bad architecture, you wouldn't need an LLM, your linters could just write your code for you. The vibe coding takes are just getting more and more empty-headed...
Your custom linters don't check architectural design?
Linters statically check code and provide deterministic recommendations; LLMs are used to make judgment calls. I specifically write linters for my project that make recommendations for the LLMs.
This is how you save on token usage, so your LLMs aren't wasting tokens on static analysis that a linter could do for free.
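As a concrete sketch of that idea (layer names hypothetical, plugin wiring omitted), a custom ESLint rule can catch an architectural violation deterministically and phrase the finding as a recommendation the agent can act on:

```ts
// Hypothetical custom ESLint rule: domain code must not import from the
// UI layer. The report message doubles as a recommendation for the LLM.
export default {
  meta: {
    type: "problem",
    docs: { description: "disallow domain -> ui imports" },
  },
  create(context) {
    return {
      ImportDeclaration(node) {
        const importer = context.getFilename();
        const imported = String(node.source.value);
        if (importer.includes("/domain/") && imported.includes("/ui/")) {
          context.report({
            node,
            message:
              "Architectural violation: domain imports from ui. " +
              "Recommendation: move the shared type into shared/, or " +
              "invert the dependency behind an interface in domain/.",
          });
        }
      },
    };
  },
};
```

The linter finds the violation for free; the LLM only spends tokens deciding how to fix it.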
Incredibly impressive how, the moment AI becomes the topic of conversation, trivial things such as speaking in relative terms become incredibly difficult for the more addled of the prompting users.
> and fixing issues is now basically free if your company is willing to shell out for tokens.
Does "basically free" to you mean for you just that someone else is paying the cost? That's a mentality that has only made the world worse when applied to a wider range of things. Be hesitant in that line of thinking, I suggest, and consider the future.
The gap between the AI haves and have-nots is starting to appear. Six months ago a developer with Copilot was about on par with one without. The AI code required a lot of review, about the same amount of time as writing the code manually.
Now... the AI-first engineer might still have to deal with hallucinated things. But they can also use the newfound cheapness of code to improve their workflow. Instead of just testing on localhost and manually deploying to prod, you can have a full dev, staging, prod pipeline for free. Tech debt can be one command away from being refactored. The open source package that doesn't quite do what you need it to do? Fork it and write a patch; the AI will be able to maintain the patch. Oh, you need that bespoke feature for management? No problem, done in a one-hour AI session.
Each of these things might be arguably insignificant on their own, but over a project's lifetime they really build up.
I think numerically this is the exception - and it's a fantastic exception! But in practice what I've seen is things getting worse because people still just aren't very good at thinking, so the great-looking Jira ticket actually turns out to be nonsensical in some subtle way, whereas before it was just lacking in some obvious way that could immediately be called out and had an obvious solution.
I.e. it's making good output better, but it's making mediocre output (which is most output) worse by adding volume and the appearance of quality, creating a new layer of FUD, stress, tedium, and unhappiness on top of the previously more-manageable problems that come with mediocre output.
I'm still seeing this even with the newest models, because the problem is the user, not the model - the model just empowers them to be even worse, in a new and different way.
I agree with most of this, I just have sort of turned a blind eye to what the code actually probably looks like. Reviews are rapid, and I’ll admit I do feel like I’m betraying my inner programmer by just optimizing directly against the claims of token bot. But the way I see it, as long as the numbers don’t lie I’m okay with the process.
Everyone talks about productivity as if that is the only metric that matters in the business.
> The MCP has now automated away all of the drudgery of programming, from summarizing emails, to generating Confluence documentation, to generating slide decks.
I wonder about the hallucinations. Reading someone's writing doesn't take all that long.
Is programming supposed to suck all the time? Am I doing it wrong? I mean yeah, sure, it sucks sometimes, but overcoming that "suck" is where I feel progress and growth. If we decide to optimise that away...What the fuck am I doing here? No offence to managers, but if everybody is a manager, is anybody?
Feels kind of like the problem of everybody wanting to be an entrepreneur in the 2010s. Just led to people basically trying to get paid to be middleman companies skimming from others that don’t really need them, or worse, selling supplements and life coaching or whatever on social media and other grifts.
> GPT 5.5 is the first model good enough for me to just let rip.
You know this is the exact same thing that was said when Opus 4.6 came out, right?
That makes it hard to believe because it's the same "last week's model was so much behind you can't even comprehend" meme that's been going on throughout last year.
More info dumped into tickets and projects is great for understanding for both people and LLM. But hopefully not LLM generated.
There wasn't any personal mention in my post. It was a snarky remark at the fact that this cycle keeps continuing: every new release is a game changer, except in the benchmarks, where there is generally only a slight couple-percent change.
You're missing the point that it's (conceivably, and probably) different people making the comments. Each model release has a few new converts, which is expected if the models are in fact getting better at agentic coding.
You're implying it's a hype train when in fact it's an adoption curve.
> which is expected if the models are in fact getting better at agentic coding
Is it? Or is it also explainable that the models are not getting better, but people are still adopting them?
If the models were getting better, we'd be seeing mobile apps gain new features at 10x the previous rate, or websites with four times the number of features. But we're not.
It's just cope. I'm so close to just never coming back to HN because the quality of thought has just gone through the floor. Anything whatsoever to hedge one's way to fellating a phallusless chatbot
> fixing issues is now basically free if your company is willing to shell out for tokens.
Yeah, about that: I looked into Cursor's usage stats, and daily I'm going through the equivalent of a bacon sandwich in my canteen, so not much, but this is at today's prices and very light usage of Sonnet.
I was for a time using Opus 4.6 for a heavier task, and even then I think the cost was well into the double-digit percentages of my salary.
Opus 4.7 reportedly uses more tokens overall and while they reportedly kept rates stable, that is not a given.
Just wait until, with increasing costs, the first company figures that they'll offer this as a benefit and then maybe scrap it altogether in the name of cost cutting.
Watch token budget be included as part of employee TC figures - I feel this is an eventuality due to rising costs and "true pricing" slowly creeping in.
Current ventures feel more like a pilot program (you bought a private jet, now get a couple of your pilots to actually fly it) versus having an entire fleet of jets and having to pay salaries to all those pilots, plus account for their fuel charges.
Right now all expenses are relatively "someone else's {problem,money,infra}".
I was an LLM naysayer for a very long time. I continue to have serious reservations about the ethics of LLM use and the likely economic effects (these tools are likely to empower the owners of capital and disempower labor). On the other hand, I had a rather striking experience the other day that convinced me that the future in which these tools write software may not be so bad:
I had an idea to improve performance in one of the slowest but also one of the most critical parts of the codebase I own, so I asked Claude to re-write it. I gave it exact instructions. It got most things right but key things wrong. I caught the bugs and then asked it for some optimizations, and it came up with a number that were quite good. As I read the code, I saw more and more opportunities for improvement. To make a long story short, code that used to require upwards of 30 seconds in a particularly heinously ugly stress test now finishes in about 8ms.
My original code was terrible. That's indisputable. Maybe the bar for improvement was low. Still, the algorithms and optimizations that I was able to devise while using Claude Opus 4.6 surprised me. I don't often feel pleased with the cleverness of my work, but in this case the work really is stellar -- or at least enough of an improvement that it feels stellar.
Could I have written it without Claude? Yes, definitely. But I was able to produce the code in a few days while having a fever of 100-102, which I definitely couldn't have done on my own.
Moreover, it was plainly apparent to me, while I worked, that I was better able to think about high-level architecture and design because I wasn't stuck on the details of actually writing the code. The code itself, line by line, isn't difficult if you have familiarity with bitwise operations, but there's enough of it, with enough branches, that it's difficult as a whole and the work of writing it would have consumed much of my attention and energy.
Claude missed a huge amount. I improved performance by more than 95% after it told me there were no other opportunities for major optimizations.
Using the tool freed me, I found, to think more clearly, more deeply, and more effectively. Does the result create tech debt? I don't think so. I've pored over it and can't find anything lacking in style, design, or architecture. It's very well documented. Claude wrote tests, as I requested, for everything, including all the bugs that Claude missed and I caught. Test coverage is probably 100%, but, much more importantly, tests exhaustively cover cases, including edge cases, that would have, again, been difficult to enumerate and write by myself.
I doubt Claude could have done all this as well if the codebase and tests weren't already as mature as they are. I really wonder about the feasibility and advisability of greenfield software development with these tools. And a junior developer absolutely couldn't have accomplished what I did. The tool would have produced far worse work in the hands of someone who doesn't know what they were doing.
So I agree with you and disagree: I'm turning a corner on these tools, but I absolutely could not just let rip and trust it to do anything correctly. Moreover, I could not be less impressed by the MCPs written by people in my company. The bare tool by itself is better, though maybe that says more about my company, and my regards for the people I work with, than the tools.
> Could I have written it without Claude? Yes, definitely. But I was able to produce the code in a few days while having a fever of 100-102, which I definitely couldn't have done on my own.
While I admire your strength in attempting it, this just adds one more brick to the wall of precedents for "what's stopping you from just sending one prompt? It'll just take 30 seconds and you can do it from bed!"
You could sum it up into a simple equation as
Features Shipped = Features/Hour * Developer Hours
Developer hours have remained constant, and F/H has gone up. I am of the opinion that the ideal is the inverse.
If writing code was the only part of the job, and it was easy, these jobs wouldn't pay so well.
Engineering is hard. It's always going to be hard. I'm glad that AI makes some parts of it easier, and we (software engineers) can focus on engineering, that's nice.
Code is NEVER cheap. Just because, at current completely unrealistic AI pricing, using agents is cheaper than hiring juniors, does not make code cheap. It makes producing code cheap, which has always been low-cost. Every line of code is a cost, is a maintenance burden, is complexity. An AI, even with somehow infinite context window, will cost more money the more code you have.
Could you replace a whole team of engineers with AI? Probably, yeah. Could you simply fire everyone at your company and close it down, without much of a problem? Also probably yes, for most companies.
AIs can help with debugging, can help with writing code, with drafting designs, they can help with almost every step. The second you let OpenAI, or Anthropic, take full code ownership over your products, and you fire the last engineer, is the time when the AI pricing can go up to match what engineers make today. You've just reinvented the highly paid consultant.
Or you could take the middle-ground and hire good engineers, make sure they maintain an understanding of the codebase, and let them use whatever tools they use to get the job done, and done well. This is the way that I've seen competent companies handle it.
I don't understand people dismissing the massive decrease in both cost of producing code and the speed of producing code.
Before AI, people running businesses had similar issues as they have with AI now, but the costs were much greater.
They could hire someone to write them a prototype for their idea, but it would cost them on the order of 1000s of dollars, and it would take weeks at the minimum!
Now it could cost them $20 and be done in a few days. The feedback loop is the bottleneck.
I'm not dismissing it, I'm saying it's never been the bottleneck. Like you said, the feedback loop is A bottleneck, so is figuring out all the nasty things that nobody on HN seems to have heard of before, like "what are the requirements", "which tradeoffs can we make to get this done in time", "what is the architecture for this", and building it in such a way that you can guarantee that it won't fall apart in 20 years when your requirements change.
I understood it in the spirit of “code is a liability not an asset.” Code still needs to be maintained, changed, etc whether that is by a human or LLM.
In other words, just because more code can be produced quickly does not mean that it is cheap.
edit: I’m maybe hearing your point is that LLMs may change that POV but I think that is TBD.
I don't really think that's accurate either, from a business POV.
Software has been such a gold mine exactly because maintenance costs are minimal when you scale, compared to the revenue. The upfront costs are expensive, but once you have the software built, in most cases it's relatively cheap to maintain.
Certain types of code are cheap. Proof of concept is cheap. Adding small features that fit within the existing architecture is cheap. Otherwise, I'm not so sure. Coding agents are fantastic at minutiae, but have no taste. They'll turn a code base into a ball of mud very quickly, given the opportunity.
While I agree with you that agentic coding still has quite a way to go and is not always producing the quality that I would want from it, I can say quite confidently that its baseline is way above some of the production code in many applications many people use today. It really isn’t that code before agents was primarily written with taste and beautiful structure in mind. Your average code base is a messy hell full of quick fixes that turned into all kinds of debt over the years.
I took the previous post, with its mention of the ball of mud, to be about complexity.
“Taste” is used in many cases, I suspect, to give a name to the collection of practices and strategies developers use to keep their code and projects at a manageable level of complexity.
LLMs don’t seem to manage complexity. They’ll just blow right past manageable and keep on going. That’s a problem. The human has to stay in the loop because LLMs only build what we tell them to build (so far).
BTW, the essay that introduced the big ball of mud pattern to me didn’t hold it up as something entirely bad to be avoided. It pointed out how many projects — successful or at least on-going projects — use it, and how its passive flexibility might actually be an advantage. Big ball of mud might just be the steady state where progress can be made while leaving complexity manageable.
I think there are at least two factors behind ye olde ball of mud that LLMs should be able to help with:
1. Lack of knowledge of existing conventions, usually caused by churn of developers working on a project. LLMs read very quickly.
2. Cost of refactoring existing code to meet current best practices / current conception of architecture. LLMs are ideal for this kind of mostly mechanical refactoring.
Currently, though, they don't seem to be much help. I'm not sure if this is a limitation in their ability to use their context window, or simply that they've been trained to reproduce code as seen in the wild, with all its flaws.
Keeping complexity down is always a conscious act. Because you need to go past the scope of the current problem and start to think about how it affects the whole project. It’s not a matter of convention, nor refactoring. It’s mostly prescience (due to experience) that a solution, even if correct and easy to implement, will be harmful in the long term.
Architecture practices is how to avoid such harmful consequences. But they’re costly and often harmful themselves. So you need to know which to pick and when to start applying them. LLM won’t help you there.
I agree. I do wonder if what I'm seeing is a limitation of the reasoning power of LLMs or if it's just replicating the patterns (or lack thereof) in the training data.
Preproduction code was always cheap or even free. Sales people have been selling software that didn't do what was on the tin since the dawn of time. Those features cost 0 dollars to write!
Production code. Especially production code with bugs is expensive. It can cost you customers, you can even get negative money for it in the form of law suits.
Coding agents are great for preproduction and one offs. For production I really wouldn't chance it at any scale above normal human output.
Except here's the thing, that's the sort of code that was extremely expensive before, in large part because of our day jobs (which still to this day require mindfulness and can't just be vibe-coded).
However, an extra script here or there to make your life easier, adding extra UI features based on some datapoint to your internal dashboard, etc.: these were things that could have taken a few days you didn't have to get exactly right, and now they can be done with only a few minutes of attention.
I think back to my past jobs, where there were people who'd work weekends on random (not too technically difficult to implement) efforts that probably got them promoted, but that we would never make time for during the regular work week, and that now would take next to no time to implement with AI.
Anyone with any small amount of creativity for this sort of thing could really make a big difference on improving the productivity of all sorts of team wide investigations as a running background task they have during their regular work.
I came here exactly to point out what I'm glad to see is #10. "Free as in puppies" is a wonderful way to put it.
Every time I open LinkedIn I'm scared of how many big heads have taken the wrong lesson that coding being almost free == free engineering. So many bait posts asking engineers why anyone would need to pay them any longer, or being glad they're generating millions of lines a month... this is going to end badly.
> 10. Code is cheap, but maintenance, support, and security aren’t.
I also keep circling around this point. So many software repositories in the AI space seem to follow a publish-and-forget pattern. If you can simply show that you have the patience to maintain a project, ideally with manual intervention instead of a fully autonomous AI, then you already have an outstanding project.
I had a business owner tell me that they don't need to hire juniors anymore because Claude can do all of that work for them. This was not a software shop, so it's not even about writing code, but I also thought that was something that will bite them in the near future. A business that is not investing in juniors is a business that is not investing in the future.
The role of AI in non-software shops is going to be interesting. To a great extent it's not competing with devs, it's competing with Excel. However bad a system your AI can produce, it can't compare to the workflows that a group of non-techies armed only with Office can produce.
On the other hand, like giving a supercar to a teenager, this just enables them to get into trouble faster.
(the "my vibe coded app deleted prod!" stories are funny schadenfreude when they happen to SV startups, whose whole business is pretending to know better. When this happens to a small business who've suddenly lost all their finanacials and now maybe will lose their house, it's a tragedy. And this can happen on a much larger, not AI-related scale, like Jaguar Land Rover: https://www.bbc.co.uk/news/articles/cy9pdld4y81o )
> The role of AI in non-software shops is going to be interesting
I have friend in west Texas who does industrial electrical gear sales (like those giant spools of cable you see on tractor trailers). He’s 110% good old boy Texan but has adopted and loves AI. He says it’s been a huge help pulling quotes together and other tasks. Coincidentally he lives in Abilene where one of the stargate campuses are going. Btw, the scale of what’s being built in Abilene is like nothing I’ve ever seen.
Agreed, but a worrying number of managers and leaders spend time there for reasons I never fully understood, so it offers a glimpse into their worldview.
The issue is that when you gaze long into an abyss, the abyss also gazes into you.
I am in India; junior developer hiring is all down. AI has reduced offshoring to India and eliminated the need for janitor work (often offloaded to juniors).
Many people are finding it difficult to even land internships.
The most affected areas are sysadmin, devops, and frontend, where you'll have a very hard time getting any offer.
Companies like BrowserStack are withdrawing campus placement offers.
Meanwhile, I am writing apps for my own use and have reached 10,000+ monthly active users already, even though I am making zero money from doing all this, but it's fun.
Looking at the entire market in Europe, it is also down, but that is not due to "AI"; it's because they are the easiest to fire with the least consequences. There is a global recession looming, despite Wall Street saying otherwise.
Guy works for the Overture Map Foundation, with Amazon, Microsoft etc. being sponsors. He has been boosting AI all over the Internet. I'm sure Microslop and Amazon are very happy with these efforts.
I'm glad that "10 ways to do X" submissions are allowed as long as they boost AI.
Are you suggesting that Microsoft and Amazon's sponsorship of Overture comes with an understanding that people who work on Overture will spend their time writing articles that "boost AI"?
Does "boosting AI" include opening an article with "Frontier models are really good at coding these days, much better than they are at other tasks"?
Can't speak for the former, but the latter question: yes.
"Product is really good at X, much better than at Y" does not imply that it's bad at Y, and even if it did, if you're targeting an audience that only cares about X, who gives a shit about Y? Might as well throw Y under the bus to boost the perceived effectiveness of product at X even more in comparison.
This is such a weird argument, besides the obvious #10 (which will bite back with a vengeance), because... code can't be cheaper than free!
Since at least the early 80s a LOT of very important code wasn't cheap, it was free. Both free of cost (you could "just" download it and run it) but also free as freedom-respecting software.
I just don't get the argument that cheap is new. Cheap is MORE expensive than free!
Short-short version, code will still be accruing value in proportion to how much of the real world it has encountered. The bottleneck on building valuable code will be how much real world there is to go around. As is so often the case, what may initially seem to kill SaaS will actually make them stronger as they end up with more exposure to the real world than some random guy's random AI code.
It’ll be priced slightly higher than the cost to actually run. But it’s still not clear what the real cost of the big models is. They seem very subsidised, but by how much?
It remains an unproven hypothesis. The revenue of the top 2-3 labs is still growing nearly exponentially, which is the ultimate piece of data that settles the question empirically for now. Benchmark scores aren't really proof. Benchmaxxing is possible, for example. Only revenue numbers (and gross margins) count.
The ultimate piece is not revenue but profit. At some point these enormous investments will have to be earned back. Good luck with that when open weight models are also continuously improving, have cheap providers and for many are already very usable.
The other point to make is that companies are starting to worry about the risks of externally hosted models.
This is at multiple levels if you have a remote API call as a key part of your workflow/software system.
1. Price risk - might be affordable today - but what about tomorrow?
2. Geopolitical risk - your access might be a victim of geopolitics (seems much more likely than it used to be).
3. Model stability/change management - you've got something working, then the API gets 'upgraded' and your thing no longer works.
If you are running on open weight models, you are potentially fully in control (even if you pay somebody to host, you'd expect there to be multiple hosting options, with the ultimate fallback of being able to host yourself).
You can easily develop with models like GLM 5.1 and Kimi k2.6 at a fraction of the cost of GPT 5.5 or Opus 4.7. Requests often cost just a few cents.
Open-source models have caught up tremendously recently. Those who can’t or don’t want to invest a lot of money can already develop with Kimi and GLM without any problems. We don’t have to wait another year for that.
Tried DeepSeek 4 w/ CC yesterday, and watched my usage tick up by only $0.01 at a time while doing plenty of high-token-count tasks. I understand it's currently at a discount, but even after that expires, the same-quality output will be available at a fraction of the cost of the expensive models.
From experience, the same level of usage would have left me stranded on my CC 5 hr limit within an hour.
There were some difficulties with tool calls, in particular with replacing tab-indented strings, but taking no steps to mitigate that (which meant the model had to figure it out every time I cleared context) only cost relatively few extra tokens -- and it still came in well under 4.6, never mind 4.7. And of course, I can add instructions to prevent churning on those issues.
I have no reason to go back to anthropic models with these results.
Sure, but there will always be some monstrosities like Mythos that'll pwn all software written by local models in 0.01 seconds, thus forcing people/companies to use the most advanced paid models to keep up and stay unpwned for 1 second longer.
You cut off a generation of juniors from employment and learning, the seniors are gone, and it's all harnesses and AI systems.
I'm not all gloom and doom, but the treatment of junior engineers is something I think we will either regret or rejoice. Either we'll have a spur of creative people doing their own independent thing, or we'll have lost a generation of great engineers.
If you fire all your SWEs they won't sit around twiddling their thumbs waiting for an AI collapse, they'll career shift. Maybe to an unemployment line and/or homelessness, maybe to something else productive, but either way they'll lose SWE skills.
If you close down all the SWE junior positions you'll strongly discourage young people training in the field. They'll do something else.
Then if you want to go back, who will you hire for it?
They are large language models. Not automated development machines. They hallucinate.
The goalposts have not shifted since 2023 or so. Make an LLM that doesn't blatantly disregard knowledge it has, and instructions it has been given, over and over, and you win. If trillions of USD of investment can't do it, I'd be curious to see what can.
There are definitely automated dev systems, of which an LLM is a part. The remaining part may be called a 'harness' or whatever. The quality of the generated software is another matter.
If the AI is not good enough, then don't fire the devs. If/when the devs are no longer needed, I don't see why the need would return later, that was my point.
A harness like Claude Code does not turn an LLM into a software developer.
If that was the case companies could just have their project managers managing Claude Code instead of developers, and they would immediately realize that using Claude Code to develop software is just as complex and geeky as it ever was - nothing changed in that regard.
A harness and a bunch of skills is just the new "think step by step" prompting technique. Don't just let the LLM rip and write a bunch of code, but try to get it to think before coding, avoid things like churning the code base for no reason, and generally try to prompt it to behave more like a developer not an LLM. Except it still is an LLM.
A coding agent is really not much different from a chat "agent" in this regard. You've got the base LLM, then a system prompt trying to steer it to behave in a certain way: always suggest a "next step", keep to a consistent persona, etc. None of this actually makes the LLM any smarter or turns it into a brilliant conversationalist, any more than the coding agent's system prompt magically turns it into a software developer.
The problem of "instant legacy" systems: something that's vibe coded and reached unmaintainable by either the AI or humans, but is also now indispensable because users are relying on it.
Some of that is already there .. but the users generally have nowhere else to go and ineffective pushback. "Enterprise software" has been awful for decades, things like Lotus Notes and SAP. Everyone hates Windows; everyone continues to use Windows.
Users don't currently trust software. Look at what we've done to them - can you blame them?
The consumer space is about extracting every ounce of personal data possible.
The b2b space is about "maximizing customer value" - that is, not maximizing the value of your product to the customer, but maximizing the value of the customer to your business. Lock them in and lock them down, make your product "sticky" so they can't leave without immense cost.
There will always be competition. For every company negatively impacting customer experience and their own ability to compete, there will be others happy to step in and take advantage of that.
Not all code is cheap. Some code remains very expensive.
But the idea that some code is cheap and some code is expensive is not new.
The only new thing is that there are some adjustments to how you assess the value of the code you're presently working on, or about to work on.
AI has absolutely expanded the set of code that is cheap, and if you can make a thing easily with AI, then so can someone else. That project is unlikely to result in valuable code. Which is not to say it doesn't have utility; just that its monetary value is low.
Code is a liability - the more there is, the higher potential for bugs and poor performance. I'd recommend treating cheap code like cheap toxic waste, and try to minimize how much is generated.
"We" should not do anything. The LLM industry should go and find solutions for the problems they created, themselves. Not offload it to others through sneaky influencer posts. And we should hold them responsible, should they not be able to address the problems they are creating.
I used to work as a VP, and part of my responsibilities was to chop up tasks into self-contained work units that could be easily assigned to random devs. This was both morally problematic for humans (i.e., EVPs forced treating human = CPU) and very optimistic when it came to individual dev capabilities and domain knowledge. However, this style is precisely what works well with agentic AI coding, and I have no qualms about using it.
#10 needs more emphasis than it receives. Cheaper code doesn't automatically lead to good product decisions.
Instead of focusing on whether you can build it, the scarcer resource becomes whether you should build it. And most teams lack a clear process for addressing this latter question. Requirements are collected in all sorts of places without ever being prioritized in an organized fashion. This is exacerbated by cheaper code. With cheaper code, you can release five times what you used to be able to release in a given period of time, but only if you knew which five products you needed.
For most teams, whether or not you can say no to building something is ambiguous at best, at least if you wish to stay on that team and at that company. It's definitely one of the things that has made me vote with my feet in the past. With agentic coding, the ability to say no is pretty much gone because the perception is that it's just one more parallel thing we can throw an agent at.
The thing I see from agentic adoption that I find lamentable as a software engineer is that timeline expectations have collapsed to absurdity. You can plan a project to do a major migration, do all the estimations on how long something will take, and if you give an answer that says weeks and cite the evidence, product and leadership will now claim it should take days, citing their ai's design.
It's exhausting. Even if you are an expert, you now have lost the implicit trust that came from years of building political capital, shipping efficiently, and delivering value for multiple companies, because a different prompt with different context from the one you provide gave a different answer than what you did.
During delivery, if you read your code produced line-by-line and review for correctness, and put in additional guardrail automations that slow the automated build, and ship 4 times a day with a defect rate of 5.4% with agentic coding, you are compared unfavorably to teams with a change defect rate of 15.7% that ship 13 times per day, because you are too slow.
And you are individually compared with whole team outputs. Even if you deliver at a rate ten times greater than the worst contributor at your company, if you are not outputting code at the rate of an entire team of 5, you are not meeting the expectations of product and leadership anymore.
All of this is to say: yes, people are looking at software engineers as both the bottleneck and unnecessary, even at high-technology companies, right now. They look at them that way because they have their own agents, and agents are sycophantic and biased to conclude that the engineering claims are wrong.
Would add my biggest tip to that: TDD. Most people omit it.
There is a difference between:
- write code, write tests
And
- write tests, write code (a minimal sketch of this ordering follows below)
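A minimal sketch of the second ordering, using Node's built-in test runner (the module and function names are hypothetical):

```ts
// Test-first: this file exists before lightSdk.ts does. The
// implementation is only "done" when these assertions pass.
import { test } from "node:test";
import assert from "node:assert/strict";
import { clampBrightness } from "./lightSdk"; // hypothetical module

test("brightness is clamped to the device's 0-100 range", () => {
  assert.equal(clampBrightness(150), 100);
  assert.equal(clampBrightness(-5), 0);
  assert.equal(clampBrightness(42), 42);
});
```

Handing the agent a failing test and asking it to make it pass keeps it anchored to behavior you specified, not behavior it invented.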
Had another agentic (vibe) coding experience which confirmed that for me: creating an SDK for a $500 light so I can control it from my Steam Deck instead of my phone (no SDK existed before yesterday). For anyone interested, I'm teaching my vibe coding (I meant agentic) tutorial at PyCon next week. The 3-hour-long version should be posted to YouTube soon thereafter.
Make usable software. Cheap code means that you can create a lot more prototypes to then perform usability tests by finding a user and sitting next to them. I mostly worked on internal apps lately, so perhaps it's much easier for me to do than it is for some others.
The pure "coder" role, per that paper, died out almost immediately. Nowadays it's done by compilers (a deterministic automation). The distinction between analyst and programmer held out a bit longer - ten years ago I was working somewhere that had "business analysts", essentially requirements-wranglers. It's possible that the "programmer" job of converting a well-defined specification into a program is also going to start disappearing.
.. but that still leaves the specification as the difficult bit! It remains like the old stories with genies: the genie can give you what you ask for. But you need to be very sure what you want, very clear about it, and aware that it may come with unasked-for downsides if you're not.
I think you can boil down most of the list to: Understand what you want to do.
I’m not convinced about rebuilding repeatedly as a learning tool, though. As relatively quick as it is, it overemphasizes the front-line problems you face early. Those tend to be simpler, more straightforward issues that can be more quickly (and cheaply) solved by a few minutes of thought.
Code might be cheaper but it's still a liability. In that regard anything that's not been properly designed and documented is going to be an even bigger issue.
Stick to patterns which were painful before. For example, I recently refactored a project written in TS to use better-result instead of throwing errors. Without Claude writing out all of that boilerplate I could not have imagined transitioning to this. Right now the cost of "doing it right" is decreased so much there is no reason to ship slop / poorly thought out code.
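I can't vouch for better-result's exact API, but the shape of the refactor is roughly this (a generic hand-rolled sketch, not the library's actual types):

```ts
// Errors become values the type checker forces every caller to handle,
// instead of exceptions that silently propagate.
type Result<T, E> = { ok: true; value: T } | { ok: false; error: E };

function parsePort(raw: string): Result<number, string> {
  const n = Number(raw);
  if (!Number.isInteger(n) || n < 1 || n > 65535) {
    return { ok: false, error: `invalid port: ${raw}` };
  }
  return { ok: true, value: n };
}

const port = parsePort(process.env.PORT ?? "8080");
if (!port.ok) {
  console.error(port.error); // the compiler made us handle this branch
  process.exit(1);
}
console.log(`listening on port ${port.value}`);
```

Writing this kind of boilerplate across a whole codebase by hand is exactly the painful-but-mechanical change described above.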
People should do what has always been needed: rather than focus on how hard or easy it is to build something, find what is needed, what is right, what is good, and what quality actually solves problems, and do those things.
I've found the get-shit-done tool[1] to be quite useful for forcing me to properly plan the implementation and ensuring the context remains small and relevant at all times.
It is slower than when I was just using Claude directly though.
I've tried this, it's honestly not worth the amount of time (and additional context) for the results. I've had more success prompting Claude with manageable and testable iterations.
Planning is good but get-shit-done just added too much planning in my opinion.
Every jira ticket I see now has acceptance criteria, reproduction steps, and detailed information about why the ticket exists.
Every commit message now matches the repo style, and has detailed information about what's contained in the commit.
Every MR now has detailed information about what's being merged.
Every code base in the teams around me now has 70 to 90%+ code coverage.
Every line of code now comes with best practices baked in, helpful comments, and optimized hot paths.
I regularly ship four features at a time now across multiple projects.
The MCP has now automated away all of the drudgery of programming, from summarizing emails, to generating confluence documentation, to generating slide decks.
People keep screaming that tech debt is going to pile up, but I think it's going to be exactly the opposite. Software is going to pile up because developing it is now cheap.
Most code before llms sucked. Most projects I on-boarded to were a massive ball of undocumented spaghetti, written by humans. The floor has been raised significantly as to what bad code can even look like, and fixing issues is now basically free if your company is willing to shell out for tokens.
The code hides an exception behind an if-then-else that defaults to the most common state, which isn't caught until it breaks things for the 1% of users who don't have that state.
The new feature quietly breaks a feature not covered by the acceptance tests.
The documentation is four times as long and nobody who relies on it can read it.
And I'm stuck spending my time going over tickets with a fine-toothed comb, reviewing PRs, and mentoring contributors to prevent all of this garbage from ending up in the live code.
Software to do what, though ?!
Coding, maybe 10% of a developers job (Brooks "Silver Bullet" estimates 1/6), was never the bottleneck, and even if you automated that away entirely then you've only reduced development time by 10% (assuming you are not doing human code review etc).
I would also argue that software development as a whole (not just the coding part) was also typically never the bottleneck to companies shipping product faster, maybe also not for automating their business faster (internal IT systems), since the rest of the company is not moving that fast, business needs are not changing that fast, and external factors that might drive change are not moving that fast either.
I think that when the dust settles we'll find that LLM-assisted coding has had far less impact than those trying to sell it to us are forecasting. There will be exceptions of course, especially in terms of what a lone developer can do, or how fast a software startup can get going, but in terms of impact to larger established companies I expect not so much.
One thing that I would point to today to show that the landscape is different - the average programmer/engineer/developer today has no actual admin staff. Fred Brooks' example team setup of "The Surgical Team" has more support staff than programmers. Anyone who responds to the questions like "who manages the calendar" and "who manages the documentation" will state that the engineers doing it themselves offer the best results. Same goes for designing test cases, performing rollbacks, etc.
The fact of the matter is that any self respecting engineer today works in an environment where pro-activity and self-sufficiency are prerequisites. Managing your calendar and workload, communicating to leadership and users, these are all common tasks that would have been another person a generation ago.
So when discussing writing code more efficiently and aiding in software development, what I am essentially seeing is more people trying everything they can to offload work that used to be another person's job anyway. If you care about communication - you offload coding standards. If you care about security - you offload feature refactors, and so on.
In my opinion, I think that at some point we'll either realize that we need highly competent people _and also_ regular people to help us ensure the work gets done to a good standard. Or, we will each eventually survive by working alone in a room with a suite of AI tools, and wonder why we're still making software in the first place.
One of the lesser discussed Brooks essays is actually the best description of AI-first development: the “surgical team”. It just turns out the surgeon is the only human, and like many modern surgeries, the surgeon is controlling a robot instead of operating by hand.
It would be interesting to reread The Mythical Man-Month and see how each essay applies to AI-first development.
To me, architecture starts all the way from the top - even before you write a single line of code, you do the DDD (Domain-Driven Design) and then create a set of rulesets (eg. use the domain name as table prefix) and contexts and then define the functionality w.r.t to that architecture. LLMs can do all this - only if you ask them to explicitly. So, they are pretty useful to brainstorm with, but not autonomously design reliably and push it to production with your eyes closed and support a 100,000 user base. It's a far cry from that.
But sure, you can upsell to management about the vanity metrics like lines of code and get that promotion with LLM. But, it's still not software engineering.
TL;DR: it's very effective, as it directly tests models on REAL codebases: "The benchmark is constructed from GPL-style copyleft repositories and private proprietary codebases". The use case is very real.
It's "not software engineering" but neither was what most people writing code did before LLMs.
> Without a clear architectural pathway / direction, that code is just useless. It's not tech debt. It's just a bunch of random strings
This is pretty clearly false. It's a bunch of random strings that you can compile and run to do what you want. It's more akin to a black box: a compiled, closed-source dependency.
Many people are missing the fact that LLMs allow ICs to start operating like managers.
You can manage 4 streams now. Within a couple years, you may be able to manage 10 streams like a typical manager does today.
IME, LLMs don't speed you up that much if 1) you're already an expert at what you're doing (inherently not scalable), 2) you're only working on one thing (which doesn't make sense when you can manage multiple streams), or 3) you're doing something LLMs are particularly bad at (not many coding tasks remain in that bucket, but there are definitely still some).
That's a failure of the existing infrastructure to allow someone to do this.
LLM coding will work like this.
If you're letting LLMs go wild with no system in place to automatically know they're moving in the right direction and "shipping" things up to your standards, the failure is you, not the LLM.
Software engineers were always creating, maintaining, and updating automated business processes. In olden days we had "computers": rooms full of people computing things by hand. Those rooms of people were replaced with code running on von Neumann machines.
The economic tension has always been a resistance to granting programmers the status and class of management. Instead, management wants to treat programmers like labor.
It still has nothing to do with software engineering. All the good code was written by humans. AI took it, plagiarized it, laundered it, and repackaged it in a bloated form.
Whenever I look deeply at an AI plagiarized mess, it looks like it is 90% there but in reality it is only 50%. Fixing the mess takes longer than writing it oneself.
I think you might be in serious denial.
Of course writing code isn't the only task of a software engineer, but it's an important one.
There wouldn't be so much controversy if that weren't the case.
So you're saying software engineers don't write code? Just because there are other things SWEs do doesn't mean coding has nothing to do with the job.
It's arguably a pretty important part. Would you really hire a software engineer who can't code?
You wouldn't call someone who takes LLM output and shoves it in a book an author. IDK why this distinction doesn't apply to devs too.
Why do tech workers act shocked that people hate this junk being force-fed to them, to the point that they are now resorting to violence to reject it?
You think telling humans with specialized crafts that they don't matter is good politics? Good grief.
My workflow, at a high level, is:
1. I write a high level spec. Not as high level as a single-sentence prompt, but high level enough to capture my top requirements.
2. I prompt the AI to interview me about the spec to clear up any ambiguity or open questions, then when I’m satisfied, the AI writes a longer spec, which I then review.
3. Then I prompt the AI to write an implementation plan based on the spec. I might just skim this, and by this point I might be asking the LLM more questions than it’s asking me.
4. Now I hand it off to the implementer agent.
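A minimal sketch of how those handoffs could be wired together. This assumes the Claude Code CLI's -p print mode; substitute whatever model or agent you use, and treat all file names here as placeholders:

    # pipeline.py: spec -> interview -> full spec -> plan, with a human
    # review gate between stages. `ask` shells out to `claude -p`;
    # swap in your own model call.
    import subprocess
    from pathlib import Path

    def ask(prompt: str) -> str:
        out = subprocess.run(["claude", "-p", prompt],
                             capture_output=True, text=True, check=True)
        return out.stdout

    def gate(text: str, name: str) -> str:
        # Human-in-the-loop review: dump to a file, edit it, continue.
        path = Path(name)
        path.write_text(text)
        input(f"Review and edit {path}, then press Enter...")
        return path.read_text()

    spec = Path("spec.md").read_text()  # step 1: my high-level spec
    print(ask("Interview me about this spec; list ambiguities and open "
              "questions, one per line:\n\n" + spec))  # step 2: answer these,
                                                       # fold back into spec.md
    full = gate(ask("Expand this into a detailed spec:\n\n" + spec),
                "full_spec.md")                        # step 2, continued
    plan = gate(ask("Write an implementation plan for:\n\n" + full),
                "plan.md")                             # step 3
    # step 4: hand plan.md off to the implementer agent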
This isn't cowboy coding; it's not even agile. It's waterfall. The problem with waterfall was that it's too slow, especially with the serialization/deserialization cost of routing all of this documentation through meatbrains. The LLM is doing just as much work, true, but faster.
The thing I found surprising was that, while LLMs are still pretty awful at writing as an art form, they are better technical writers than I have the time to be, especially when writing for an audience of other LLMs.
We’re still going to have handwritten software, just like we still have handwritten assembly. It just won’t be the norm.
Your linter should identify all issues - including architectural and stylistic choices - and the AI agents will immediately repair them.
It's about 1000x faster than a human coder at repairing its own mess.
If a linter could deterministically identify bad architecture, you wouldn't need an LLM, your linters could just write your code for you. The vibe coding takes are just getting more and more empty-headed...
Linters statically check code and provide deterministic recommendations; LLMs are used to make judgment calls. I specifically write my project's linters to make recommendations aimed at LLMs.
This is how you save on token usage, so your LLMs aren't wasting tokens on static analysis that a linter could do for free.
That's at least how I make my linters.
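A toy sketch of what I understand that to mean: a deterministic check whose message is phrased as an instruction the agent can act on directly. The specific rule and wording are invented for illustration:

    # lint_for_agents.py: static checks that emit recommendations an
    # agent can apply without re-deriving the project rule itself.
    import ast
    import sys

    def lint(path: str) -> list[str]:
        with open(path) as f:
            tree = ast.parse(f.read(), filename=path)
        notes = []
        for node in ast.walk(tree):
            # Project rule: no bare `except:`; the message tells the
            # agent exactly what change to make.
            if isinstance(node, ast.ExceptHandler) and node.type is None:
                notes.append(
                    f"{path}:{node.lineno}: bare `except:` found. "
                    "Agent: catch the specific exception this block "
                    "handles and let everything else propagate.")
        return notes

    if __name__ == "__main__":
        for p in sys.argv[1:]:
            print("\n".join(lint(p)))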
a) That's not what a linter is built for; it's a tool with a very specific role.
b) You must never have seen an LLM expose secrets in plain text, or reach for the most convoluted approach you can think of.
Well, this explains why so much software nowadays is so slow, buggy, and chaotic.
Can that happen without you? I would assume this is the next step. I don't find it either good or bad, but I'm genuinely curious where this all goes.
Maybe toward autonomous/sovereign capital with no humans in the loop, not even at the level of (asset) ownership.
Does "basically free" to you mean for you just that someone else is paying the cost? That's a mentality that has only made the world worse when applied to a wider range of things. Be hesitant in that line of thinking, I suggest, and consider the future.
Now, the AI-first engineer might still have to deal with hallucinated things. But they can also use the newfound cheapness of code to improve their workflow. Instead of just testing on localhost and manually deploying to prod, you can have a full dev/staging/prod pipeline for free. Tech debt can be one command away from being refactored. The open source package that doesn't quite do what you need it to? Fork it and write a patch; the AI will be able to maintain the patch. Oh, you need that bespoke feature for management? No problem, done in a one-hour AI session.
Each of these things might arguably be insignificant on its own, but netted over a project's lifetime they really build up.
I.e. it's making good output better, but it's making mediocre output (which is most output) worse by adding volume and the appearance of quality, creating a new layer of FUD, stress, tedium, and unhappiness on top of the previously more-manageable problems that come with mediocre output.
I'm still seeing this even with the newest models, because the problem is the user, not the model - the model just empowers them to be even worse, in a new and different way.
> The MCP has now automated away all of the drudgery of programming, from summarizing emails, to generating confluence documentation, to generating slide decks.
I wonder about hallucination, though. Reading someone's writing doesn't take all that long anyway.
Is programming supposed to suck all the time? Am I doing it wrong? I mean yeah, sure, it sucks sometimes, but overcoming that "suck" is where I feel progress and growth. If we decide to optimise that away...What the fuck am I doing here? No offence to managers, but if everybody is a manager, is anybody?
You know this is the exact same thing said during Opus 4.6, right?
That makes it hard to believe, because it's the same "last week's model was so far behind you can't even comprehend it" meme that's been going on throughout the last year.
More info dumped into tickets and projects is great for understanding, for both people and LLMs. But hopefully it's not LLM-generated.
Yeah, and for Sonnet 3.5, or even GPT-4o, because it was true for many. Different people reach the acceptance stage at different times.
spicyusername said this exact same thing about Opus 4.6?
or is there more than one person on HN, and perhaps they have different opinions?
You're implying it's a hype train when in fact it's an adoption curve.
Is it? Or is it also explainable that the models are not getting better, but people are still adopting them?
If the models were getting better, we'd be seeing mobile apps ship new features at 10x the previous rate, or websites with four times the number of features. But we're not.
Yeah, about that: I looked into Cursor's usage stats, and daily I'm going through the equivalent of a bacon sandwich in my canteen. So not much, but this is at today's prices and with very light usage of Sonnet.
I was for a time using Opus 4.6 for a heavier task, and even then I think the cost was well into the double-digit percentages of my salary.
Opus 4.7 reportedly uses more tokens overall and while they reportedly kept rates stable, that is not a given.
Just wait until, with increasing costs, the first company figures out that it can offer this as a benefit, and then maybe scraps it altogether in the name of cost cutting.
Current ventures feel more like a pilot program (you bought a private jet, now get a couple of your pilots to actually fly it) than like having an entire fleet of jets, paying salaries to all those pilots, and accounting for their fuel charges.
Right now all expenses are relatively "someone else's {problem,money,infra}".
I had an idea to improve performance in one of the slowest but also one of the most critical parts of the codebase I own, so I asked Claude to re-write it. I gave it exact instructions. It got most things right but key things wrong. I caught the bugs and then asked it for some optimizations, and it came up with a number that were quite good. As I read the code, I saw more and more opportunities for improvement. To make a long story short, code that used to require upwards of 30 seconds in a particularly heinously ugly stress test now finishes in about 8ms.
My original code was terrible. That's indisputable. Maybe the bar for improvement was low. Still, the algorithms and optimizations that I was able to devise while using Claude Opus 4.6 surprised me. I don't often feel pleased with the cleverness of my work, but in this case the work really is stellar -- or at least enough of an improvement that it feels stellar.
Could I have written it without Claude? Yes, definitely. But I was able to produce the code in a few days while having a fever of 100-102, which I definitely couldn't have done on my own.
Moreover, it was plainly apparent to me, while I worked, that I was better able to think about high-level architecture and design because I wasn't stuck on the details of actually writing the code. The code itself, line by line, isn't difficult if you have familiarity with bitwise operations, but there's enough of it, with enough branches, that it's difficult as a whole and the work of writing it would have consumed much of my attention and energy.
Claude missed a huge amount. I improved performance by more than 95% after it told me there were no other opportunities for major optimizations.
Using the tool freed me, I found, to think more clearly, more deeply, and more effectively. Does the result create tech debt? I don't think so. I've pored over it and can't find anything lacking in style, design, or architecture. It's very well documented. Claude wrote tests, as I requested, for everything, including all the bugs that Claude missed and I caught. Test coverage is probably 100%, but, much more importantly, tests exhaustively cover cases, including edge cases, that would have, again, been difficult to enumerate and write by myself.
I doubt Claude could have done all this as well if the codebase and tests weren't already as mature as they are. I really wonder about the feasibility and advisability of greenfield software development with these tools. And a junior developer absolutely couldn't have accomplished what I did. The tool would have produced far worse work in the hands of someone who doesn't know what they were doing.
So I agree with you and disagree: I'm turning a corner on these tools, but I absolutely could not just let rip and trust one to do anything correctly. Moreover, I could not be less impressed by the MCPs written by people in my company. The bare tool by itself is better, though maybe that says more about my company, and my regard for the people I work with, than about the tools.
While I admire your strength in attempting it, this just adds one more brick to the wall of precedents for "what's stopping you from just sending one prompt? It'll just take 30 seconds and you can do it in bed!"
You could sum it up in a simple equation: Features Shipped = Features/Hour * Developer Hours.
Developer hours have remained constant while F/H has gone up (double F/H at constant hours and you ship twice as many features). I am of the opinion that the ideal is the inverse: hold features shipped constant and let developer hours fall.
https://somehowmanage.com/2020/10/17/code-is-a-liability-not...
Every american learns how to live with debt :)
https://www.federalreserve.gov/releases/z1/dataviz/z1/nonfin...
Engineering is hard. It's always going to be hard. I'm glad that AI makes some parts of it easier, and we (software engineers) can focus on engineering, that's nice.
Code is NEVER cheap. Just because, at current completely unrealistic AI pricing, using agents is cheaper than hiring juniors, does not make code cheap. It makes producing code cheap, which has always been low-cost. Every line of code is a cost, is a maintenance burden, is complexity. An AI, even with somehow infinite context window, will cost more money the more code you have.
Could you replace a whole team of engineers with AI? Probably, yeah. Could you simply fire everyone at your company and close it down, without much of a problem? Also probably yes, for most companies.
AIs can help with debugging, can help with writing code, with drafting designs, they can help with almost every step. The second you let OpenAI, or Anthropic, take full code ownership over your products, and you fire the last engineer, is the time when the AI pricing can go up to match what engineers make today. You've just reinvented the highly paid consultant.
Or you could take the middle-ground and hire good engineers, make sure they maintain an understanding of the codebase, and let them use whatever tools they use to get the job done, and done well. This is the way that I've seen competent companies handle it.
Relative to what?
I don't understand people dismissing the massive decrease in both cost of producing code and the speed of producing code.
Before AI, people running businesses had similar issues as people have with AI now, but the costs were much greater.
They could hire someone to write them a prototype for their idea, but it would cost on the order of thousands of dollars and take weeks at a minimum!
Now it can cost them $20 and be done in a few days. The feedback loop is the bottleneck.
In other words, just because more code can be produced quickly does not mean that it is cheap.
edit: Maybe what I'm hearing is that your point is LLMs may change that POV, but I think that is TBD.
Software has been such a gold mine exactly because maintenance costs are minimal when you scale, compared to the revenue. The upfront costs are expensive, but once you have the software built, in most cases it's relatively cheap to maintain.
"Taste" is used in many cases, I suspect, to give a name to the collection of practices and strategies developers use to keep their code and projects at a manageable level of complexity.
LLMs don’t seem to manage complexity. They’ll just blow right past manageable and keep on going. That’s a problem. The human has to stay in the loop because LLMs only build what we tell them to build (so far).
BTW, the essay that introduced the big ball of mud pattern to me didn’t hold it up as something entirely bad to be avoided. It pointed out how many projects — successful or at least on-going projects — use it, and how its passive flexibility might actually be an advantage. Big ball of mud might just be the steady state where progress can be made while leaving complexity manageable.
1. Lack of knowledge of existing conventions, usually caused by churn of developers working on a project. LLMs read very quickly.
2. Cost of refactoring existing code to meet current best practices / current conception of architecture. LLMs are ideal for this kind of mostly mechanical refactoring.
Currently, though, they don't seem to be much help. I'm not sure if this is a limitation in their ability to use their context window, or simply that they've been trained to reproduce code as seen in the wild, with all its flaws.
Architecture practices are how you avoid such harmful consequences. But they're costly, and often harmful themselves, so you need to know which to pick and when to start applying them. An LLM won't help you there.
Even for the well-engineered stuff, I suspect there is a strong bias toward standalone projects over larger multi-component systems.
Production code, especially production code with bugs, is expensive. It can cost you customers; you can even get negative money out of it in the form of lawsuits.
Coding agents are great for preproduction and one-offs. For production, I really wouldn't chance it at any scale above normal human output.
However, an extra script here or there to make your life easier, or extra UI features based on some datapoint for your internal dashboard, etc.: these are things that would have taken a few days you didn't have to get exactly right, and now they can be done with only a few minutes of attention.
Anyone with any small amount of creativity for this sort of thing could really make a big difference in the productivity of all sorts of team-wide investigations, run as a background task alongside their regular work.
Every time I open LinkedIn I'm scared of how many big heads have taken away the wrong lesson, that nearly-free coding == free engineering. So many bait posts asking engineers why anyone would need to pay them any longer, or bragging about generating millions of lines a month... this is going to end badly.
I also keep circling around this point. So many software repositories in the AI space seem to follow a publish-and-forget pattern. If you can simply show that you have the patience to maintain a project, ideally with manual intervention instead of a fully autonomous AI, then you already have an outstanding project.
On the other hand, like giving a supercar to a teenager, this just enables them to get into trouble faster.
(the "my vibe coded app deleted prod!" stories are funny schadenfreude when they happen to SV startups, whose whole business is pretending to know better. When this happens to a small business who've suddenly lost all their finanacials and now maybe will lose their house, it's a tragedy. And this can happen on a much larger, not AI-related scale, like Jaguar Land Rover: https://www.bbc.co.uk/news/articles/cy9pdld4y81o )
I have friend in west Texas who does industrial electrical gear sales (like those giant spools of cable you see on tractor trailers). He’s 110% good old boy Texan but has adopted and loves AI. He says it’s been a huge help pulling quotes together and other tasks. Coincidentally he lives in Abilene where one of the stargate campuses are going. Btw, the scale of what’s being built in Abilene is like nothing I’ve ever seen.
The issue is that when you gaze long into an abyss, the abyss also gazes into you.
Many people are finding it difficult to even land internships.
The most affected areas are sysadmin, devops, and frontend, where you'll have a very hard time getting any offer.
Companies like BrowserStack are withdrawing campus placement offers.
Meanwhile, I am writing apps for my own use and have reached 10,000+ monthly active users already, even though I am making zero money from doing all this, but it's fun.
I'm glad that "10 ways to do X" submissions are allowed as long as they boost AI.
Does "boosting AI" include opening an article with "Frontier models are really good at coding these days, much better than they are at other tasks"?
"Product is really good at X, much better than at Y" does not imply that it's bad at Y, and even if it did, if you're targeting an audience that only cares about X, who gives a shit about Y? Might as well throw Y under the bus to boost the perceived effectiveness of product at X even more in comparison.
Since at least the early 80s a LOT of very important code wasn't cheap, it was free. Both free of cost (you could "just" download it and run it) but also free as freedom-respecting software.
I just don't get the argument that cheap is new. Cheap is MORE expensive than free!
Free but you're responsible for maintaining it means it's not free. It's the same issue as maintaining your own fork. It's just an ongoing cost.
(Though as AI becomes autonomous enough to be the maintainer, that cost kind of goes away. Then it's just the cost of managing the "dev".)
Short-short version, code will still be accruing value in proportion to how much of the real world it has encountered. The bottleneck on building valuable code will be how much real world there is to go around. As is so often the case, what may initially seem to kill SaaS will actually make them stronger as they end up with more exposure to the real world than some random guy's random AI code.
This risk exists at multiple levels if a remote API call is a key part of your workflow/software system:
1. Price risk - it might be affordable today, but what about tomorrow?
2. Geopolitical risk - your access might become a victim of geopolitics (this seems much more likely than it used to be).
3. Model stability/change management - you've got something working, then the API gets 'upgraded' and your thing no longer works.
If you are running on open-weight models, you are potentially fully in control (even if you pay somebody to host, you'd expect there to be multiple hosting options, with the ultimate fallback of being able to host it yourself).
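One common way to keep that fallback cheap is to speak an OpenAI-compatible protocol and make the endpoint a config value, so a hosted provider and a self-hosted open-weights server (vLLM, llama.cpp, etc.) are interchangeable. A minimal sketch, with the URL, key, and model name as placeholder assumptions:

    # llm_client.py: sketch assuming an OpenAI-compatible endpoint.
    # Point LLM_BASE_URL at a hosted provider or at your own local
    # server; the defaults below are placeholders, not recommendations.
    import os
    from openai import OpenAI

    client = OpenAI(
        base_url=os.environ.get("LLM_BASE_URL", "http://localhost:8000/v1"),
        api_key=os.environ.get("LLM_API_KEY", "local-no-key"),
    )

    reply = client.chat.completions.create(
        model=os.environ.get("LLM_MODEL", "my-open-weights-model"),
        messages=[{"role": "user", "content": "ping"}],
    )
    print(reply.choices[0].message.content)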
If anything, I would bet that next year you could get today’s flagship performance for significantly cheaper via an open-weights model.
Open-source models have caught up tremendously recently. Those who can’t or don’t want to invest a lot of money can already develop with Kimi and GLM without any problems. We don’t have to wait another year for that.
From experience, the same level of usage would have left me stranded at my CC 5-hour limit within an hour.
There were some difficulties with tool calls, in particular with replacing tab-indented strings, but taking no steps to mitigate that (which meant the model had to figure it out every time I cleared context) only cost relatively few extra tokens, and it still came in well under 4.6, never mind 4.7. And of course, I can add instructions to prevent churning on those issues.
I have no reason to go back to anthropic models with these results.
"No moat" indeed.
I expect tomorrow’s models will be so much more capable that we will happily pay more.
But if not, we will still likely get today’s capabilities or more for cheap.
I don’t see a realistic scenario in which the AI genie is going back into the bottle because of affordability.
It seems like wishful thinking by people who dislike the new paradigm in software engineering.
(Timeframes are hyperbolic.)
I'm not all gloom and doom, but the treatment of junior engineers is something I think we will either regret or rejoice in. Either we'll have a spur of creative people doing their own independent thing, or we'll have lost a generation of great engineers.
We’ve been coasting along on a single generation who have ruled with iron fists.
If you fire all your SWEs they won't sit around twiddling their thumbs waiting for an AI collapse, they'll career shift. Maybe to an unemployment line and/or homelessness, maybe to something else productive, but either way they'll lose SWE skills.
If you close down all the SWE junior positions you'll strongly discourage young people training in the field. They'll do something else.
Then if you want to go back, who will you hire for it?
They are large language models, not automated development machines. They hallucinate.
The goalposts have not shifted since 2023 or so. Make an LLM that doesn't blatantly disregard knowledge it has, and instructions it has been given, over and over, and you win. If trillions of USD of investment can't do it, I'd be curious to see what can.
If the AI is not good enough, then don't fire the devs. If/when the devs are no longer needed, I don't see why the need would return later, that was my point.
If that was the case companies could just have their project managers managing Claude Code instead of developers, and they would immediately realize that using Claude Code to develop software is just as complex and geeky as it ever was - nothing changed in that regard.
A harness and a bunch of skills is just the new "think step by step" prompting technique. Don't just let the LLM rip and write a bunch of code, but try to get it to think before coding, avoid things like churning the code base for no reason, and generally try to prompt it to behave more like a developer not an LLM. Except it still is an LLM.
A coding agent is really not much different to a chat "agent" in this regard. You've got the base LLM then a system prompt trying to steer it to behave in a certain way, always suggest "next step", keep to a consistent persona, etc. None of this actually makes the LLM any smarter or turns it into a brilliant conversationalist, anymore than the coding agent giving the LLM a system prompt magically turns it into a software developer.
The consumer space is about extracting every ounce of personal data possible.
The b2b space is about "maximizing customer value" - that is, not maximizing the value of your product to the customer, but maximizing the value of the customer to your business. Lock them in and lock them down, make your product "sticky" so they can't leave without immense cost.
Company brain drain: knowledge leaves with your seniors if you decide to get rid of them, or they just leave due to the conditions AI creates.
I don't know if the above comes to fruition, there's a lot of questions that only time will answer. But those are my first thoughts.
But the idea that some code is cheap and some code is expensive is not new.
The only new thing is that there are some adjustments to how you assess the value of the code you're presently working on, or about to.
AI has absolutely expanded the set of code that is cheap, and if you can make a thing easily with AI, then so can someone else. That project is unlikely to result in valuable code. Which is not to say it doesn't have utility; just that its monetary value is low.
Instead of focusing on whether you can build it, the scarcer resource becomes whether you should build it. And most teams lack a clear process for addressing this latter question. Requirements are collected in all sorts of places without ever being prioritized in an organized fashion. This is exacerbated by cheaper code. With cheaper code, you can release five times what you used to be able to release in a given period of time, but only if you knew which five products you needed.
The thing I see from agentic adoption that I find lamentable as a software engineer is that timeline expectations have collapsed to absurdity. You can plan a project to do a major migration, do all the estimations on how long something will take, and if you give an answer that says weeks and cite the evidence, product and leadership will now claim it should take days, citing their ai's design.
It's exhausting. Even if you are an expert, you have now lost the implicit trust that came from years of building political capital, shipping efficiently, and delivering value for multiple companies, because a different prompt, with different context from the one you provided, gave a different answer than you did.
During delivery, if you read the produced code line by line and review it for correctness, put in additional guardrail automations that slow the automated build, and ship 4 times a day with a defect rate of 5.4% with agentic coding, you are compared unfavorably to teams with a change defect rate of 15.7% that ship 13 times per day, because you are too slow.
And you are individually compared with whole team outputs. Even if you deliver at a rate ten times greater than the worst contributor at your company, if you are not outputting code at the rate of an entire team of 5, you are not meeting the expectations of product and leadership anymore.
All of this is to say: yes, people are looking at software engineers as both the bottleneck and unnecessary, even at high-technology companies, right now. They look at them that way because they have their own agents, those agents are sycophantic, and they are biased to conclude that the engineering claims are wrong.
There is a difference between:
- write code, write tests
And
- write tests, write code
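Concretely, in the second ordering the tests pin the behavior before any implementation exists, so the agent has to satisfy them rather than write tests that bless whatever it produced. A toy sketch, where slugify and its rules are invented for illustration:

    # test_slugify.py: written FIRST; it fails until an implementation
    # satisfies it. slugify() is a hypothetical example function.
    from slugify import slugify

    def test_collapses_whitespace():
        assert slugify("Hello   World") == "hello-world"

    def test_strips_punctuation():
        assert slugify("Hi, there!") == "hi-there"

    # slugify.py: written SECOND, against the behavior the tests pin down.
    import re

    def slugify(text: str) -> str:
        # Lowercase, keep alphanumeric runs, join with hyphens.
        return "-".join(re.findall(r"[a-z0-9]+", text.lower()))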
I had another agentic (vibe) coding experience that confirmed this for me: creating an SDK for a $500 light so I can control it from my Steam Deck instead of my phone (no SDK existed before yesterday). For anyone interested, I'm teaching my vibe coding (I mean agentic) tutorial at PyCon next week. The 3-hour-long version should be posted to YouTube soon thereafter.
This means the code is also written by "AI".
It seems that, as an industry, we're hellbent on optimizing for the wrong thing.
Make usable software. Cheap code means you can create a lot more prototypes, then run usability tests by finding a user and sitting next to them. I've mostly worked on internal apps lately, so perhaps this is much easier for me to do than it is for some others.
Once upon a time, highly bureaucratic organizations tried to make a distinction between "analyst", "programmer" and "coder": https://cacm.acm.org/opinion/the-myth-of-the-coder/
The pure "coder" role, per that paper, died out almost immediately. Nowadays it's done by compilers (a deterministic automation). The distinction between analyst and programmer held out a bit longer - ten years ago I was working somewhere that had "business analysts", essentially requirements-wranglers. It's possible that the "programmer" job of converting a well-defined specification into a program is also going to start disappearing.
.. but that still leaves the specification as the difficult bit! It remains like the old stories with genies: the genie can give you what you ask for. But you need to be very sure what you want, very clear about it, and aware that it may come with unasked-for downsides if you're not.
I'm not convinced about rebuilding repeatedly as a learning tool, though. As relatively quick as it is, it over-emphasizes the front-line problems you face early. Those tend to be simpler, more straightforward issues that can be solved more quickly by a few minutes of thought (and more cheaply, too).
Hold on, I better write this down, this is good stuff..
It is slower than when I was just using Claude directly though.
[1] https://github.com/gsd-build/get-shit-done
Planning is good but get-shit-done just added too much planning in my opinion.
[1] https://github.com/gsd-build/gsd-2
Buy in bulk and resell. /s