> If you haven’t spent at least $1,000 on tokens today per human engineer, your software factory has room for improvement
At that point, outside of FAANG and their salaries, you are spending more on AI than you are on your humans. And they consider that level of spend to be a metric in and of itself. I'm kinda shocked the rest of the article just glossed over that one. It seems to be a breakdown of the entire vision of AI-driven coding. I mean, sure, the vendors would love it if everyone's salary budget just got shifted over to their revenue, but such a world is absolutely not my goal.
This is an interesting point but if I may offer a different perspective:
Assuming 20 working days a month: that's 20k x 12 == 240k a year. So about a fresh grad's TC at FAANG.
Now I've worked with many junior to mid-junior level SDEs and sadly 80% do not do a better job than Claude. (I've also worked with staff level SDEs who write worse code than AI, but they usually offset that with domain knowledge and TL responsibilities.)
I do see AI transforming software engineering into even more of a pyramid, with very few humans on top.
Important too: a fully loaded salary costs the company far more than the salary the employee actually receives. That tips the balancing point towards roughly 120k salaries, which is well into the realm of non-FAANG compensation.
It would also depend on speed of execution: if you can do the same amount of work in 5 days by spending $5k on tokens, versus spending a month and $5k on a human, the math makes more sense.
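The break-even arithmetic in the comments above can be sketched out. Assuming the $1,000/day heuristic and a fully-loaded-cost multiplier of 2x (the multiplier is an illustrative assumption; the thread implies it by moving from $240k to $120k):

```python
# Back-of-envelope break-even: token spend vs. a fully loaded human salary.
# Numbers are illustrative, taken from the thread's own assumptions.

TOKEN_SPEND_PER_DAY = 1_000     # dollars/day per engineer, the quoted heuristic
WORKING_DAYS_PER_MONTH = 20
MONTHS_PER_YEAR = 12

annual_token_spend = TOKEN_SPEND_PER_DAY * WORKING_DAYS_PER_MONTH * MONTHS_PER_YEAR
print(annual_token_spend)       # 240000 -- roughly a fresh grad's FAANG TC

# A fully loaded employee often costs the company a multiple of base salary
# (benefits, payroll taxes, office, equipment). Assuming a 2x load factor,
# the break-even *base* salary is half the token spend:
LOAD_FACTOR = 2.0               # assumed multiplier, not from the thread
break_even_salary = annual_token_spend / LOAD_FACTOR
print(break_even_salary)        # 120000.0 -- well inside non-FAANG territory
```

The speed argument changes the picture further: if the agent delivers the same work in 5 days instead of 20, the effective comparison is $5k of tokens against a month of loaded salary.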
> That idea of treating scenarios as holdout sets—used to evaluate the software but not stored where the coding agents can see them—is fascinating. It imitates aggressive testing by an external QA team—an expensive but highly effective way of ensuring quality in traditional software.
This is one of the clearest takes I've seen that starts to get me to the point of possibly being able to trust code that I haven't reviewed.
The whole idea of letting an AI write tests was problematic because they're so focused on "success" that `assert True` becomes appealing. But orchestrating teams of agents that are incentivized to build, and teams of agents that are incentivized to find bugs and problematic tests, is fascinating.
I'm quite curious to see where this goes, and more motivated (and curious) than ever to start setting up my own agents.
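The holdout-scenario idea described above can be sketched in a few lines: the builder agent never sees the evaluation scenarios, and a separate evaluator runs them afterwards. This is a minimal toy illustration, not any vendor's actual API; all names (`builder_agent`, `evaluate`, `HOLDOUT_SCENARIOS`) are hypothetical, and the "agent" is a hardcoded stand-in for an LLM call:

```python
# Toy sketch of the "holdout scenarios" pattern: generated code is judged
# against tests that were never in the generating agent's context.

def builder_agent(spec: str) -> str:
    """Stand-in for a coding agent; a real one would call an LLM with `spec`."""
    # Hardcoded plausible output for the spec "write add(a, b)":
    return "def add(a, b):\n    return a + b\n"

# Kept entirely outside the builder's context -- the analogue of an
# external QA team's private test suite.
HOLDOUT_SCENARIOS = [
    ((2, 3), 5),
    ((-1, 1), 0),
    ((0, 0), 0),
]

def evaluate(source: str) -> bool:
    """Load the generated source and run it against the holdout scenarios."""
    namespace: dict = {}
    exec(source, namespace)
    fn = namespace["add"]
    return all(fn(*args) == expected for args, expected in HOLDOUT_SCENARIOS)

print(evaluate(builder_agent("write add(a, b)")))  # True
```

Because the builder cannot see the scenarios, `assert True`-style gaming stops working: the only way to pass is to actually satisfy the behavior the holdout set probes.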
Question for people who are already doing this: How much are you spending on tokens?
That line about spending $1,000 on tokens is pretty off-putting. For commercial teams it's an easy calculation. It's also depressing to think about what this means for open source: I sure can't afford to spend $1,000 a day supporting teams of agents to continue my open source work.
That's a genuine problem now. If you launch a new feature and your competition can ship their own copy a few hours later the competitive dynamics get really challenging!
My hunch is that the thing that's going to matter is network effects and other forms of soft lockin. Features alone won't cut it - you need to build something where value accumulates to your user over time in a way that discourages them from leaving.
I recently passed 40,000 but my Substack is free so it's not a revenue source for me. I haven't really looked at who they are - at some point it would be interesting to export the CSV of the subscribers and count by domains, I guess.
My content revenue comes from ads on my blog via https://www.ethicalads.io/ - rarely more than $1,000 in a given month - and sponsors on GitHub: https://github.com/sponsors/simonw - which is adding up to quite good money now. Those people get my sponsors-only monthly newsletter which looks like this: https://gist.github.com/simonw/13e595a236218afce002e9aeafd75... - it's effectively the edited highlights from my blog because a lot of people are too busy to read everything I put out there!
And it might be that tokens will become cheaper.
I wonder what the security teams at companies that use StrongDM will think about this.
I try to keep my disclosures updated on the about page of my blog: https://simonwillison.net/about/#disclosures