I've sort of lost some respect for Ed that I had early on in the hype cycle - he's still right about some things, but I can see him slowly and subtly retreating from his strong position, held even a few months ago, that these things will never ever be useful for anything and it's all a scam because they don't actually do anything at all except burn money. He would say it like 8 times per monologue. I remember one podcast maybe ~6 months ago where he brought a developer skeptic on and was trying to get him to say it wasn't actually useful for coding, and the dev was like "maybe not as advertised, but I definitely use it and it is useful to me", and he pivoted off the topic very quickly.
It seems he realizes he was wrong about that and has pivoted slowly to, "well, maybe they work sometimes, but the cost isn't justified." Which is a reasonable question! I just find his style of never admitting when he is wrong off-putting, along with the way he presents things as absolute fact when he's guessing like the rest of us. He was right about a lot, wrong about a lot; it's okay to admit that, and I don't think his fan base would care.
There are a few major problems with the article. The most obvious is that frontier labs are not charging remotely close to the cost of tokens; afaik most estimates put inference margins north of 80%. As a reference, providers are profitably serving Kimi K2.6 for $4/1Mtok out. Is that as good as Opus? No, but it's probably at least Sonnet level, so that's ~4x cheaper than Sonnet while still being profitable to serve on the margin. So you aren't plausibly getting into actual subsidization territory until your usage is over 5:1 against the nameplate token cost of your subscription.
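To make that threshold concrete, here's the arithmetic as a quick Python sketch (the ~80% margin is the guess above, not a published figure):

```python
# If the nameplate price carries an ~80% margin, a flat subscription is
# only truly subsidized once nameplate-priced usage exceeds
# 1 / (1 - margin) times what the subscriber pays.

margin = 0.80
threshold = 1 / (1 - margin)   # nameplate-usage : subscription-price ratio
print(f"subsidy kicks in above {threshold:.0f}:1 usage-to-price")  # 5:1
```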
How many tokens can you realistically burn through in one chat session? Opus and many other frontier models do maybe 60 tok/s, so less than 250k tok/hr out. On input you can use more, but in most cases cache reads are 5-10x cheaper than fresh input. Say you average 500k tok in, 90% cached, per request. That amounts to 100-150k tok in fresh-input-equivalent costs, which in most cases is ~20-30k tok in output-equivalent costs. Do a request every minute and that's a total of about 1.5-2M tok/hr. At API prices that's $50/hr for Opus, but really it probably only costs Anthropic $10/hr to serve that.
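A rough sketch of that cost model in Python, using the assumed numbers above (throughput, cache ratios, and a ~$25/Mtok nameplate output price are all guesses, not quoted prices):

```python
# Hourly cost model for heavy agentic use, per the parent's assumptions:
# 60 tok/s out, 500k-token requests, 90% cache hits, cache ~5x cheaper
# than fresh input, input ~1/5 the output price, one request per minute.

OUT_TOK_PER_SEC = 60
CTX_PER_REQUEST = 500_000      # input tokens per request
CACHE_HIT = 0.90               # fraction of input served from cache
CACHE_DISCOUNT = 5             # cached input is ~5x cheaper
IN_TO_OUT_PRICE = 1 / 5        # input price as a fraction of output price
REQUESTS_PER_HR = 60
OUT_PRICE_PER_MTOK = 25.0      # assumed nameplate output price, $/Mtok

fresh = CTX_PER_REQUEST * (1 - CACHE_HIT)
cached = CTX_PER_REQUEST * CACHE_HIT / CACHE_DISCOUNT
input_equiv = fresh + cached                   # ~140k tok per request
output_equiv = input_equiv * IN_TO_OUT_PRICE   # ~28k tok per request

hourly_out = OUT_TOK_PER_SEC * 3600            # ~216k real output tokens
total_equiv = hourly_out + output_equiv * REQUESTS_PER_HR  # ~1.9M

print(f"{total_equiv/1e6:.1f} Mtok output-equivalent/hr, "
      f"${total_equiv/1e6 * OUT_PRICE_PER_MTOK:.0f}/hr at nameplate")
# -> ~1.9 Mtok/hr, ~$47/hr; at ~80% margin, serving cost is ~$10/hr
```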
That said, even if a developer is burning $50/hr, many, many employees at large companies cost more than $100k/yr to employ, all costs considered, so making them say 20-30% more productive can easily make that worth it for most. If the labs ultimately shave their margins to more like 20-30%, you'd have ~$15/hr in costs to use the services, which is roughly $30k/yr, and nearly every white collar job costs way more than that to staff. If your salary is 80k, you probably cost the company 200k all-in, so making you 15% more productive offsets the $15/hr cost.
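The break-even claim as a sketch, with the same assumed figures ($15/hr tool cost, ~2000 working hours a year, $200k fully-loaded employee cost):

```python
# Break-even productivity gain for the tool to pay for itself.

tool_cost_per_year = 15 * 2000     # ~$30k/yr at $15/hr
employee_cost = 200_000            # salary + overhead, all-in (assumed)
breakeven_gain = tool_cost_per_year / employee_cost
print(f"break-even productivity gain: {breakeven_gain:.0%}")  # 15%
```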
So first party providers are not in a horrifying position or anything from a subsidization standpoint. The people in bad shape are Cursor and Perplexity, who don't have frontier models and are dependent on the open source community, which is typically 6-12 months behind the frontier. They have to pay full-freight API costs, at 80% margin to the big boys, to serve their harnesses, which is indeed untenable; they'll have to either force users onto open source and/or in-house models they can serve at cost, or charge vastly more.
Gemini, Claude, and ChatGPT first-party services like Antigravity, Codex, and Claude Code are not in serious trouble though.
It's not even a fixed cost per token (even though it's billed that way, and that's still miles better than fixed-price all-you-can-eat). You're incurring a cost that's proportional to generated tokens times the context for each (plus the prefill cost for any uncached input), so the expense grows quadratically with your average generated context.
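A toy illustration of that quadratic growth (a simplified model that ignores prefill batching and cache effects):

```python
# Attention cost per generated token scales with the current context,
# so total compute over a session grows roughly quadratically with
# generated length once it dominates the starting context.

def session_compute(context_start: int, generated: int) -> int:
    # Sum of the context length at each generation step.
    return sum(context_start + t for t in range(generated))

for n in (1_000, 10_000, 100_000):
    print(n, session_compute(context_start=20_000, generated=n))
# Generating 10x the tokens costs far more than 10x the compute.
```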
This all becomes extremely visible when trying to do agentic coding with local language models - you quickly realize that controlling context length and model size is just as important as avoiding wasted effort. The real scam is not AI Q&A a la ChatGPT - that's actually quite viable, though marginally less so as conversations grow longer. It's agentic coding with SOTA models and huge contexts.

> How many tokens can you realistically burn through in one chat session?

I've used single digit billions in a couple days, FWIW.

This seems to be the lynchpin of your argument.
It makes me wonder if I have been living under a rock, because I have never heard of frontier labs making money. AFAIK all AI firms are simply burning money to acquire customers at this stage. Is this wrong?
>It makes me wonder if I have been living under a rock, because I have never heard of frontier labs making money.
You're confusing the profit from the marginal token and overall profit. The comment you're replying to is calculating that AI labs are probably making a substantial profit per paid token. It's just that so far that profit has not been able to overcome the ongoing R&D and capex costs.
do you think per token prices will go up or down in the long term? will the price per task trend down or up?

what about the price of human labor?

If the price of everything went down, it wouldn't be too concerning, and everybody would be on board with the "beauty" of it.
What seems to actually be happening for white collar workers is that the price they can charge for their labor is dropping, but the price of their expenses (housing, food, gas) continues to rise.
> That said, even if a developer is burning $50/hr, many, many employees at large companies cost more than $100k/yr to employ, all costs considered, so making them say 20-30% more productive can easily make that worth it for most. If the labs ultimately shave their margins to more like 20-30%, you'd have ~$15/hr in costs to use the services, which is roughly $30k/yr, and nearly every white collar job costs way more than that to staff. If your salary is 80k, you probably cost the company 200k all-in, so making you 15% more productive offsets the $15/hr cost.
Nobody, including the linked article, is making the argument that this can never be profitable. People are saying "there is no way this admittedly quite interesting tool is going to be able to make back all of this money", and I think they are completely right to say that.
You can absolutely make money with this stuff, just not at this scale. The buildout for this shit has been certifiably crazy and a number of the involved firms are overleveraged for tens and even hundreds of billions of dollars.
How in the sweet fuck are you paying that off, plus giving investors dividends, selling this at $15/hour/user??? That math does not math. A quick google says there are between 1.5 and 4.4 million developers in the US alone, let's say it's 5 million, to be generous, and each of them is subbed to this for 8 hours per day, continuously. That's 600 million per year in revenue. If you took ALL that revenue, and put it towards paying down this debt, not leaving any for employee salaries, upkeep, ongoing development, it would take DECADES to pay down what OpenAI already owes.
And yes, I'm sticking strictly to code, because that's the only thing I've seen it be really good at. Are we really proposing that every knowledge worker on earth, and every manager of such workers, is going to have an autonomous agent running all the time!? To do what, make sure they don't have to read or write email? Even just that example brings in a fucking mess of legal, compliance, and security violations, because LLMs are not intelligent and are not capable of being properly secured.
Like I'm sorry, I cannot take this industry seriously when even the most basic back-of-napkin math is saying, nay, screaming from the rooftops that they are FUCKED.
> selling this at $15/hour/user??? That math does not math. A quick google says there are between 1.5 and 4.4 million developers in the US alone, let's say it's 5 million, to be generous, and each of them is subbed to this for 8 hours per day, continuously. That's 600 million per year in revenue
That math is not mathing. $15/hour/user, with 5M devs, 8 hours a day, and 240 working days per year, that is $144B in revenue.

Of course people don't work every day, but even with European-level holidays that number is off by a factor of 240 or so.
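The corrected arithmetic, for the record:

```python
# Redoing the thread's numbers: 5M devs x $15/hr x 8 hr/day.

devs, rate, hours, workdays = 5_000_000, 15, 8, 240
per_day = devs * rate * hours        # $600M per DAY, not per year
per_year = per_day * workdays        # $144B per year
print(f"${per_day/1e6:.0f}M/day, ${per_year/1e9:.0f}B/yr")
```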
Quite right, honestly not sure how I fucked that up so bad but I'll own it. Okay so all we need is every coder + 0.6 million more or so in the United States, subscribed to this for 8 hours a day, and the business model can work.
That still feels incredibly optimistic given how split the community at large seems to be about how good this tech is, and it assumes all those developers work for firms large enough to pay for all of that.

However, we are still very much in back-of-napkin territory. We haven't even gotten into what it costs to provide these services: how much it's going to cost to build all these datacenters, how much electricity and water they're going to rip through, their own employees and basic overhead, and all the rest. So IMO we've now elevated it from "hopeless" to "this could work if a whole lot of other things line up really well."
According to your math, that's $600 million per day.

Yes, the GP wrote the wrong unit there. That supports his conclusion that the pay-off would take decades; if it were actually per year, it would take several centuries.
Reading this piece, I'm reminded of a podcast I heard some years ago interviewing an early Google marketing employee about the economics of Google Search. They said they'd run surveys and determined that the average user would get something like $20/year of value from search, so that was the most they could realistically charge for it. Meanwhile, they could make something like $500/user in Q4 alone from advertising. So, of course, advertising.
I just don't think that LLM business models can survive the allure of advertising dollars, any more than Search could, or TV, or Radio, or Movies. Ignoring the talk of Copilot putting ads into pull requests, there is just no way that publicly hosted LLMs will not end up inserting ads into the output.

This looks like what I remember: https://freakonomics.com/podcast/is-google-getting-worse/
The output won't be read by humans (and increasingly this is the case in my own use) so I don't see how that works. If the output itself will be directed by the highest bidder, that doesn't work. Or if the output influences the agent's direction, that doesn't work either.
The entire basis of this article is that generating tokens is a variable cost and that that cost will not decrease over time.
> On an economic basis, a monthly subscription only makes sense with relatively static costs.
Running a data center is a fixed expense. Whether or not people use that data center to its capacity doesn't change how much the operator pays (electricity factors into this, since a GPU running at 100% draws more watts than an idle one, but it doesn't move the needle much relative to the other fixed costs of a data center).
> They also assumed, I imagine, that the cost of tokens would come down over time, versus what actually happened — while prices for some models might have come down, newer “reasoning” models burn way more tokens, which means the cost of inference has, somehow, gotten higher over time.
This is backwards. When the cost of something goes down, people use it more. This is basic supply and demand. Inference has gotten cheaper already, and will continue to do so.
Companies subsidizing costs for growth happens all the time. Yes, switching to usage-based pricing instead of subscriptions sucks for customers, but enterprises will continue to pay.

I wonder what the rough costs of a data center look like over the lifetime of one GPU generation?

10% building
60% GPU
30% power

I haven't gone looking for that information, but I haven't run across it either.
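A toy model of how that plays out if the guessed 10/60/30 split were roughly right; the hardware price, power cost, and throughput below are placeholders, not real figures:

```python
# Why utilization dominates per-token cost when most expenses are fixed.

CAPEX_PER_GPU = 40_000     # hypothetical GPU + building share, $
LIFETIME_YEARS = 4
POWER_COST_PER_HR = 0.50   # hypothetical power + cooling at full load
                           # (~30% of lifetime cost, matching the guess)
TOK_PER_SEC = 5_000        # hypothetical batched throughput per GPU

def cost_per_mtok(utilization: float) -> float:
    hours = LIFETIME_YEARS * 365 * 24
    fixed_per_hr = CAPEX_PER_GPU / hours           # paid whether used or not
    variable_per_hr = POWER_COST_PER_HR * utilization
    tokens_per_hr = TOK_PER_SEC * 3600 * utilization
    return (fixed_per_hr + variable_per_hr) / tokens_per_hr * 1e6

for u in (0.2, 0.5, 0.9):
    print(f"utilization {u:.0%}: ${cost_per_mtok(u):.2f}/Mtok")
# Fixed capex is spread over however many tokens you actually serve,
# so low utilization sharply raises the cost per token.
```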
>At some point, the incredible, toxic burn-rate of generative AI is going to catch up with them, which in turn will lead to price increases, or companies releasing new products and features with wildly onerous rates (..) that will make even stalwart enterprise customers with budget to burn unable to justify the expense.
I pray this happens soon, but I feel I've been hearing some version of it for a while.
The only reason it hasn't is the sheer amount of credit being thrown at this tech. Both that credit and the valuations of the firms in question are stratospheric; the whole thing is over-hyped and over-valued.
This tech has uses. It has quite a lot of them in fact. However there is no usage of ChatGPT or Claude that makes OpenAI or Anthropic worth anything fucking close to what they're valued at right now, and both firms are scrambling to figure out how to get down from the top of the AI house of cards without detonating in the process.
Meanwhile DeepSeek is coming out with more capable models that run on far less onerous hardware, with far lower compute requirements, and that do basically exactly what the vast majority of users actually want.
This is going to be a financial bloodbath. Not for anyone actually responsible for it, of course, they'll be fine. It'll be everyone else getting soaked which is the only reason I give two shits.
I think there's another route this goes. At $7k a year or more per eng in token use, I think it's very reasonable to buy engineers machines with obscene GPUs and RAM and run models locally. And if it doesn't make sense now, someone will figure it out and save companies $10k+/eng over 3 years.
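The back-of-napkin version of that trade-off, using the thread's own numbers as assumptions:

```python
# API token spend vs a GPU/RAM-heavy workstation amortized over 3 years.
# All figures are the thread's assumptions, not quotes.

api_spend_per_year = 7_000
workstation_cost = 10_000     # hypothetical high-end local build
years = 3

api_total = api_spend_per_year * years       # $21k over 3 years
print(f"API: ${api_total:,} vs local: ${workstation_cost:,} "
      f"-> saves ${api_total - workstation_cost:,}/eng over {years} yrs")
```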
The general problem the average user has with a metered instead of provisioned billing model for computer services is that you can't easily control for cost overruns. From the old days of customers getting stung for hosting costs when slashdotted or DoSed, to last decade's microservice shock horror of the CI retry loop that burns money overnight, to today's AI where you basically have no idea how efficient the agent will be while it ponders your question: you are just setting yourself up for disappointment, cost overruns, and a feeling that you're not getting the value for money you got last week.
>The general problem the average user has with a metered instead of provisioned billing model for computer services is that you can't easily control for cost overruns.
Is this an actual issue aside from people letting their autonomous agents run overnight?
I can speak for myself. Sometimes my session starts out well and I get the AI to cruise to 80%. But then gains after that seem impossible, what was built steadily unravels, and then I get the "compacting conversation" message and realise that I've just spent a lot of money on nothing.
I read that and I found it unconvincing. KP is correct that EZ is, by now, emotionally and perhaps ideologically fixated on AI's approaching reckoning, but that's just KP psychologizing EZ, which is neither fruitful nor relevant to consider.
EZ might have incautiously and incorrectly called the peak several times, but his newsletter is nearly always stacked with citations and insights that, at least to my cursory but frequent inspection, pan out.
His arguments have evolved over time, but what of it? That just shows he's not the dogmatist the author wants him to be. Discourse evolves, get over it.
2026 Zitron has a good sense of the enormous financial complexity and volume AI requires to realize at this scale, and his basic point is that it isn't sustainable in the medium term.

He is self-evidently correct.
- Reproduce academic papers
- Put coding projects online for me so I can share them with friends
- Determine which books in a set are missing from the school library and find where they’re cheapest online
- Figure out which soccer club the team I see practicing at the local rec center belongs to and how to register my son
- Design a bunch of robot-themed handwriting activities for a kindergartner who needs to practice making his uppercase and lowercase letters distinct
I'm sorry, but telling me that this is what AI can do is a sad state of affairs. Like, this is Google-level stuff.
I thought this burning of cash was all in service of the exponential growth we saw in the last 6 years.

They went from GPT-2, a text-only model with goldfish-esque memory and an 8th-grade reading level, to what we have today: GPT-5, with multimodality, a token window encompassing an encyclopedia, and doctorate/masters-level mastery of major subjects.

The economics are probably a bet on this exponential growth continuing; if it fails, the cash burns.
It makes sense if you account for the cost of intelligence getting cheaper every year. For most models, the cost per unit of intelligence is falling fast: we get better hardware, architectures, training techniques, inference optimizations, and caching, and all those improvements add up. In early 2022 costs were dropping ~10x annually; now it's closer to 2x-5x annually. The cost is still dropping, whereas Uber could only get its costs down by so much.
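Compounding those rates out, just to illustrate (the $10/Mtok starting point is arbitrary):

```python
# Projecting cost per unit of intelligence under 2x-5x annual declines.

start = 10.0  # $/Mtok, placeholder starting price
for factor in (2, 5):
    trajectory = [start / factor**year for year in range(4)]
    print(f"{factor}x/yr: " + ", ".join(f"${p:.2f}" for p in trajectory))
# Even at the slow end, three years of 2x declines cut costs ~8x.
```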
Customer: “I don’t want to pay more than $100/mo for my website”
Developer: “What are your goals?”
Customer: “1M daily visits, 1,000 monthly signups.”
And we've spent the past 25 years offering serverless compute, auto-scaling, pay-as-you-go for AWS and Internet infrastructure. And the economics are still a hard sell.
Do we know the breakdown of revenue from API vs subscriptions for OAI/Anthropic? That seems very relevant, since this entire article seems to be on the premise that users are only willing to pay for a subsidized subscription and would never pay the 'true' token cost.
The internet seems to be saying that 70%+ of Anthropic revenue is per-token metered API, which would largely invalidate the article, but I can't find a solid source.
Yeah. And weird pricing seems like it's winding down.
It's interesting to compare it to electricity. Basically Anthropic was selling a flat fee electricity subscription, and when someone started connecting expensive washing machines (OpenClaw) to their subscriptions, instead of changing the pricing model, they banned washing machines...
I wonder if we will get to "electricity"-style pricing for AI. What makes electricity predictable is relatively constant average usage over time, plus a manageable price. I'm not about to buy electric house heating on a whim, and I manage my electricity spending within some bounds.

With AI the problem is that we are only now getting to useful AI, and for now it's still too expensive to be useful, so they subsidize until they can stabilize at a "cheap enough and smart enough" level. But it feels like that's still 2 years away, while they are stopping the subsidies now. It will be interesting.
>Basically Anthropic was selling a flat fee electricity subscription
No? It was flat, but with ambiguously stated limits (e.g. 5x, 10x, 20x). They were discriminating on how the "electricity" was used, but that's not much different from how power companies have different rates for residential and industrial users.
Even now they are insanely ambiguous about their usage limits. As far as I know, they don't openly disclose them anywhere, so saying "5x increase" is utterly meaningless, alongside "20x" or "10x" or whatnot, because we don't know what "x" is.
The move from "the subscription model for AI isn't working given these parameters" to "a subscription model for AI can never work" to "the model was deliberately deceptive" to "it's a fucking ripoff" is not logical. AI companies are feeling the need to get hold of spiraling costs by increasing prices and limitations. Inference hasn't gotten cheap enough fast enough, and for some reason they feel they can't wait longer. That doesn't mean a subscription service can't work: only that it will be expensive, maybe vastly so, and will need tiers based on usage, with some fluidity for users to move between tiers in a given month. The model is something like HP's "instant ink" service.

Sure, there's a question whether the moves companies are making now are worth the cost in the eyes of customers. But that's a question of economics and timing, not a fundamental blow to monthly subscriptions as a model. The article doesn't deal with these considerations fairly; it's too much of a rant, with conspiracy theories thrown in.
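A sketch of what that "instant ink"-style fluid tiering could look like; tier names, prices, and quotas here are made up for illustration:

```python
# Monthly tiers with automatic bump-up when usage outgrows the quota.

TIERS = [  # (name, monthly_price_usd, included_mtok)
    ("basic", 20, 5),
    ("pro", 100, 40),
    ("max", 400, 250),
]

def bill(used_mtok: float) -> tuple[str, int]:
    """Charge the cheapest tier whose quota covers this month's usage."""
    for name, price, quota in TIERS:
        if used_mtok <= quota:
            return name, price
    return TIERS[-1][0], TIERS[-1][1]  # cap at the top tier

print(bill(3))    # ('basic', 20)
print(bill(60))   # ('max', 400)
```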
I'm just flabbergasted at the massively inefficient usage of tokens. What are people doing to spend $500/day in tokens? I just don't understand what you could possibly be doing that wouldn't be complete spaghetti at the end if you run something in an auto-loop.
Using Claude Code with Opus 4.7 and xhigh effort for a few hours will definitely cost hundreds of USD.

I am not sure if you would call Claude Code "an auto-loop", but you don't need to be running something crazy like Gas Town to spend a lot of tokens with Claude.

1) They're lying
2) Status signalling

[0]: https://www.wheresyoured.at/why-are-we-still-doing-this/
I am a paying subscriber to Ed Zitron and I enjoy his writing a lot. He should at some point admit that not everything is bullshit and there is definitely a business model to it. It is fun to read, though
He has a fun writing style but has so many willful errors, and is so committed to one point of view regardless of the facts, that his writing seems kind of worthless.
I soured on him when he could not calculate cumulative revenue on an exponential curve, ignored everyone who showed him how to calculate it, and then kept writing that Anthropic’s revenue numbers are fake based on his inability to do math.
It’s too bad because any heavily hyped industry needs good critics (think Ida Tarbell to Rockefeller) but they should be honest critics, and he’s not, which really undermines not only his but others’ criticism of the industry.
It's good to have contrarian viewpoints, but Ed Zitron is so blinded by his AI hate that his articles should be treated not just with skepticism, but heavy suspicion.
meh - by this logic, every new tech and startup ever is a "scam"
The truth is that the AI companies are gambling that inference cost will continue following a hyper version of Moore's Law, e.g. Google TurboQuant.
The countervailing thesis is that frontier models are consuming more and more compute.
The deepest truth: you often don't need a frontier model to get commercially acceptable results from AI. So bring on the true pricing! I'll just switch to a model that's financially sustainable.
WeWork comes to mind. The math is fairly easy if we know what a company like OpenAI's datacenter commitments are, what their subscription and token revenue is right now, and what their operating costs are. This is very basic, and if you had that info you would know exactly whether we are in a bubble or not. Waiting for the S-1s...
Zitron misunderstands the economics of models. Inference costs have dropped 99% in less than 2 years. Models are being commoditized faster than any technology in history.
A $20 subscription 2 years ago is not providing the same level of intelligence you're getting today.
Every major lab knows open source models are 6 months behind (See Google's "We have no moat") and none of them plan to make money on inference. Companies are subsidizing users to create moats that persist when models are essentially free for most everyday use.