Everyone is underestimating stickiness. The near-billion users OpenAI has are a real moat and might translate into a decent chunk of revenue.
My wife, for example, uses ChatGPT on a daily basis, but has found no reason to try anything else. There are no network effects for sure, but people have hundreds or thousands of conversations on these apps that can't be easily moved elsewhere. It's understandable that it would be hard to get the majority of these free users to pay for anything, and hence advertising seems a good bet. You couldn't have thought of a more contextual way of plugging a paid product.
I think OpenAI has a better chance of winning on the consumer side than anyone else. Of course, whether that matches up against hundreds of billions of dollars in capex remains to be seen.
In theory you can export your data from ChatGPT under Settings > Data Controls. In practice, I tried this recently and the download link was broken. A convenient bug, I must say.
> Everyone is actually underestimating stickiness.
I think you're underestimating how fickle consumers are, and how much their choices are based on fashion and emotion. A couple more of these, and OpenAI will find itself relegated to the kids' table with Grok and Perplexity. https://www.technologyreview.com/2025/08/15/1121900/gpt4o-gr...
> people have hundreds or thousands of conversations on these apps that can't be easily moved elsewhere.
I just asked it to build me a searchable indexed downloaded version of all my conversations. One shot, one html page, everything exported (json files).
I’m sure I could ask Claude to import it. I don’t see the moat.
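For what it's worth, the export this kind of prompt works from is just JSON. A minimal sketch of searching it yourself, without asking a model at all — note the structure assumed here (a list of conversations, each with a `title` and a `mapping` of message nodes) is based on exports I've seen and may change:

```python
import json

def load_conversations(path):
    """Load a ChatGPT data-export conversations.json file
    (assumed format: a list of conversation objects)."""
    with open(path) as f:
        return json.load(f)

def extract_text(conv):
    """Pull all message text out of one conversation's node mapping."""
    parts = []
    for node in conv.get("mapping", {}).values():
        msg = node.get("message") or {}
        content = msg.get("content") or {}
        for part in content.get("parts", []):
            if isinstance(part, str):
                parts.append(part)
    return " ".join(parts)

def search(conversations, term):
    """Return titles of conversations that mention the term."""
    term = term.lower()
    return [c.get("title", "(untitled)")
            for c in conversations
            if term in extract_text(c).lower()
            or term in c.get("title", "").lower()]

# Tiny inline sample mimicking the assumed export shape:
sample = [
    {"title": "Trip planning",
     "mapping": {"n1": {"message": {"content": {"parts": ["Find hotels in Lisbon"]}}}}},
    {"title": "Pasta recipe",
     "mapping": {"n1": {"message": {"content": {"parts": ["How long to boil rigatoni?"]}}}}},
]
print(search(sample, "hotels"))  # ['Trip planning']
```

Which is to say: the data is portable with or without the model's help, so it's hard to see a lock-in story here.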
So far I've not seen anyone complain that their conversations have gone missing. There's a GDPR-style export option that I've used a few times for my own data.
OpenAI is already building complex user models. And I mean super detailed user models - where you are from, what you do, what your most vulnerable weaknesses are, what you care about the most, and everything else. This is information even the world's largest advertising company would struggle to put together across its fragmented ecosystem (Gmail, Search, etc.), but OpenAI has all of it on a silver platter. And that scares me, because a lot of people use ChatGPT as a therapist. We know ads are coming because they've explicitly expressed that intent. Advertising requires good user models to work (so advertisers can efficiently target their audience), and it is the only way to prove ROI to the advertisers. "But OpenAI said they won't do targeted ads..." Remember, Google said "Don't be evil" once upon a time too.
That's OK, we use ChatGPT only for coding, so we should be good, right? Umm, no. They have already explicitly expressed the intention to take a percentage of your revenue if you ship something built with ChatGPT, so even the tech folks aren't safe.
"As intelligence moves into scientific research, drug discovery, energy systems, and financial modeling, new economic models will emerge. Licensing, IP-based agreements, and outcome-based pricing will share in the value created. That is how the internet evolved. Intelligence will follow the same path."
So yes, OpenAI has the best chance of anyone to win on the consumer side. But that's not necessarily a good thing (and the OpenAI fanboys will hate me for pointing this out).
Anecdata point: I canceled my ChatGPT pro subscription last year over some shitty thing Altman did at OpenAI and easily moved over to Claude. The only thing I took with me was the system prompt or whatever it's called, I couldn't care less about my conversation history. I'm planning to do the same thing with my Claude subscription if Anthropic kowtows to the Pentagon. These services are not sticky at all IMO.
This is the real question. Is she willing to pay $20 per month when Google's Gemini is free? Google can remain irrational longer than OAI can remain solvent.
Google's profits have been going up while 'giving away Gemini for free', so I don't think they're 'being irrational'; their unit economics apparently work.
I understand the underlying quote but not how/why it’s being used here. How is Google giving Gemini away for free to undercut OAI irrational? Anticompetitive, maybe.
Agree. And we don't even know if they're bleeding out doing it. Google is on more efficient hardware and they fully control their ecosystem. And that ecosystem can feed into and be fed by their other ecosystems. OAI just has LLMs.
I commute on the train and see students studying with it. I go for brunch on the weekend and see parents consulting it at the table with their infants. I'm at work, and colleagues are using it all day. I leave work and overhear a random woman smoking in the alleyway saying into her cellphone, "so I asked ChatGPT". It's mind-bogglingly pervasive; the last time something had such a seismic cultural impact was, I dunno, Facebook? And secondly, it's all one specific brand. I'm not encountering Copilot or Gemini in meatspace.
ChatGPT is generic (as in, no prior meaning attached, except for the few people in the world who understand what GPT stands for). It's simple - even a non-English speaker can say it easily, and it doesn't require one to be a native speaker to know how to pronounce it (this is a difficult concept for a native English speaker to grok).
ChatGPT is like "Jeep". My grandmother calls every SUV a jeep. But they're not all Jeeps. AI looks like ChatGPT, but people are driving all sorts of different AIs.
I would guess OAI has no moat or stickiness beyond what governments and private companies will do to keep it afloat through equity and circular financing. Good enough AI is all most need, and they need it at the cheapest cost basis possible with the most convenient access.
Google will probably win on most of these fronts unless a coalition is formed to actively fight Google at the business/government level. But, absent that, it will win out over OAI, and OAI will probably bleed to death trying to become profitable... whenever that happens. You'll likely see their talent and corresponding salaries shrink massively along this journey.
How many of those people are paying? I think many say “use ChatGPT” to mean any LLM. As you noted it seems you just see ChatGPT in the wild but that is anecdotal. It is certainly pervasive right now. But I know a lot of people currently switching to Gemini.
I personally prefer Claude models for all my work. If I were them I would be very worried. They are never giving us AGI, and I am skeptical they are worth $0.5 trillion. Their cash burn is insane. Once ads and price hikes come, people will migrate to companies that can still afford to subsidize (like Google).
Plus I heard they lowered projections recently? Sam honestly comes off as a grifter.
I'm very similar to the OP here - I always hear about ChatGPT and rarely anything else. Most people are definitely not paying, but of the few that are, outside of software developers, they are all paying for ChatGPT exclusively. I don't know of anyone paying for the basic chat versions of other AIs. A few developers pay for Claude and Gemini, but I know hundreds of people who talk of ChatGPT and no other AI - again, most not paying, though.
Outside of work I don't know anyone who pays for AI.
But I have noticed that everyone seems to be using ChatGPT as the generic term for AI. They will google something and then refer to the Gemini summary as "ChatGPT says...". I tried to find out what model/version one of my friends was using when he was talking about ChatGPT and it was "the free one that comes with Android"... So Gemini.
Gemini is nearly unusable thanks to “subsidies”. I honestly don’t see what the path is to these companies making any money short of massive price hikes, or electricity suddenly becoming free.
I actually encountered this today - one of a group I am planning a trip with posted some of the breathless nonsense that ChatGPT produced ("you're not picking a hotel, you're picking a group dynamic..." and other such textual diarrhea).
It turned out the only reason for ChatGPT was that it is free for small enough volume usage. My suggestion to see what Claude had to say instead was met with "huh, you have to pay for it?". It's not like these are people who can't afford $20 per month for a subscription, but it might be that these assistants aren't even worth that for typical "normie" use cases.
Is it anecdotal? The observation isn't _my_ experience using it, or of _my friends_. I have no influence over who I see in public using it. I know it's not exactly a scientific study but it's still pretty damn good as a random sample. If I went outside and saw the sky was dark, cloudy and my face got wet, would you tell me it was anecdotal evidence when I say it's raining out?
Yup this is just another case of the HN bubble. I polled a bunch of non technical friends recently who I know use AI on a daily basis. Out of 10+ maybe 2 had ever heard of Claude, and no one had any interest in trying it.
ChatGPT has become the AI verb, and in the consumer space it is not getting dethroned.
1) the opportunities for vertical integration are huge. Anthropic originally said they didn’t want to build IDEs, then realized the pivot to Claude Code was available to them. Likewise when one of these companies can gobble up Legal, Medical, etc why would they let companies like Harvey capture the margins?
2) OSS models are 6-12 months behind the frontier because of distillation. If labs close their models, the gap will widen. Once vertical integration kicks off, the distillation cost becomes higher, and the benefit of opening up generic APIs becomes lower.
I can imagine worlds where things don’t turn out this way, but I think folks are generally underrating the possibilities here.
The question is always about performance plateau. If LLM performance plateaus, then OSS models will catch up. If there isn’t a plateau, then I can simply ask the super intelligent AI to distill itself, or tell me how to build a clone.
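Since "distillation" is doing a lot of work in this thread: the basic mechanism (in the classic Hinton-style soft-label formulation) is just training a student to match the teacher's temperature-softened output distribution. A toy sketch of the loss in plain Python, with no actual training loop:

```python
import math

def softmax(logits, temperature=1.0):
    """Convert logits to probabilities at a given temperature."""
    scaled = [x / temperature for x in logits]
    m = max(scaled)  # subtract max for numerical stability
    exps = [math.exp(x - m) for x in scaled]
    total = sum(exps)
    return [e / total for e in exps]

def kl_divergence(p, q, eps=1e-12):
    """KL(p || q): how far the student's distribution q is from the teacher's p."""
    return sum(pi * math.log((pi + eps) / (qi + eps)) for pi, qi in zip(p, q))

def distillation_loss(teacher_logits, student_logits, temperature=2.0):
    """Soft-label distillation loss, scaled by T^2 so gradient
    magnitudes stay comparable across temperatures."""
    p = softmax(teacher_logits, temperature)
    q = softmax(student_logits, temperature)
    return (temperature ** 2) * kl_divergence(p, q)

# A student that matches the teacher has zero loss; a mismatched one doesn't.
teacher = [2.0, 0.5, -1.0]
print(distillation_loss(teacher, teacher))               # 0.0
print(distillation_loss(teacher, [0.0, 2.0, 0.0]) > 0)   # True
```

The catch for closed labs is that the "teacher" signal leaks through any API that returns outputs at scale, which is why closing the models only slows distillation rather than stopping it.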
It’s ironic: if the promise of AGI were realized, all knowledge companies, including AI companies, would become worthless.
To go vertical they’d need to illustrate the value-add, a problem that the vertical competitors already have. Why use Claude for Accountants at $300/month when regular Claude will do the same thing for much less? The stock answer is that Claude for Accountants keeps your data more secure and doesn’t train on it. But a) I think the enterprise consumer is much less likely to trust a model creator not to stick its hand in the cookie jar than a middleman who needs the trust to survive, and b) the vertical competitors typically don’t use the absolute most up-to-date models in their products anyway, so why not just go open-source and run everything in-house? 6 months is a long time in tech, but it’s the blink of an eye in most white-collar professions.
I speak native English and barebones high school Spanish. I recently visited Costa Rica and almost every time there was a language barrier issue (unknown word or phrase), the local folks opened ChatGPT, said what they were trying to say in Spanish and then had ChatGPT convert it to English. It was everywhere.
When OpenAI starts requiring a payment, or showing an ad before it starts translating, will they continue? Or will they use the Google Translate app, which can do this locally? (Or for that matter Gemini or Grok or whatever?)
I have done that at my home. My wife had called the maids over, and while they were there I needed to go to the restroom. I asked my wife, but she was struggling to communicate with them. It took me 3 seconds to realize ChatGPT could help. And it did.
Nice that ChatGPT does that; it's also true that Google Translate and other apps have had this functionality for a decade or more. I was getting live German translated on my phone in 2015 with no problems.
These sorts of doom articles are interesting in that they are from the perspective of tech company valuations. Why is this the important perspective?
For the humanity perspective, this doom is very optimistic. It says that these LLMs currently disrupting the platforms cannot themselves be the next platforms.
Maybe no one will have 'the ability to make people do something that they don't want to do' sort of power with this next stage in computing.
These very valid points apply to all companies trying to make money off of proprietary models, which means margins are going to collapse in a vicious price war that will make Uber vs Lyft seem tame.
As margins collapse capex will collapse. Unfortunately valuations have become so tied to AI hype any reduction in capex will signal maybe the hype has gotten ahead of itself, meaning valuations have gotten ahead of themselves. So capex keeps escalating.
None of this takes into account the hoarding effects at play with regards to GPU acquisition. It's really a dangerous situation the industry is caught in.
Companies used to hoard talent. Now they are hoarding compute, RAM, and GPUs.
DeepSeek showed that there are possibly less expensive ways to train, meaning the future eye-watering expenses may not happen.
Bigger models may not scale. The future may be federations of smaller expert models. ChatGPT X doesn't need to know everything about mental health; it just needs to recognize that the Sigmund von Shrink mental health model needs to answer some of my questions.
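The routing layer in that federated picture can be very thin. A toy sketch (the expert names and keyword matching are purely hypothetical here; a real router would be a small classifier model, not string matching):

```python
# Hypothetical expert registry: in practice each entry would be a
# handle to a separate, smaller fine-tuned model.
EXPERTS = {
    "mental_health": {"keywords": {"anxiety", "therapy", "stress", "depressed"}},
    "coding":        {"keywords": {"python", "bug", "compile", "function"}},
    "general":       {"keywords": set()},  # fallback expert
}

def route(query):
    """Pick the expert whose keyword set best overlaps the query."""
    words = set(query.lower().split())
    best, best_score = "general", 0
    for name, spec in EXPERTS.items():
        score = len(words & spec["keywords"])
        if score > best_score:
            best, best_score = name, score
    return best

print(route("I've been feeling a lot of stress and anxiety lately"))  # mental_health
print(route("why does my python function not compile"))               # coding
print(route("what's the weather like"))                               # general
```

The point is that the big frontier model's job shrinks to dispatch, and dispatch is cheap.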
OpenAI lost the race for nerds' hearts. In the latest benchmarks, OpenAI is simultaneously cheaper (like 50% less?) and scores higher in coding and tool-use benchmarks (GPT-5.3-Codex trounces Opus 4.6), yet all the coders want to marry Anthropic. I don't think OpenAI understands how to sell, if they even had a product to sell.
> The models have a very large user base, but very narrow engagement and stickiness, and no network effect or any other winner-takes-all effect so far that provides a clear path to turning that user base into something broader and durable.
I think this is clearly wrong. Users provide lots of data useful for making the models better and that is already being leveraged today. It seems like network effects are likely in the future too. And they have several ways to get stickiness including memory.
I keep hearing about how the app integrations will be where the AI value is and then I see the actual app integrations and they are between useless and mildly helpful.
From what I can see Anthropic's big bet is that they will solve computer use and be able to act as an autonomous agent. Not so sure how fast they will progress on that. OpenAI on the other hand - I have no idea what they are planning - all I'm reading is AI porn and ads.
Google seems to be lackluster at executing with Gemini but they are in the best position to win this whole thing - they have so much data (index of the web, youtube, maps) and so many ways to capitalize on the models - it's honestly shocking how bad they are at creating/monetizing AI products.
Google is doing a much better job integrating AI into existing products. Gemini CLI and such seem just like a way to keep the leading competitors humble (a la iOS vs android). They're also building AI tooling tailored to specific companies (like the Goldman thing just announced) and have the cloud infra to back it up. I really only see Anthropic and Google surviving in 10 years.
People underestimate the lead OAI has with their post-5.2 models. The author does not strike me as someone who closely follows the progress frontier labs make in the US and around the world.
It's a joint ignorance of how these frontier models get baked and what consumers want.
Many pundits think it's just a matter of scraping the internet and having a few ML scientists run ablation experiments to tune hyperparameters. That hasn't been true for over a year. The current requirements are more org-scale, more payoff from scale, more moat. The main legitimate competitive threat is adversarial distillation.
Many pundits also think that consumers don't want to pay a premium for small differences on the margin. That is very wrong-headed. I pay $200/month to a frontier lab because, even though it's only a few % higher in benchmark scores, it is 5x more useful on the margin.
Agreed, compare the frontier models from Google and OAI. It’s like night and day. Anyone who says “the tech has caught up” has not spent even one day using Gemini 3.1 to try and accomplish something complicated.
If you were forced to choose just one of all the competing players, which is "the one" you will use?
For me, the choice is ChatGPT - not for Codex or other fancy tooling, just the chat. That's not to say Claude Code or Cowork is less important, or that I like Codex over Claude Code.
Right now? Claude, so long as they don't fold to the Pentagon's demands. It's important to me that the company at least have a pretense of ethics. If they fold, I may just use open models via DDG – I don't find code assistants very useful for my workflow anyway.
Anthropic are making a very convincing play for business and "enterprise" customers - first with Claude Code and now with Cowork and especially Claude for Excel. The revenue growth they've announced has been extremely impressive over the past year.
Sometimes I like to imagine what this would be like if the technology had appeared 25 years ago.
First off, none of this open publishing of research. Everything would have been trade secrets.
Next, no interoperable JSON APIs; instead, binary APIs that are hard to integrate with and therefore sticky. Once you'd spent 3 or 4 months getting your MCP server set up, no way would you ever try to change to a different vendor!
The number of investors was much smaller, so odds are you wouldn't have seen these crazy high salaries, and you wouldn't have people running off to different companies left and right. (I know, .com boom, but the .com boom never saw $500k cash salaries...)
Imagine if Google hadn't published any papers about transformers or the attention paper had been an internal memo or heck just word2vec was only an internal library.
It has all been a net good for technological progress but not that good for the companies involved.
Could they have even trained the models 25 years ago? Wikipedia was nothing close to what it is today and I know folks here like to mourn the fall of the open web, but it's still orders of magnitude larger today than it was in 2001. YouTube, so many information stores that simply didn't exist then.
Maybe not 25, but IBM Watson beat humans at Jeopardy over 10 years ago. The technology has been there; the difference is the willingness to burn money on it in hopes of capturing exponential revenue from disrupting industries.
Obviously the costs have come down but if IBM felt like burning 100 Billion in 2012 I'm pretty sure they could have a similarly impressive chat bot. Just not sure how they would have ever recouped the revenue.
The book archives are a big one as well, all the journals that have been published digitally throughout the 2000s, and all the newspapers.
Though with some types of models (specifically voice) it has been discovered that a smaller high quality dataset is better than a giant dataset filled with errors.
The WH has said it hasn't approved any sales, but it's not clear China is buying, and it seems they are making good progress on their Huawei Ascend chips. If China is basically at parity on the full stack (silicon, framework, training, model), and it starts open-weighting frontier models at $0.xx/M tokens, then yeah, moat issues all around, one would imagine? Not surprised to see Anthropic complaining like this: https://www.anthropic.com/news/detecting-and-preventing-dist... - but I don't know how you go back from it at this point?
Not surprising, Nvidia's margin was just a huge incentive for companies/countries to develop their own solutions. You don't have to be 100% as good if you're 80% cheaper. It's unsurprising that this is being driven by Chinese companies/labs who often have a lot less funding than the US, and the big tech companies (Google, Microsoft, Amazon) who will benefit the most from having their own compute.
I've never believed in Nvidia's moat, and it seems OpenAI's moat (research) has gone and surprisingly is no longer a priority for them.
It seems like it’s really only China that’s pursuing the route of doing more with smaller/cheaper models, too, which also has a lot of potential to give the whole bubble a good shake.
To me it seems like the most obvious thing to do. More efficient models both make up for whatever you lost by using cheaper hardware and let you do more with the hardware you have than the competition can. By comparison the ever-growing-model strategy is a dead end.
Feels a bit crazy saying this but I can imagine a weird future where we have some outlawed Chinese tokens situation under some national security guise. No clue how that would work but nothing surprises me anymore.
> it seems they are making good progress on their Huawei Ascend chips
This is interesting to me. I thought that the reason for the DeepSeek delay was the insistence (by the politicians) on using Huawei chips [0]. But that was last August.
And even this information might not be very reliable, because both the US and Chinese governments wouldn't be happy about the fact that some models might happen to be trained in some "shadow datacenter" full of Nvidia GPUs.
Tech companies are one of the jewels in America's (USA's) crown. If we build a bunch of huge AI companies, rivals will probably continue to release open AI models which undermine the US's influence in the world.
This article is significantly better written than most anti-OpenAI/AI articles, and for that I am really grateful. I am generally an AI booster (lol), so I am happy to read well-considered thought pieces from people who disagree with me.
That being said...
> The one place where OpenAI does have a clear lead today is in the user base: it has 800-900m users. The trouble is, these are only ‘weekly active’ users: the vast majority even of people who already know what this is and know how to use it have not made it a daily habit. Only 5% of ChatGPT users are paying, and even US teens are much more likely to use this a few times a week or less than they are to use it multiple times a day.
This really props up the whole argument, because the author goes on to say that OpenAI's users are not really engaged. But is "only" 5% of users paying out of an 800-900M user base really so inconsequential? What percentage of Meta's users are paying? Google's? I would be curious to see the author dig deeper here, because I am skeptical that this is really as bad as the author suggests.
Moving on to another section:
> If the next step is those new experiences, who does that, and why would it be OpenAI? The entire tech industry is trying to invent the second step of generative AI experiences - how can you plan for it to be you? How do you compete with this chart - with every entrepreneur in Silicon Valley?
Er, are any of these startups training foundation models? No? Then maybe that is how you compete? I suppose the author would say that the foundation model isn't doing much for OpenAI's engagement metrics (and therefore revenue), but I am not sure I agree there.
Still, really good article. I think it really crystallizes the anti-OpenAI argument and it gives me a lot of interesting things to think about.
> What percentage of Meta's users are paying? Google's?
The advertiser-based business model for those companies makes your question/thought process here problematic for me. Historically speaking, Google and "Meta" (Facebook) were primarily advertising-provider companies. They provided billboards (space and time on the web page in front of an end-user) to people who were willing to buy that space and time on the billboard. The "free access" end-users would always end up seeing said billboards, which is how they ended up "paying" for the service.
So most of Meta/Google end-users were "paying" users. They were being subsidised by the advertising customers paying for the end-users (who were forced to view adverts). The end-users paid with interruptions to the service by adverts. [0]
In that context it feels a little like you're comparing apples to Dave's left foot, as OpenAI hasn't had that with advertising... historically [1].
--
[0]: yes ad-blockers, yes more diverse revenue income streams over the years like with phones, yes this is simplified yadayada
[1]: excluding government etc. ~bailouts~ investments as not the same as advertising subsidies, but you could argue it's doing the same thing
Yes -- but both Google and Meta didn't start off as an advertising company - they started off providing a service a lot of people liked, and then eventually added ads to it. My assumption (somewhat implicit, admittedly) is that there's no reason OpenAI couldn't do the same. I can understand why that might be controversial, though.
But honestly, if OpenAI can't figure out ads given all their data and ability, they deserve to fail. :P
But OpenAI has more serious competition than those others did when they were coming up. That puts pressure on them to figure out ads, and they dragged their feet getting started.
> But is "only" 5% of users paying of a 8-900M user base really so inconsequential? What percentage of Meta's users are paying? Google's? I would be curious to see the author dig deeper here, because I am skeptical that this is really as bad as the author suggests.
The difference is in the unit economics. OpenAI has to spend massively per free user it serves. The others you mentioned have SaaS economics where the marginal cost of onboarding and serving each non-paying user is essentially zero while also gaining money from these free users via advertising. Hence, the free users are actually a net positive rather than an endless money sink.
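A back-of-envelope sketch of why free users behave so differently under the two models. Every number below is a made-up placeholder, not OpenAI's actual cost structure; the point is the sign of the result, not the magnitudes:

```python
# Made-up illustrative numbers (assumptions, not reported figures).
free_users           = 800_000_000
queries_per_user_day = 3
tokens_per_query     = 2_000    # prompt + completion, assumed
cost_per_1m_tokens   = 2.00     # dollars of inference cost, assumed
ad_revenue_user_year = 10.0     # what an ad-funded firm might earn per user, assumed

# LLM model: every free query burns real compute.
inference_cost_year = (free_users * queries_per_user_day * 365
                       * tokens_per_query / 1_000_000 * cost_per_1m_tokens)

# Ad-funded model: near-zero marginal serving cost, positive revenue per free user.
ad_model_revenue_year = free_users * ad_revenue_user_year

print(f"LLM inference bill for free users: ${inference_cost_year / 1e9:.1f}B/yr")
print(f"Ad model revenue on the same base: ${ad_model_revenue_year / 1e9:.1f}B/yr")
```

Under these placeholder assumptions the free base is a multi-billion-dollar annual expense for an LLM provider and a multi-billion-dollar annual income stream for an ad platform, which is the whole asymmetry in one line.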
Keep also in mind that AI has always been, and will always be, a commodity. The moment you start forcing people to convert into paying customers is the moment they jump ship at scale.
This is confirmation bias. HN and other tech people are focusing on the programming aspect of AI more than anything else. The average user does not use it for that, and they don't care. ChatGPT became something like Kleenex.
Kleenex was exactly what I had in mind when reading other comments. And just like Kleenex, where people use whatever tissue they find and forget the word "tissue" even exists, ChatGPT seems to be becoming a genericized term that just means "AI chatbot."
Worth noting that it’s not a winner-takes all situation. There’s definitely space for differentiation.
Anthropic is in favor with developers and generally tech people, while OpenAI / Gemini are more commonly used by regular folks. And Grok, well, you know…
We have yet to see who’s winning in the “creative space”, probably OpenAI.
As these positionings crystallize, each company is likely going to double down on its users' communities, like Apple did when specifically targeting creative/artsy people, instead of cranking out general models that aren't significantly better at anything.
The main problem with OpenAI/Anthropic is that their only moat is their models, and it has been proven that you can clone a model through distillation. Although the performance is not exactly the same, it gets very close to the original.
> My wife, for example, uses ChatGPT on a daily basis, but has found no reason to try anything else.
Sure it's 'sticky' at least a little, but it's not a moat. A moat is a show stopper like they own you.
> I think you're underestimating how fickle consumers are, and how much their choices are based on fashion and emotion.
Ads might change that. If we know anything, nobody beats Google with ad based monetization. OAI is absolutely correct to be scared.
> I just asked it to build me a searchable indexed downloaded version of all my conversations. One shot, one html page, everything exported (json files).
Honest question: I have this issue a lot with AI claims. Nobody verifies the output.
It's not useless, although it used to be more useful than it is now.
"Intelligence will follow the same path."
https://openai.com/index/a-business-that-scales-with-the-val...
I wonder what percentage of its users know what the GPT stands for, or even thought about it for a second?
> It's simple - even a non-English speaker can say it easily, and doesn't require one to be native to know how to pronounce it
These features make for a good name.
> ChatGPT has become the AI verb, and in the consumer space it is not getting dethroned.
I would love to dunk on this or something, but the lesson is that it's all about distribution.
Sama is really good at that, and also.. gotta give props for a lot of forward thinking like the orb, which now makes a lot of sense to me.
1) the opportunities for vertical integration are huge. Anthropic originally said they didn’t want to build IDEs, then realized the pivot to Claude Code was available to them. Likewise when one of these companies can gobble up Legal, Medical, etc why would they let companies like Harvey capture the margins?
2) oss models are 6-12 months behind the frontier because of distillation. If labs close their models the gap will widen. Once vertical integration kicks off, the distillation cost becomes higher, and the benefit of opening up generic APIs becomes lower.
I can imagine worlds where things don’t turn out this way, but I think folks are generally underrating the possibilities here.
It’s ironic: if the promise of AGI were realized, all knowledge companies, including AI companies, would become worthless.
I'm just so surprised they'd use ChatGPT to do this, when it's just as easy (and perhaps faster) to use Google Translate.
Everyone, it turns out. Same with Google. Same with YouTube. Same with Instagram, and the rest of the web.
Once people become dependent on ChatGPT (as they already are), watching a 30-second ad in the middle of a session will become second nature.
From the humanity perspective, this doom is actually optimistic. It says that the LLMs currently disrupting the platforms cannot themselves become the next platforms.
Maybe no one will have 'the ability to make people do something that they don't want to do' sort of power with this next stage in computing.
Sounds good to me.
As margins collapse capex will collapse. Unfortunately valuations have become so tied to AI hype any reduction in capex will signal maybe the hype has gotten ahead of itself, meaning valuations have gotten ahead of themselves. So capex keeps escalating.
None of this takes into account the hoarding effects at play with regards to GPU acquisition. It's really a dangerous situation the industry is caught in.
Companies used to hoard talent. Now they are hoarding compute, RAM, and GPUs.
DeepSeek showed that there are possibly less expensive ways to train, meaning the future eye-watering expenses may not happen.
Bigger models may not scale. The future may be federations of smaller expert models. Chat GPTX doesn't need to know everything about mental health; it just needs to recognize that the Sigmund von Shrink mental health model needs to answer some of my questions.
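The "federation of experts" idea above can be sketched as a toy dispatcher. Everything here is illustrative (the expert names, keywords, and `route` function are made up), and a real system would route with a classifier or embedding similarity rather than keyword overlap:

```python
# Toy sketch of routing queries to small domain experts instead of one
# giant generalist model. All names here are hypothetical.
from dataclasses import dataclass, field


@dataclass
class Expert:
    name: str
    keywords: set = field(default_factory=set)


# A real router would be learned; keyword matching just shows the shape.
EXPERTS = [
    Expert("mental_health", {"anxiety", "therapy", "stress"}),
    Expert("coding", {"python", "bug", "compile"}),
    Expert("generalist"),  # fallback when nothing matches
]


def route(query: str) -> str:
    """Pick the first expert whose keywords overlap the query."""
    words = set(query.lower().split())
    for expert in EXPERTS:
        if expert.keywords & words:
            return expert.name
    return "generalist"


print(route("I need help managing my stress"))  # mental_health
```

The point is that the dispatcher can stay tiny and cheap; only the selected expert needs to be loaded and run.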
I think this is clearly wrong. Users provide lots of data useful for making the models better and that is already being leveraged today. It seems like network effects are likely in the future too. And they have several ways to get stickiness including memory.
From what I can see Anthropic's big bet is that they will solve computer use and be able to act as an autonomous agent. Not so sure how fast they will progress on that. OpenAI on the other hand - I have no idea what they are planning - all I'm reading is AI porn and ads.
Google seems to be lackluster at executing with Gemini but they are in the best position to win this whole thing - they have so much data (index of the web, youtube, maps) and so many ways to capitalize on the models - it's honestly shocking how bad they are at creating/monetizing AI products.
Many pundits think it's just a matter of scraping the internet and having a few ML scientists run ablation experiments to tune hyperparameters. That hasn't been true for over a year. The current requirements are more org-scale, more payoff from scale, more moat. The main legitimate competitive threat is adversarial distillation.
Many pundits also think that consumers don't want to pay a premium for small differences on the margin. That is very wrong-headed. I pay $200/month to a frontier lab because, even though it's only a few % higher in benchmark scores, it is 5x more useful on the margin.
For me, the choice is ChatGPT, not for Codex or other fancy tooling - just the chat. That's not to say Claude Code or Cowork is less important, or that I prefer Codex over Claude Code.
Personally I only see Google (Gemini), X (Grok) and the Chinese models having a chance to still be alive in 1-2 years.
First off, no open publishing of research. Everything would have been trade secrets.
Next off, no interoperable JSON APIs; instead, binary APIs that are hard to integrate with and therefore sticky. Once you'd spent 3 or 4 months getting your MCP server set up, no way would you ever try to change to a different vendor!
The number of investors was much smaller so odds are you wouldn't have seen these crazy high salaries and you wouldn't have people running off to different companies left and right. (I know, .com boom, but the .com boom never saw 500k cash salaries...)
Imagine if Google hadn't published any papers about transformers or the attention paper had been an internal memo or heck just word2vec was only an internal library.
It has all been a net good for technological progress but not that good for the companies involved.
Obviously the costs have come down, but if IBM had felt like burning $100 billion in 2012, I'm pretty sure they could have built a similarly impressive chat bot. Just not sure how they would have ever recouped the revenue.
Though with some types of models (specifically voice) it has been discovered that a smaller high quality dataset is better than a giant dataset filled with errors.
The WH has said it hasn't approved any sales, but it's not clear China is buying, and it seems they are making good progress on their Huawei Ascend chips. If China is basically at parity on the full stack (silicon, framework, training, model), and it starts open-weighting frontier models at $0.xx/M tokens, then yeah, moat issues all around, one would imagine? Not surprised to see Anthropic complaining like this: https://www.anthropic.com/news/detecting-and-preventing-dist... - but I don't know how you go back from it at this point?
I've never believed in Nvidia's moat, and it seems OpenAI's moat (research) has gone and surprisingly is no longer a priority for them.
To me it seems like the most obvious thing to do. More efficient models both make up for whatever you lost by using cheaper hardware and let you do more with the hardware you have than the competition can. By comparison the ever-growing-model strategy is a dead end.
Has anything changed in between?
[0]: https://www.reuters.com/world/china/deepseeks-launch-new-ai-...
(^edit: I don't know for certain that this is entirely accurate - edit again: found a Chinese source saying their image model is end-to-end Ascend, or at least, domestic: https://zhuanlan.zhihu.com/p/1994775762516080044 & https://www.guancha.cn/economy/2026_02_12_806895.shtml)
They've already found a better route. Buy it elsewhere e.g. in Singapore. Train their models there using Nvidia hardware.
Ship the result and fine tune back in China.
So "China" is and has always been buying it. No difference. The politics can keep raging.
That being said...
> The one place where OpenAI does have a clear lead today is in the user base: it has 8-900m users. The trouble is, these are only ‘weekly active’ users: the vast majority even of people who already know what this is and know how to use it have not made it a daily habit. Only 5% of ChatGPT users are paying, and even US teens are much more likely to use this a few times a week or less than they are to use it multiple times a day.
This really props up the whole argument, because the author goes on to say that OpenAI's users are not really engaged. But is "only" 5% of an 800-900M user base paying really so inconsequential? What percentage of Meta's users are paying? Google's? I would be curious to see the author dig deeper here, because I am skeptical that this is really as bad as the author suggests.
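As a back-of-the-envelope check on that 5% figure, here is the arithmetic, assuming (hypothetically) that every paying user is on a $20/month tier; the real mix of Plus/Pro/Team/Enterprise pricing is unknown, so treat this as a rough floor:

```python
# Rough subscription-revenue arithmetic implied by "5% of 800-900M users
# paying". The flat $20/month price is an assumption, not reported data.


def annual_subscription_revenue(users: float,
                                paying_share: float = 0.05,
                                price_per_month: float = 20.0) -> float:
    """Yearly revenue implied by a paying share of the user base."""
    return users * paying_share * price_per_month * 12


for users in (800e6, 900e6):
    rev = annual_subscription_revenue(users)
    print(f"{users / 1e6:.0f}M users -> ${rev / 1e9:.1f}B/yr")
```

Under those assumptions that is roughly $9.6-10.8B a year from subscriptions alone, which is not obviously "inconsequential".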
Moving on to another section:
> If the next step is those new experiences, who does that, and why would it be OpenAI? The entire tech industry is trying to invent the second step of generative AI experiences - how can you plan for it to be you? How do you compete with this chart - with every entrepreneur in Silicon Valley?
Er, are any of these startups training foundation models? No? Then maybe that is how you compete? I suppose the author would say that the foundation model isn't doing much for OpenAI's engagement metrics (and therefore revenue), but I am not sure I agree there.
Still, a really good article. I think it really crystallizes the anti-OpenAI argument and gives me a lot of interesting things to think about.
The advertiser-based business model of those companies makes your question/thought process here problematic for me. Historically speaking, Google and "Meta" (Facebook) were primarily advertising-provider companies. They provided billboards (space and time on the web page in front of an end-user) to people who were willing to buy that space and time on the billboard. The "free access" end-users would always end up seeing said billboards, which is how they ended up "paying" for the service.
So most of Meta/Google end-users were "paying" users. They were being subsidised by the advertising customers paying for the end-users (who were forced to view adverts). The end-users paid with interruption to the service by an advert. [0]
In that context it feels a little like you're comparing apples to Dave's left foot, as OpenAI hasn't had that with advertising... historically [1].
--
[0]: yes ad-blockers, yes more diverse revenue income streams over the years like with phones, yes this is simplified yadayada
[1]: excluding government etc. ~bailouts~ investments as not the same as advertising subsidies, but you could argue it's doing the same thing
But honestly, if OpenAI can't figure out ads given all their data and ability, they deserve to fail. :P
The difference is in the unit economics. OpenAI has to spend massively per free user it serves. The others you mentioned have SaaS economics where the marginal cost of onboarding and serving each non-paying user is essentially zero while also gaining money from these free users via advertising. Hence, the free users are actually a net positive rather than an endless money sink.
Keep also in mind that AI has always been, and will always be, a commodity. The moment you start forcing people to convert into paying customers is the moment they jump ship at scale.
Just something to keep in mind.
Anthropic is in favor with developers and generally tech people, while OpenAi / Gemini are more commonly used by regular folks. And Grok, well, you know…
We have yet to see who’s winning in the “creative space”, probably OpenAI.
As these positionings crystallize, each company is likely going to double down on its users’ communities, like Apple did when specifically targeting creative/artsy people, instead of cranking out general models that aren’t significantly better at anything.
Claude: Programmers
ChatGPT: LGBTQ/Liberals, with a lot of censorship
Grok: Joe Rogan