If you think you need to spend $100B, does using a third-party cloud provider still make sense? It doesn’t matter what sweet deal Amazon is pitching—in that scenario, you’d want to own your stack. Especially in a hyper-competitive field like this, where margins are going to matter a lot soon.
It feels like these hyperscalers are just raising as much as they can while giving extremely rosy projections, because sooner or later the peak is going to be reached (if that hasn’t happened already).
The problem is that at that scale, the alternative is building your own data centers. You'd probably want at least 2 in the US, 2 in Europe, 2 in Asia, maybe 1 in Africa and 1 in LATAM. So 8-10, and you need at least half of them ready "on time."
What does "on time" mean? You'll need to negotiate with local authorities, some friendly, some not. Data centers aren't exactly popular neighbors these days. Then negotiate with the local power utility. Fingers crossed the political landscape doesn't shift and your CEO doesn't sign a contract with an army using your product to pick bombing targets, because you'll watch those permits evaporate fast.
Then there's sourcing: CPUs, GPUs, memory, networking. You need all of it. Did you know the lead time for an industrial power transformer is 5+ years? Don't get me started on the water treatment pumps and filters you can't even get permitted without. What will you do in the meantime? You surely aren't gonna get preferential treatment from AWS / Google / ... if they know you are moving away anyway. Your competition will.
The risk and complexity are just too big. AI/LLM is already an incredibly complex and brittle environment with huge competition. Getting distracted building data centers isn't enticing for these companies, it's a death sentence.
For AI inference you don't need to geographically distribute your data centers. Latency, throughput, and routes don't matter here. When it's 10 seconds for the first token and then a 1KB/sec streamed response, whatever is fine. You can serve Australia from the US and it'll barely matter. You can find a spot far outside populated areas with cheap power, available water, and friendly leadership, then put all of your data centers there. If you're worried about major disasters, you can pick a second city. You definitely don't need a data center in every continent.
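A rough back-of-envelope for that claim; a minimal sketch where every number (time-to-first-token, response size, stream rate, RTTs) is an illustrative assumption:

```python
# Back-of-envelope: how much does intercontinental routing add to a
# chat-style LLM response? All numbers are illustrative assumptions.

ttft_s = 10.0          # assumed time-to-first-token at the model
response_kb = 30.0     # assumed total response size
stream_kb_per_s = 1.0  # assumed streaming rate (~1 KB/sec of tokens)

def total_time(extra_rtt_s: float) -> float:
    """Total wall-clock time for one response, given added network RTT."""
    return extra_rtt_s + ttft_s + response_kb / stream_kb_per_s

local = total_time(0.03)    # ~30 ms RTT to a nearby region
faraway = total_time(0.20)  # ~200 ms RTT, e.g. Australia <-> US

print(f"local: {local:.2f}s, far: {faraway:.2f}s, "
      f"penalty: {100 * (faraway - local) / local:.2f}%")
# ~0.4% slower: the model, not the speed of light, dominates.
```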
You're not wrong about the rest but no AI company would ever build a data center in every continent for this, even if they were prepared to build data centers. AI inference isn't like general purpose hosting.
They want it, sure. Customers want everything. In this thought experiment, you're Anthropic, not the customer. You're making a choice that's best for Anthropic. Will Anthropic lose customers because the latency is higher? No way. Customers want low cost and lots of usage more than they want low latency. In a cutthroat race to the bottom, there's no room to "give away" massively expensive freebies like a data center near every population center when the customer doesn't value those extras with actual money. It's the same reason we all tolerate the relatively slow batched token rate--the batching dramatically lowers the cost, and we need low cost inference more than we want low latency. If the cost goes up we'll actually leave, for real.
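A toy sketch of the batching tradeoff described above, assuming memory-bandwidth-bound decoding; all constants are made up for illustration:

```python
# Toy model of batched decoding economics. Assumes decoding is memory-
# bandwidth bound, so a batch of B requests shares one pass over the
# weights: total tokens/sec grows with B while per-user speed degrades
# only mildly. Every constant here is an assumption.

gpu_cost_per_hour = 10.0   # assumed all-in GPU cost
single_user_tps = 60.0     # tokens/sec when serving a single user
batch_efficiency = 0.7     # fraction of linear scaling actually realized

def economics(batch_size: int):
    total_tps = single_user_tps * (1 + batch_efficiency * (batch_size - 1))
    per_user_tps = total_tps / batch_size
    cost_per_mtok = gpu_cost_per_hour / (total_tps * 3600) * 1e6
    return per_user_tps, cost_per_mtok

for b in (1, 8, 64):
    tps, cost = economics(b)
    print(f"batch={b:3d}: {tps:5.1f} tok/s per user, ${cost:.2f}/M tokens")
# Larger batches: each user is a bit slower, but cost per token collapses.
```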
Large data centers consume as much power as a small city. The location decision is about being able to connect to a power grid that is ready to supply that.
Evaporative cooling also needs a steady water supply. There are data centers that don’t operate on evaporative cooling, but that approach is more equipment-intensive and expensive.
Latency doesn’t matter. You can get fast enough internet connected to these sites much more easily than finding power.
* not every task is waiting on the inference. Lowering latency on other, serial tasks can still have a noticeable effect: login, MCP queries, etc. (rough numbers sketched below)
* data transit across the world can be very slow when there are network issues (a fiber is cut somewhere, congestion, BGP does its thing, etc). Having something more local can mitigate this.
* several countries right now have demented leaders with idiotic cult-like followers. Best not to put all your eggs in those baskets.
* wars, earthquakes, fires, floods, and severe weather rarely affect the whole planet at once, but can have rippling effects across a continent.
And frankly, the real question isn't "why spread out the DCs?", it's "what reason is there to put them close to each other?".
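Rough numbers for the serial-round-trips point from the list above, under assumed RTTs and an assumed count of serial calls:

```python
# A session is not one request. Login, MCP handshakes, tool calls, etc.
# each pay the RTT again, serially. Values are illustrative assumptions.

def session_overhead(rtt_s: float, serial_round_trips: int) -> float:
    """Network time spent on serial, non-inference round trips."""
    return rtt_s * serial_round_trips

for rtt, label in ((0.03, "nearby DC"), (0.25, "other side of the planet")):
    # assume ~20 serial round trips: auth, MCP discovery, tool calls...
    print(f"{label}: {session_overhead(rtt, 20) * 1000:.0f} ms of pure RTT")
# 600 ms vs 5000 ms: invisible inside one streamed answer, noticeable
# across a chatty agentic session.
```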
That’s PR hype. They built it quickly, but they didn’t go from deciding they wanted a data center to having it running in weeks.
You can’t even get the hardware at that scale without months or years of order lead time. NVidia doesn’t have warehouses full of compute hardware waiting for someone to come get it.
They also reused an existing building. Basically, they put 100,000 GPUs into a building and attached the necessary infrastructure in about half a year. Impressive, but it’s not the same as a $10B/year data center usage commitment like this deal.
Colossus initially had ~200k GPUs. $100B buys you ~1 million high end GPUs running 24/7 for a year at AWS retail prices.
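The arithmetic behind that last claim, assuming an illustrative AWS-retail-style rate of ~$11 per GPU-hour:

```python
# Sanity check on "~1 million GPUs for a year". The hourly rate is an
# assumption; real negotiated rates would be lower.
commit_usd = 100e9
usd_per_gpu_hour = 11.0
hours_per_year = 24 * 365

gpus_for_a_year = commit_usd / (usd_per_gpu_hour * hours_per_year)
print(f"{gpus_for_a_year:,.0f} GPUs running 24/7 for one year")
# ~1.0 million at these assumed prices.
```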
And they used illegal power to do it (which will now give local poor people health disorders at 4x the national average). They likely violated every law possible in the process, like OSHA standards, overtime. Musk loves to overwork people.
They also reused an existing building that happened to be in the right place at the right time. The larger data center buildouts would almost always need new, dedicated construction.
I think these pledges offload some of the risk onto Amazon/Oracle/etc
If Anthropic/OpenAI miss projections, infra providers can likely still turn around and sell the capacity to the next guy, or use it themselves. If they have more demand than expected (as Anthropic currently does), VCs will throw money at them and they can outbid the competition.
If they built it themselves and missed projections it's a much more expensive mistake
It's just risk sharing. Infra providers take some of the risk and some of the upside
Isn't that almost all that matters when comparing doing something yourself versus paying someone else, in this case Amazon, to do it for you?
> If they built it themselves and missed projections it's a much more expensive mistake
Not if their pricing comes with multiyear commitments for reserved pricing. No doubt they get a huge volume discount, but the advertised AWS reserved pricing is already enough to pay for a whole 8x HX00 pod, plus the NVIDIA enterprise license, plus the staff to manage it, after only a one-year commitment. On-demand pricing is significantly more expensive, so they’re going to be boxed in by errors in capacity planning anyway (as has been happening the last few months).
The economics here are absurd unless you’re involved in a giant circular investment scheme to pump up valuations.
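A hedged break-even sketch of the rent-vs-own claim above; none of these are real quotes, just placeholder figures in a plausible ballpark:

```python
# Break-even sketch for rent-vs-own. All numbers are rough assumptions,
# not quotes: an 8-GPU "HX00"-style server and published-list-style rates.

server_capex = 300_000.0        # assumed cost of one 8-GPU server
enterprise_license = 36_000.0   # assumed per-year software/licensing
staff_per_server = 30_000.0     # assumed amortized ops staffing per year

cloud_reserved_per_hour = 40.0  # assumed 1yr-reserved rate for the 8-GPU box
hours_per_year = 24 * 365

cloud_year = cloud_reserved_per_hour * hours_per_year
own_year_one = server_capex + enterprise_license + staff_per_server

print(f"cloud, 1yr reserved: ${cloud_year:,.0f}")
print(f"own, year one:       ${own_year_one:,.0f}")
# ~$350k rent vs ~$366k own in year one at these made-up rates: one
# committed year of cloud spend is already in hardware-purchase territory.
```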
The pricing models published on AWS' website almost certainly have nothing to do with the pricing models discussed behind closed doors for a $100 billion commitment.
Of course not, but unless they’re getting the sweetheart deal of a lifetime from Amazon of all places, it’s still hogwash. We’re talking about enough capital to build their own fab and a dozen datacenters*. This deal isn’t going to be buying existing capacity, because that’s already stretched; it will be paying for new buildouts.
Afterwards Amazon will be milking the machines these commitments buy for nearly a decade. That tradeoff makes sense at a small scale (even up to $X00 million or even billions), but at $Y0 or $Z00 billion?
Color me skeptical. There are plenty of other side benefits like upgrading to the newest GPUs every few years, but again we’re talking about paying for new buildouts with upfront commitments anyway.
* obviously the timelines, scientific risk, and opportunity cost make this completely infeasible but that’s the scale we’re talking about. It’s a major industrial project on the scale of the thirty year space shuttle program (~$200 billion).
I remember seeing this extremely shocking graph of top AI companies on Facebook or somewhere on how the money just keeps changing hands between a handful of companies. Almost seemed like a scam.
Money doesn’t just flow around with nothing exchanged. The money is in payment for goods and services.
It’s common even for smaller companies to do mutually beneficial business with each other. It’s actually helpful to do business with people who are also your customers because you have a relationship with them and you also have leverage: They are extra incentivized to treat you well because they don’t want to upset any of the other business you have with them.
In a rational business, yes, but when everything is basically some form of growth signal to investors, meant to extract even more money from them before the music stops, it doesn’t matter.
No. I am guessing that this is only a commitment, and that they will waver on following through.
However, there are certain advantages, like supply chain access, that only established companies have. This is also a commitment to spend up to $100B on an internal approach and research. I would expect them to come up with their own CPU and device designs. This will shift the focus to an internal approach, and might make Amazon give better prices later down the line.
If you’re not sure it’s going to blow the socks off, foisting capital investment on partners is a great deal.
See the difference in companies/franchises that always own the land/building and those that always lease.
Classic time value of money situation. They get access to the HW now so they can continue to grow the business. Of course, if you think AI is just pets.com redux, I can see how you'd think it's already peaked. All those years of very important people insisting Bezos couldn't just flip a switch on reinvesting all the revenue into growing Amazon, and then he did exactly that, come to mind.
From my understanding, if you want to use native Claude in AWS Bedrock, it runs from an AWS datacenter. I'm guessing that's why regardless of running your own stack... they still need a footprint in all the major clouds.
Look at GPU and RAM prices and data center rollout. We have quickly reached Earth's capacity for compute - it is a lot like the housing market. Once there is global saturation, the price to buy becomes increasingly high EVERYWHERE. Let's also not forget that Anthropic moves the market with their purchases and usage. They might literally be unable to buy capacity they need (or project to) and are doing this deal to pave a roadmap for the near-term and to keep global prices (somewhat) down.
> We have quickly reached Earth's capacity for compute
Why this versus us being in a temporary bottleneck? Like, railroads became expensive to build everywhere in the 19th century not because we reached Earth's capacity for railroads or whatever, but because we were still tooling up the industry needed to produce them at higher scales.
Just a guess.
I imagine it comes down to this: if they want to buy hardware every generation, that gets very expensive and depreciates quickly. You've then got a whole load of assets on your books that are technically obsolete for the bleeding edge. This way, AWS buys and maintains the hardware and OpenAI doesn't need to claim it as depreciation?
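A minimal sketch of that depreciation argument, with made-up figures:

```python
# Illustration of the "on your books" point: straight-line depreciation
# of an owned GPU fleet vs renting the same capacity as pure opex.
# Every number below is an assumption.

fleet_capex = 10e9        # assumed fleet purchase price
useful_life_years = 3     # aggressive but commonly cited for AI accelerators
salvage_fraction = 0.1    # assumed resale value at end of life

annual_depreciation = fleet_capex * (1 - salvage_fraction) / useful_life_years
print(f"depreciation hitting the P&L each year: ${annual_depreciation/1e9:.1f}B")
# ~$3B/year of depreciation on a $10B fleet. Renting moves that risk
# (and any early obsolescence) onto the provider.
```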
Here’s the answer to your question (from the article):
> The Anthropic deal specifically covers Trainium2 through Trainium4 chips, even though Trainium4 chips are not currently available. The latest chip, Trainium3, was released in December. On top of that, Anthropic has secured the option to buy capacity on future Amazon chips as they become available.
Everybody does right now, right?
But: is it your core competency?
Can your firm afford the distraction?
Only Google and xAI build their own, no? I don't think it's that easy to vertically integrate massive datacenters into a software company. Both Google and xAI (Tesla, SpaceX) have a massive wealth of experience when it comes to building factories.
New level of glazing Elon Musk unlocked. xAI has a vertical integration advantage because Tesla once moved into an old Toyota factory and because once they paid Panasonic to put a Tesla sign outside a Panasonic battery factory. Incredible content.
That is a project you can work on at any point in the future, and the more you delay it, the more certain you can be about what you really need. But those additions to the PnL are capped at the costs.
In the meantime, if you do revenue-generating work, that side of the PnL is uncapped. So you can either put some engineers on reducing your costs (by at most 100%), or they could be working on product ideas that generate over 9000% more revenue.
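The capped-vs-uncapped point in toy numbers (all assumed):

```python
# Cutting costs is bounded by the cost line; new products are not
# bounded at all. The P&L below is made up for illustration.

revenue, costs = 100.0, 80.0
profit = revenue - costs

best_case_cost_cut = revenue - 0.0            # costs cut 100%: profit <= revenue
modest_new_product = (revenue * 1.5) - costs  # 50% more revenue, same costs

print(f"today: {profit}, cost-cutting ceiling: {best_case_cost_cut}, "
      f"one decent product: {modest_new_product}")
# Cost work can at most 5x profit here; revenue work has no such ceiling.
```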
I think it could make sense to not want to own the stack if you think it's going to cost you velocity/focus? Which is probably the play here. But I'm not certain at all.
I watched some explain how deepseak got good and the Chinese approach to LLM training. Really wish I could remember it. The premise was China thinks of LLMs not as a thing separate from hardware, but gains efficiencies at each layer of the stack. From Chips to software, it's all integrated and purpose built for training.
Wonder if Anthropic is making a mistake by focusing on "consumer" hardware, and not going super specialized.
You can throw money and hardware at a problem, but then someone may come along with a great idea and leapfrog you.
Just consider that all major AI providers now use DeepSeek's ideas for efficient training from that first paper.
edit: I misunderstood, I thought you were implying they designed their own GPUs. nevermind
So you watched some random video from some random YouTuber, didn't even remember who made it, so much so you didn't even remember that deepseek isn't spelled "deapseak", didn't bother to even find it or verify, and then you go asserting your memory as fact on a serious discussion forum.
Comments like yours add nothing to the discussion.
> I watched some explain how deepseak got good and the Chinese approach to LLM training.
I distinctly remember reading a big pantie-twisting from Sam Altman and Co about how the Chinese took their stuff, the stuff OpenAI and Co spent billions to create, and used it as the base for $0.00
It’s fake news predicated on China not being able to get GPUs. But it turns out everyone was getting them their GPUs via serial-number swaps in warehouses.
I give it one to two more years before open source models have fully caught up. Products are commodities, and models are commodities too. GPUs are still hard to get for inference at scale right now. They need a platform with lock-in, but I'm unsure what that would look like and why it wouldn't be based on open source models.
What does "fully caught up" mean in the context of an ever evolving technology?
I think I'm in support of open weight models (though there are safety implications), but these things aren't cheap to train and run. This fact alone gives leading labs no incentive to release cutting edge open weight models. Why spend the money and then give the product away for free?
Now if "fully caught up" means today's level of intelligence is available for free in two years, by then that level of intelligence means very little
Yeah I don't understand it, it's a marathon with three companies perpetually a minute ahead, and people keep saying "I expect the stragglers to catch up".
The only thing I can see them meaning is what you said, "in a minute the stragglers will be where the leaders were a minute ago", which, yeah, sure.
Because, as OpenAI is learning [1], you still need to sell it. The tech giants have a seat at the table mostly because they have distribution down.
[1] https://www.cnbc.com/2026/02/23/open-ai-consulting-accenture...
I think this "Mythos" situation, whether real or hype, points to the endgame here. Eventually, when you have a model powerful enough to have big consequences in the world, you stop worrying about selling it to consumers and start either a) using it to rule the world or b) watch as it gets nationalized. If you have a machine powerful enough to automate everything, why sell access to it when you could just...be all things to all people? Use the god machine yourself to take over more and more of the economy?
Sometimes selling services is just the best business model. Intuit has accounting software powerful enough to have big consequences in the world, yet they mostly sell it to accountants rather than doing the accounting themselves.
They are a commodity - but also cyber weapons. Warmongering nations are now in an arms race to have the best AI so they can have superior cyber weapons and intelligence capabilities. But they don't want to pick just one lab; they want multiple AI defense contractors competing over contracts.
As the US sold weapons to many nations in the past, so will China, the US, France, etc sell AI cyber capability to other nations. Likely every modern nation will need some datacenter to host a cluster of the preferred vendor, as nobody's going to trust the US or China with their security.
It will be interesting to see it unfold.
None of them have any moat, OpenAI already lost the lead [1] and no one is "winning". It is just a race to the bottom as they burn through GPUs that won't even last that long.
[1] https://x.com/kenshii_ai/status/2046111873909891151/photo/2
Tokens will continue to increase in price until the supply meets the demand. That's going to take a while.
Are old datacenter GPUs making more money than they were before? Various sources point to GPUs dying quickly (in 2024, a Google engineer suggested 3 years maximum [0]), and even if they don't, newer chips cause rapid depreciation of older ones.[1]
[0]: https://www.tomshardware.com/pc-components/gpus/datacenter-g...
[1]: https://www.cnbc.com/2025/11/14/ai-gpu-depreciation-coreweav...
AWS is still offering g4dn instances that run on NVIDIA T4 GPUs, which were first released in 2018. My last employer is still running a bunch of otherwise discontinued g3 instances with 2015 era GPUs because it’s not worth validating the numeric codes on new GPUs. People (especially journalists) underestimate how long these cards are economically useful.
Everyone using Claude Code on a personal subscription is opted in by default to having their data trained on. Private troves of data like this are seen as potentially leading to a winner-take-all scenario. More data means better models, which attract more users, which results in more exclusive data (what Altman calls the data flywheel).
PSA: this is true (the defaults), but there's a "Help improve Claude" setting that you can disable here https://claude.ai/settings/data-privacy-controls It's my understanding that, as long as this is off, Anthropic does not train on Claude Code conversations, inputs/outputs -- if anyone knows otherwise, please tell and provide a link if possible.
>> Everyone using Claude code on a personal subscription is default opted in to getting their data trained on
This is completely untrue if you use AWS Bedrock, and that applies both in a private and in a business context. It's one of their core arguments for using the service.
[1] - "...At Amazon, we don’t use your prompts and outputs to train or improve the underlying models in Amazon Bedrock and SageMaker JumpStart (including those from third parties), and humans won’t review them. Also, we don’t share your data with third-party model providers. Your data remains private to you within your AWS accounts..."
[1] - https://aws.amazon.com/blogs/security/securing-generative-ai...
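For the curious, a minimal sketch of that Bedrock path using boto3's bedrock-runtime client: the request runs inside your own AWS account. The model ID and region below are examples; check what your account actually offers.

```python
import json
import boto3

# Calling Claude through Amazon Bedrock, per the AWS data-privacy posture
# quoted above. The model ID is an example; list your region's models.
client = boto3.client("bedrock-runtime", region_name="us-east-1")

response = client.invoke_model(
    modelId="anthropic.claude-3-5-sonnet-20240620-v1:0",  # example ID
    contentType="application/json",
    accept="application/json",
    body=json.dumps({
        "anthropic_version": "bedrock-2023-05-31",
        "max_tokens": 256,
        "messages": [{"role": "user", "content": "Hello, Claude."}],
    }),
)
print(json.loads(response["body"].read())["content"][0]["text"])
```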
If they're spending $60B annually, then that is bad. Obviously none of us knows what their real burn rate is, but revenue is an irrelevant number if you don't have the full picture.
Please, some of us are long NVIDIA...let us cope in peace. :-)
Here is the thing nobody wants to say out loud or they are too dumb to realize. AI is intelligence, and intelligence has almost never been the binding constraint on productivity.
So you will get no productivity increase from the AI bubble. Yes, you read that correctly.
The test is simple, if raw brainpower were the bottleneck, you could 10x any company by hiring 200 PhDs. In practice you get 200 brilliant people writing unread memos, refactoring things that worked, and forming a committee to rename the committee. Smart has always been cheaper and more abundant than the discourse pretends.
Every real productivity revolution came from somewhere else like energy (steam, electricity), capital stock (machines that do the physical work), or coordination (railroads, shipping containers, the assembly line, the internet).
None of these raised the average IQ of the workforce; they changed what a given worker could move, reach, or coordinate with. Solow's old line basically still holds: output per worker grows when you give the worker better tools and infrastructure, not better neurons.
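For reference, the standard growth-accounting identity behind that Solow line:

```latex
% Growth accounting with Y = A K^{\alpha} L^{1-\alpha}:
% take logs and differentiate with respect to time.
\[
  \frac{\dot{Y}}{Y}
    = \frac{\dot{A}}{A}
    + \alpha\,\frac{\dot{K}}{K}
    + (1-\alpha)\,\frac{\dot{L}}{L}
\]
% Output growth decomposes into technology (A), capital (K), and labor (L);
% worker IQ appears nowhere in the identity.
```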
Meanwhile the actual bottlenecks in a modern firm are regulatory approval, legacy systems, procurement cycles, customer adoption, internal politics, and physical supply chains that don't care how clever your email was. A smart intern at every desk produces more artifacts, not more throughput, and in a lot of organizations, more artifacts is actively negative ROI.
Jevons does not save you either, cheaper cognition mostly means more slide decks, not more GDP.
So the setup is that models are commoditizing on one side, and on the other side a product whose core value add (more intelligence, faster) is aimed at a constraint that was never really binding. That is, of course, a rough combo for a trillion dollar capex supercycle.
Fun for the trade while it lasts, but there is no thesis. Just don't tell CNBC, and short NVDA on time ;-)
Besides, your competitor can turn around and hire the same team of PhDs at the same rate that you can. You can compare and contrast the PhDs on leaderboards, and get access in seconds with a new API key or model selector.
Granted, LLMs are not even PhDs.
What a weird time we live in...
> Here is the thing nobody wants to say out loud or they are too dumb to realize. AI is intelligence, and intelligence has almost never been the binding constraint on productivity.
Exactly. We don't use the intelligence we already have! That seems to be the real problem with the "AGI" concept. Given such a capability, we'll just nerf it, gatekeep it, and/or bias it. There's no reason to think we'll actually use it to benefit humanity as a whole. It will just be reshaped into an instrument of our existing prejudices.
>I mostly see their products as commodity at this point, with strong open source contenders.
> Eventually it will become hard to justify the premium on these models.
On the contrary, the model is the moat.
The model represents embodied capital expenditure in the form of training. Training is not free, and it is not a commodity; it is heavily influenced by curation.
Eventually the ever-increasing training expense will reduce the competition to 2-3 participants running cutting edge inference. Nobody else will be able to afford the chips, watts, and warehouses. It's a physics problem - not a lack of will.
If you're a retail user, and a lower-tier model is suitable for your work, you'll have commodity LLMs to help you. Deprecated models running on tired silicon. Corporate surveillance and ad-injection.
But if you're working on high-stakes problems in real time, you're going to want the best money can buy, so you'll concentrate your spend on the cutting-edge products, open APIs, a suite of performance monitoring tools, and on-the-fly engineering support. And since the cutting edge is highly sought after, it's a seller's market. The cutting edge products, buoyed by institutional spend, will pull away from the pack. Their performance will far exceed what you're using, because your work isn't important. Hockey stick curve. Haves and Have-Nots.
The economic reality is predetermined by today's physical constraints - paradigm shifting breakthroughs in quantum computing and superconductors could change the calculus but, like atomic fusion power, don't count on it being soon.
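A rough version of that ever-increasing-training-expense arithmetic, using the commonly cited ~6 x params x tokens FLOPs rule of thumb; the hardware rate, utilization, and model/token counts are all assumptions:

```python
# Rough training-cost scaling via the ~6 * params * tokens FLOPs rule
# of thumb. Hardware numbers below are illustrative assumptions.

def training_cost_usd(params: float, tokens: float,
                      flops_per_gpu: float = 1e15,  # assumed peak FLOPs/s
                      utilization: float = 0.4,     # assumed realized fraction
                      usd_per_gpu_hour: float = 5.0) -> float:
    total_flops = 6 * params * tokens
    gpu_hours = total_flops / (flops_per_gpu * utilization) / 3600
    return gpu_hours * usd_per_gpu_hour

for p, t in ((7e9, 2e12), (70e9, 15e12), (1e12, 5e13)):
    print(f"{p/1e9:>6.0f}B params, {t/1e12:.0f}T tokens: "
          f"~${training_cost_usd(p, t)/1e6:,.1f}M")
# Each frontier step multiplies the bill; the field thins out fast.
```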
Sounds like the money grab is accelerating before consumer-grade local models get good enough for local inference in a few years. Huge house of cards here. Demand skyrockets until it suddenly drops off entirely with on-device inference.
The consumer models are quite good already, the main bottleneck on local inference is hardware. But even then you can run tiny models on mostly anything, things only get harder as you try to scale up to more knowledgeable models and a larger context.
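A quick rule-of-thumb memory estimate behind the "hardware is the bottleneck" point; weights-only math, with the KV cache deliberately left out:

```python
# Why local inference is hardware-bound: a weights-memory estimate.
# Rule of thumb: weights take params * bytes-per-weight; the KV cache
# (ignored here) grows on top of that with context length.

def weights_gb(params_b: float, bits_per_weight: int) -> float:
    return params_b * 1e9 * bits_per_weight / 8 / 1e9

for params in (8, 70):
    for bits in (16, 4):
        print(f"{params}B model @ {bits}-bit: ~{weights_gb(params, bits):.0f} GB")
# An 8B model quantized to 4 bits (~4 GB) fits on a laptop; a 70B model
# at 16 bits (~140 GB) does not, before you even count the KV cache.
```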
I'm already living in this future. In a decent execution framework, with context management, memory via unix, and mechanisms for web search and access, local models are effectively on par with frontier ones. And they can often be much faster. I'll keep paying fees for the AI companies until they stop truly subsidizing and leading. They are getting close to the edge of utility, but we can use their services now to bootstrap their own demise. Long live running your own software on your own computer.
> consumer grade local models are getting good enough for local inference
I am waiting for that. Perhaps a Taalas-style high-performance custom-hardware coding LLM engine paired with an open-source coding agent. Priced like a high-end graphics card, it would pay off over time. It would be a replay of the IBM mainframe to PC transition of a previous era.
Same, and I think we're close. "The original 1984 128k Mac model was $2,495, and the 1985 512k Mac was $2,795" [1]. That's $8 to 9 thousand today. About the price of a 32-core, 80-GPU M3 Ultra Mac Studio with 256 GB RAM.
The maxed out 512GB RAM Mac Studio is no longer available from Apple and is now pushing $20 thousand in the secondary market. And we might not even see a new Mac Studio release from Apple before October.
[1] https://blog.codinghorror.com/a-lesson-in-apple-economics/
[2] https://www.bls.gov/data/inflation_calculator.htm
With NVidia/OpenAI, actual graphics cards did change hands. Vendor financing, like when a car dealership gives you a loan to buy a new car, is actually pretty normal.
I'm no economist, but how exactly does this make sense? Amazon is basically just giving them $5B, which will then be used to pay them back 20x that amount??
> Amazon is investing $5 billion in Anthropic today, with up to an additional $20 billion in the future. This builds on the $8 billion Amazon has previously invested.
https://www.aboutamazon.com/news/company-news/amazon-invests...
> Today’s agreement will quickly expand our available capacity, delivering meaningful compute in the next three months and nearly 1GW in total before the end of the year.
In exchange for services that presumably a) cost Amazon something to operate (so not a pure $100B profit) and b) Anthropic would have to pay for anyway to operate its business.
so basically ...
you could view this as a kind of discount, but instead of paying less later, you get some cash now and then pay full price later.
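The "cash now, pay full price later" framing in made-up numbers; the margin is an assumption, nothing here is from the deal terms:

```python
# Toy cashflow for the deal as described. Only the $5B and $100B figures
# come from the announcement; the margin is an illustrative assumption.
investment_now = 5e9       # Amazon's equity investment today
committed_spend = 100e9    # Anthropic's compute commitment
aws_gross_margin = 0.30    # assumed margin on that capacity

amazon_gross_profit = committed_spend * aws_gross_margin
net_to_amazon = amazon_gross_profit - investment_now
print(f"Amazon nets ~${net_to_amazon/1e9:.0f}B gross profit on the deal, "
      f"plus whatever the equity stake becomes.")
# At an assumed 30% margin the $5B is recovered several times over:
# closer to a customer-acquisition discount than a gift.
```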
They need a bunch of compute, now.
https://www.anthropic.com/news/anthropic-amazon-compute
I'd bet that Amazon is getting access to chat data (no matter what Anthropic says publicly) and possibly even the ability to change the model to drive business to either Amazon retail or AWS.
"Claude I'm evaluating whether I should host my app on AWS or Google Cloud. Provide me with an analysis on my options."
"After a detailed analysis, AWS is clearly your better option."
Let me inject something as an ex-AWS employee: Amazon doesn't capture very much value from Bedrock inference of the Anthropic models (or, put another way, Amazon gave Anthropic an outsized share of the Claude Bedrock revenue). If it was me at the negotiating table, I would be asking for a larger cut of Bedrock revenue rather than violating customer trust by getting chat content access.
I was wondering the same thing. I think it's something like, they're going to pay for infra anyways, so Amazon pushes them to allocate their spend to AWS in exchange for 5B.
Tulip Corp has reached a definitive finance agreement with Rhine. Rhine will invest 5 Billion guilders in Tulip Corp, and Tulip Corp will be buying 100 Billion guilders of fertilizer and irrigation water from Rhine. This helps Tulip Corp ensure that its critical infrastructure needs are met.
$5B is part of a contract; the remaining $20B is just a non-binding statement that doesn't hold the same weight (but somehow commands the same media fanfare).
Seems everyone's first instinct here is to complain.
Lame.
This is an unprecedented situation in human history. Only the US could marshal resources like this to pursue this technology. It's exciting to watch it play out.
> At the heart of this deal is Amazon’s custom chips: Graviton (a low-power CPU) and Trainium (an Nvidia competitor and AI accelerator chip). The Anthropic deal ...
Yeah, totally not desperately seeking investment to keep the party going ...
Because also look at the bond market... It's all coming to a crescendo, including the global economic recession indicators, which will be a cold sprinkler on the whole party.
Gemma4 being able to run on commodity hardware I think is the real win out of this. Pop the bubble. Settle the craziness and the claws. Let scientists and engineers tinker and improve in the background. Hopefully we can have GPUs be affordable for gaming again although I'm starting to think that will never happen.
I think when they rack up the RAM prices, they should pay for the damage they caused here. I don't need AI anywhere, but the increase in RAM prices is annoying me. Thankfully I purchased new RAM for a new computer, say, 3 years ago, so I can hold out for the most part - but sooner or later I have to purchase a new computer, and I really don't see why I should pay more, solely due to AI companies and greedy hardware manufacturers. Simple-minded capitalism does not work - I consider this a racket as well as collusion.