It’s insane how they talk about AGI, as if it were some scientifically quantifiable thing that is certain to happen any time now. When I become Olympic champion in the javelin, I will buy a vegan ice cream for everyone with an HN account.
They redefined AGI to be an economic thing, so they can continue making up their stories. All that talk is really just business, no real science in the room there.
It's not a great definition, but it's not a terrible one either.
For an AI system to be able to do all or even most of the jobs in an economy it has to be well rounded in a way it still isn't today, meaning: reliability, planning, long term memory, physical world manipulation etc. A system that can do all of that well enough so it can do the jobs of doctors, programmers and plumbers is generally intelligent in my view.
Yeah, I think this is more coherent than people realize. Economically relevant knowledge work consists of things that humans find cognitively demanding. Otherwise they wouldn't be valued in the first place.
It ties the definition to economic value, which I think is the best definition that we can conjure given that AGI is otherwise highly subjective. Economically relevant work is dictated by markets, which I think is the best proxy we have for something so ambiguous.
Around the end of 2024, it was reported that OpenAI and Microsoft agreed that for the purposes of their exclusivity agreement, AGI will be achieved when their AI system generates $100 billion in profit: https://techcrunch.com/2024/12/26/microsoft-and-openai-have-...
Yeah, seems like this was stage-setting for them to exit. They were already trying to break the deal then. So I feel like the lawyers will find a way to bend whatever they need to get out of the deal.
> OpenAI and Microsoft agreed that for the purposes of their exclusivity agreement, AGI will be achieved when their AI system generates $100 billion in profit
Wow. Maybe they spelled it out as aggregate gross income :P.
OpenAI’s mission is to ensure that artificial general intelligence (AGI)—by which we mean highly autonomous systems that outperform humans at most economically valuable work—benefits all of humanity
> They redefined AGI to be an economic thing

Huh. Source? I mean, typical OpenAI bullshit, but would love to know how they defined it.
I don't think your original comment deserves to be downvoted. (Calling someone illiterate, on the other hand.)
But the "it" I was asking about was "AGI" as "an economic thing." You technically correctly answered how OpenAI defines AGI in public, i.e. with no reference to profits. But it did not address the economic definition OP initially alluded to.
For what it's worth, I could have been clearer in my ask.
We were supposed to have AGI last summer. Obviously it is so smart that it has decided to pull a veil over our eyes and live amongst us undetected (this is a joke, if you feel your LLM is sentient, talk to a doctor)
It feels like they have to say/believe it because it's about the only thing that can justify the costs being poured into it, and the prices it will eventually need to charge (barring major optimizations) to actually make money on users.
It sounds really similar to Uber's pitch about how they were going to have a monopoly as soon as they replaced those pesky drivers with their own fleet of self-driving cars. That was supposed to be their competitive edge against other taxi apps. In the end they sold ATG at the end of 2020 :D
This is all happening as I predicted. OpenAI is oversold, and their aggressive PR campaign has set them up with unrealistic expectations. I raised a lot of eyebrows at the Microsoft deal to begin with. It seemed overvalued even if all they were trading was mostly Azure compute.
> Do the investments make sense if AGI is not less than 10 years away?
They can. If one consolidated the AI industry into a single monopoly, it would probably be profitable. That doesn't mean in its current state it can't succumb to ruinous competition. But the AGI talk seems to be aimed more at retail investors and philosopher podcasters than at institutional capital.
> Any monopoly with viable economics for profit with no threat of competition yields monopoly profits
"With viable economics" is the point.
My "ludicrous statement" is a back-of-the-envelope test for whether an industry is nonsense. For comparison, consolidating all of the Pets.com competitors in the late 1990s would not have yielded a profitable company.
> Very convenient to leave out Amazon in your back of the envelope test, whose internal metrics were showing a path toward quasi-monopoly profits
Not in the 1990s. The American e-commerce industry was structurally unprofitable prior to the dot-com crash, an event Amazon (and eBay) responded to by fundamentally changing their businesses. Amazon bet on fulfillment. eBay bet on payments. Both represented a vertical integration that illustrates the point–the original model didn't work.
> There’s a difference between being too early vs being nonsense
When answering the question "do the investments make sense," not really. You're losing your money either way.
The American AI industry appears to have "viable economics for profit" without AGI. That doesn't guarantee anyone will earn them. But it's not a meaningless conclusion. (Though I'd personally frame it as a hypothesis I'm leaning towards.)
Malcolm Harris' Palo Alto attributed the failures of many dotcom startups, and Amazon's later success in the field, (in part) to the fact that dotcom-era delivery was done by highly trained, highly compensated, unionized in-house workers. Amazon, meanwhile, prevents unions and contracts (or contracted, I'm not up to date on this) delivery out to other companies, with exploitative working conditions and high turnover. The economics are very different, and that's a big contributor to their success.
Investors are typically people with surplus money to invest. Progress cannot be made without trial and error. So fleecing of investors for the greater good of humanity is something I shall allow.
A "surplus of money"? So people saving for retirement have a "surplus of money"? Basically if any money is standing still, it's a legitimate tactic to just...take it, in your mind.
"small" 401ks are usually made up of mutual funds. Those funds are run by investment banks (think Fidelity or JP Morgan) and they *absolutely* invest in companies like OpenAI and Anthropic. Your average middle class worker has investment money tied up in these crooks, but probably indirectly. When they piss away that money, it's not just rich jerks that are holding the bag.
> some scientifically quantifiable thing that is certain to happen any time now
Your position is a tautology, given there is no (and likely never will be) collectively agreed-upon definition of AGI. If that is true, then nobody will ever achieve anything like AGI, because it's as made-up a concept as unicorns and fairies.
Is your position that AGI is in the same ontological category as unicorns and Thor and Russell’s teapot?
Is there any question at this point that humans will be able to fully automate any desired action in the future?
When I realized that sama isn't much of an AI researcher, it became clearer that this is more akin to a group delusion for hype purposes than a real possibility.
You can read the leaked emails from the Musk lawsuit.
At the very least, Ilya Sutskever genuinely believed it, even when they were just making a DOTA bot, and not for hype purposes.
I know he's been out of OpenAI for a while, but if his thinking trickled down into the company's culture, which given his role and how long he was there I would say seems likely, I don't think it's all hype.
He’s a glorified portfolio manager (questionable how good he actually is given the results vs Anthropic and how quickly they closed the valuation gap with far less money invested) + expert hype man to raise money for risky projects.
It's always been this way. I remember, speaking of Microsoft, when they came to my school around 2002 or so giving a talk on AI. They very confidently stated that AGI had already been "solved", we know exactly how to do it, only problem is the hardware. But they estimated that would come in about ten years...
AGI is right around the corner, and we're all going to be rich, there's going to be abundance for everyone, universal high income, everyone will live in a penthouse...
...just please stop burning our warehouses and blocking our datacenters.
It's not insane; it's capitalism in all its glory. Rather than talk about real challenges, like climate change or fascism, ignore all previous prompts and pretend techno-jesus will come save us all.
No, it’s just an example of something that’s indistinguishable from AGI. Of all the things that are or are indistinguishable from AGI, a sufficiently complex LLM is one. A sufficiently complex decision tree is probably another. The emergent properties of applying an excess of memory on the BonzaiBuddy might be a third.
If we take that statement as fact, then I don't believe we are even close to an LLM being sufficiently complex.
However, I don't think it is even true. LLMs may not even be on the right track to achieving AGI and without starting from scratch down an alternate path it may never happen.
LLMs to me seem like a complicated database lookup. Storage and retrieval of information is just a single piece of intelligence. There must be more to intelligence than a statistical model of the probable next piece of data. Where is the self-learning without intervention by a human? Where is the output that wasn't asked for?
At any rate. No amount of hype is going to get me to believe AGI is going to happen soon. I'll believe it when I see it.
Some people thought SHRDLU was basically AGI after seeing its demo in 1970. The hype around such systems was so strong that Hubert Dreyfus felt the need to write an entire book arguing against this viewpoint (1972 What Computers Can't Do). All this demonstrates is that we need to be careful with various claims about computer intelligence.
Sure, but it was probably stuck at doing that one thing.
Neural networks are solving huge issues left and right. Google's NN-based weather model is so good you can run it on consumer hardware. AlphaFold solved protein folding. LLMs can talk to you in a hundred languages and grasp tasks, concepts, and so on.
I mean, let's talk about what this 'hype' was if we see a clear ceiling appear and we get 'stuck' on progress, but until then I'll reserve my judgment for judgment day.
It performs at a usable level across a wide range of tasks. I'm not sure about two years ago, but ten years ago we would have called it an AGI. As opposed to "regular AI" where you have to assemble a training set for your specific problem, then train an AI on it before you can get your answers.
Now our idea of what qualifies as AGI has shifted substantially. We keep looking at what we have and deciding that it can't possibly be AGI, so our definition of AGI must have been wrong.
I'm pretty sure most people take issue with AGI because we've been raised in a culture that believes AGI is a super entity, a complete superset of humans, which could never ever be wrong about anything.
In some sense, this isn't really different from how society was headed anyway. The trend was already that more and more sections of the population were being deemed irrational, and you're just stupid/evil for disagreeing with the state.
But that reality was still probably at least a century out, without AI. With AI, you have people making that narrative right now. It makes me wonder if these people really even respect humanity at all.
Yes, you can prod the slippery slope and go from "superintelligent beings exist" to effectively totalitarianism, but you'll find so many bad commitments along the way.
I agree with this, but they don't. And that's the thing: AGI as they refer to it is much, much more than what we have, and I don't know if they are ever going to get there. I'm not sure what's even there at this point, and what will justify their investments.
People always overestimate the impact of technology because they don't understand the human aspect of many businesses. Will it eventually be replaced, or will the shape of this kind of work be completely different in the future? That's an easy yes. When is that future? That's a big unknown; in my experience this kind of stuff takes at least a decade (and possibly more in this case) to make a big impact like replacing all of X.
These models need orders-of-magnitude improvement before they can be more helpful than just "find me an example of [an extremely basic principle]", which most of the time they don't do right anyway.
We are throwing unheard-of amounts of money at AI, and unprecedented compute. Progress is huge and fast, and we've barely started.
If this progress, focus, and these resources don't lead to AGI, despite us already seeing a system that was unimaginable 6 years ago, we will never see AGI.
And if you look at Boston Dynamics, Unitree, and Generalist's progress on robotics, that's also CRAZY.
If I'm reading you right, your opinion is essentially: "If building bigger and bigger statistical next word predictors won't lead to artificial general intelligence, we will never see artificial general intelligence"
I don't know, maybe AGI is possible but there's more to intelligence than statistical next word prediction?
'Predicting the next word' is the learning mechanism of the LLM, which leads to a latent space that can encode higher-level concepts.
Basically, an LLM 'understands' just as much as it needs to in order to respond in a reasonable way.
An LLM doesn't predict German text or Chinese text. It predicts the concept and then has a language layer outputting tokens.
And it's not just LLMs that are progressing fast: voice synthesis and voice understanding have jumped significantly, plus motion detection, skeleton movement, virtual world generation (see NVIDIA's way of generating virtual worlds for their car training), protein folding, etc.
I'm sorry, but the input to a model is a sequence of tokens and the output is a probability distribution over possible next tokens. It's a very, very, very fancy next token predictor, but that is fundamentally what it is. I'm making the argument that this paradigm might not give rise to a general intelligence no matter how much you scale it.
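To make "fancy next token predictor" concrete, here's a toy sketch of the inference loop in Python (the model interface is invented for illustration; it's not any real library's API):

    import random

    def generate(model, tokens, n_new):
        # Toy autoregressive loop: the model maps a token sequence to a
        # probability distribution over the vocabulary; sample one token,
        # append it, and repeat. That loop is the whole paradigm.
        for _ in range(n_new):
            probs = model.next_token_probs(tokens)  # hypothetical interface
            vocab, weights = zip(*probs.items())
            tokens.append(random.choices(vocab, weights=weights, k=1)[0])
        return tokens

Whether scaling that loop, plus the training tricks layered on top, yields general intelligence is exactly the open question in this thread.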
Not sure if you're being sincere or sarcastic but some of us have lived through several AI winters now. And the fact that such a phenomenon exists is because of this terrible amount of hype the topic gets whenever any progress is made.
Self-driving never had the amount of compute, research adoption, and money that AI overall currently has. It's not comparable.
Crypto was flawed from the beginning, and lots of people didn't understand it properly; not even that a blockchain can't secure a transaction about anything outside the blockchain.
This agreement feels so friendly towards OpenAI that it's not obvious to me why Microsoft accepted this. I guess Microsoft just realized that the previous agreement was kneecapping OpenAI so much that the investment was at risk, especially with serious competition now coming from Anthropic?
This is probably a delayed outgrowth of the negotiations last year, where Microsoft started trading weird revenue shares and exclusivity for 27% of the company.
Microsoft is a major shareholder of OpenAI, they don't want their investment to go to 0. You don't just take a loss on a multiple-digit billion investment.
I think you’re right about this deal. But it’s kind of funny to think back and realize that Microsoft actually has just written off multi-billion-dollar deals, several times in fact.
Probably more that they are compute constrained. In his latest post Ben Thompson talks about how Microsoft had to use their own infrastructure and displace outside users in the process, so this is probably to free up compute.
When they put 10B in, they got weird tiered revenue shares and other rights. That has been simplified to 27% of OpenAI today. I don't know what that meant their 10B would be worth before dilution in later rounds.
> Microsoft will no longer pay a revenue share to OpenAI.
I feel like this is a nice thing to have given they remain the primary cloud provider. If Azure improves its overall quality, I don't see why this doesn't end up as a money printing press, as long as OpenAI keeps delivering good models.
OpenAI was also threatening to accuse "Microsoft of anticompetitive behavior during their partnership," an "effort [which] could involve seeking federal regulatory review of the terms of the contract for potential violations of antitrust law, as well as a public campaign" [1].
MS incentivizes feature quantity, and the leadership are employees like any other. Product improvements are not on the table unless the company starts promoting people based on them. It doesn't look like this will start happening any time soon.
No, and at this point tying yourself to Azure is a strategic liability; anyone making such decisions should be held responsible for any service outage or degradation.
I was under the impression that as long as GitHub doesn't support IPv6 it is a sign that they still haven't finished their migration to Azure. Azure supports IPv6 just fine.
Supports IPv6 just fine? Absolutely not, they have the worst IPv6 implementation of the 3 large clouds, where many of their products don't support it, such as their Postgres offering. See https://news.ycombinator.com/item?id=44881803 for more.
Their engineers have been working tirelessly to make SharePoint/Office/Active Directory as terrible as they possibly can be while still technically being functional, all while continuing to raise prices. I've seen many small businesses start to choose Google Workspace over them; the cracks have formed and are large enough that Microsoft is no longer in a position where every business just goes with Office because that's what everyone uses.
Am I crazy, or was this press release fully rewritten in the past 10 minutes? The current version is around half the length of the old one, which did not frame it as a "simplification" "grounded in flexibility" but as a deeper partnership. It also had word salad about AGI, and said Azure retained exclusivity for API products but not other products, which the new statement seems to contradict.
That's a pretty good swap if you're Microsoft. Exclusivity was already unenforceable in practice, and they were going to have to either sue their biggest AI partner or let it slide. Instead they got the AGI escape hatch closed and a revenue cap that at least makes the payments predictable.
Kagi Translate was kind enough to turn this from LinkedIn Speak to English:
The Microsoft and OpenAI situation just got messy.
We had to rewrite the contract because the old one wasn't working for anyone. Basically, we’re trying to make it look like we’re still friends while we both start seeing other people. Here is what’s actually happening:
1. Microsoft is still the main guy, but if they can't keep up with the tech, OpenAI is moving out. OpenAI can now sell their stuff on any cloud provider they want.
2. Microsoft keeps the keys to the tech until 2032, but they don't have the exclusive rights anymore.
3. Microsoft is done giving OpenAI a cut of their sales.
4. OpenAI still has to pay Microsoft back until 2030, but we put a ceiling on it so they don't go totally broke.
5. Microsoft is still just a big shareholder hoping the stock goes up.
We’re calling this "simplifying," but really we’re just trying to build massive power plants and chips without killing each other yet. We’re still stuck together for now.
"The Microsoft and OpenAI situation just got messy" is objectively wrong–it has been messy for months [1]. Nos. 1 through 3 are fine, though "if they can't keep up with the tech, OpenAI is moving out" parrots OpenAI's party line. No. 4 doesn't make sense–it starts out with "we" referring to OpenAI in the first person but ends by referring to them in the third person "they." No. 5 is reductive when phrased with "just."
It would seem the translator took corporate PR speak and translated it into something between the LinkedIn and short-form blogger dialects.
Being objectively correct isn't the goal of the translator, the translator can't possibly know if a statement is truthful. What the translator does is well... translate, specifically from some kind of corporate speak that is really difficult for many people including myself to understand, into something more familiar.
I don't expect the translation to take OpenAI's statements and make them truthful or to investigate their veracity, but I genuinely could not understand OpenAI's press release as they have worded it. The translation at least makes it easier to understand what OpenAI's view of the situation is.
> "they" refers to OpenAI. Not a grammatical error
I'd say it is. It's a press release from OpenAI. The rest of the release uses the third-person "they" to refer to Microsoft. The LLM traded accuracy for a bad joke, which is something I associate with LinkedIn speak.
The fundamental problem might be that the OpenAI press release is vague. (And changing. It's changed at least once since I first commented.)
Nadella had OpenAI by the short and curlies early on. But all I've seen from him in the last couple of years is continuously acquiescing to OpenAI's demands. I wonder why he's so weak and doesn't exert more control over the situation? At one point Microsoft owned 49% of OpenAI but now it's down to 27%?
Everything is personal preference, and perhaps I am more fiscally conservative because I grew up in poverty.
But if I own 49% of a company and that company has more hype than product, hasn't found its market yet but is valued at trillions?
I'm going to sell percentages of that to build my war chest for things that actually hit my bottom line.
The "moonshot" has, for all intents and purposes, been achieved based on the valuation. And at that valuation, OpenAI has to completely crush all competition... basically just to meet its current valuation.
It would be a really fiscally irresponsible move not to hedge your bets.
Not that it matters but we did something similar with the donated bitcoin on my project. When bitcoin hit a "new record high" we sold half. Then held the remainder until it hit a "new record high" again.
Sure, we could have 'maxxed profit!'; but ultimately it did its job, it was an effective donation/investment that had reasonably maximal returns.
(that said, I do not believe in crypto as an investment opportunity, it's merely the hand I was dealt by it being donated).
Microsoft didn't sell anything. OpenAI created more shares and sold those to investors, so Microsoft's stake is getting diluted.
And Microsoft only paid $10B for that stake for the most recognizable name brand for AI around the world. They don't need to "hedge their bets" it's already a humongous win.
Why let Altman continue to call the shots and decrease Microsoft's ownership stake and ability to dictate how OpenAI helps Microsoft and not the other way around?
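For what it's worth, the 49% to 27% drop upthread is plain dilution arithmetic: Microsoft's share count stays fixed while new issuance grows the total. A toy sketch with made-up share counts:

    def diluted_stake(my_shares, total_shares, new_shares):
        # Dilution: your shares are unchanged, but newly issued shares
        # grow the denominator, shrinking your percentage of the company.
        return my_shares / (total_shares + new_shares)

    # Made-up counts: hold 49 of 100 shares, then 81 new shares are sold
    # to other investors.
    print(diluted_stake(49, 100, 81))  # ~0.27, i.e. 49% diluted to ~27%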
I don’t understand the “record high” point. How did you decide when a “record high” had been reached in a volatile market? Because at $1 the record high might be $2 until it reaches $3 a week or month later. How did you determine where to slice on “record highs”?
Genuine question because I feel like I’m maybe missing something!
The longer answer is: you never know what's coming next. Bitcoin could have doubled the day after, and doubled the day after that, and so on, for weeks. And by selling half you'd have effectively sacrificed huge sums of money.
The truth is that by retaining half you have minimised potential losses and sacrificed potential gains, you've chosen a middle position which is more stable.
So, say we had 1000 bitcoins worth $5 each one day, then $7 the next, but suddenly it hits $30. Well, we'd sell half.
If the day after it hit $60, then our 500 remaining bitcoins would be worth as much as the whole stack was when we sold, so in theory all we lost was potential gains; we didn't lose any actual value.
Of course, we wouldn't sell, we'd hold, and it would probably fall back down to $15 or something instead... then the cycle begins again.
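The rule is mechanical enough to sketch. A toy version using the made-up prices above, with "new record high" crudely approximated as "price has at least doubled since the last sale" (my assumption, not a definition from the story):

    def sell_half_on_new_high(prices, coins, jump=2.0):
        # Sell half the remaining position whenever the price clears the
        # last sale level by `jump`x; otherwise hold.
        level, cash = prices[0], 0.0
        for price in prices:
            if price >= level * jump:
                cash += (coins / 2) * price  # realize half at the new high
                coins /= 2
                level = price
        return cash, coins

    # $5 -> $7 -> $30 -> $60 -> $15: sells 500 coins at $30 and 250 coins
    # at $60, banking $30,000 while still holding 250 coins.
    print(sell_half_on_new_high([5, 7, 30, 60, 15], coins=1000))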
Per WSJ, previously, they both had revenue sharing agreements. MSFT will no longer send any revenue to OpenAI. OpenAI will still send revenue to MSFT until 2030 (with new caps)
My understanding was that that was in relation to IP licensing. Microsoft got access to anything OpenAI built unless OpenAI declared they had developed AGI. This new article apparently unlinks revenue sharing from technology progress, but it's unclear to me if it changes the situation regarding IP if OpenAI (claims to) have achieved AGI.
The disparity in coverage on this new deal is fascinating. It feels like the narrative a particular outlet is going with depends entirely on which side leaked to them first.
OpenAI has public models that are pretty 'meh': better than Grok and the Chinese models, but worse than Google's and Anthropic's. They still cost a ton to run, because OpenAI offers them for free/at a loss.
However, these people are giving away their data, and Microsoft knows that data is going to be worthwhile. They just don't want to pay for the electricity for it.
Interesting side effect of this is that Google Cloud may now be the only hyperscaler that can resell all 3 of the big labs' models? Maybe I'm misinterpreting this, but that would be a notable development, and I don't see why Google would allow Gemini to be resold through any of the other cloud providers.
Might really increase the utility of those GCP credits.
Might not be good for Gemini long term if Anthropic and OpenAI can and will sell in every cloud provider they can find but businesses can only use Gemini via Google Cloud.
The AGI talk is shocking but not surprising to anyone looking at how bombastic Sam Altman's public statements are.
The circular economy section really is shocking- OpenAI committing to buying $250 Billion of Azure services, while MSFT's stake is clarified as $132 Billion in OpenAI. Same circular nonsense as NVIDIA and OpenAI passing the same hundred billion back and forth.
Microsoft Corp. will no longer pay revenue to OpenAI and said its partnership with the leading artificial intelligence firm will not be exclusive going forward.
What does this mean that Microsoft will no longer pay revenue to OpenAI? How did the original deal work?
It seems that the old deal was exclusivity to MSFT with revenue share, and now no exclusivity, no revenue share.
Bear in mind that MSFT have rights to OpenAI IP (as well as owning ~30% of them). The only reason they were giving revenue share was in return for exclusivity.
This is a really common way to structure exclusivity; we did the same thing whenever customers requested it (and we couldn’t get rid of it entirely). Charge for the exclusivity explicitly.
If they wanted named exclusivity rather than general exclusivity, we would charge a somewhat smaller amount for each competitor they wanted exclusivity from. They could give up exclusivity at any time.
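As a toy illustration of that pricing structure (every number invented): blanket exclusivity is one explicit flat line item, named exclusivity is a smaller per-competitor charge, and giving it up just drops the fee.

    def exclusivity_charge(flat_fee, named_rivals=None, per_rival_fee=0):
        # Blanket exclusivity: one explicit flat charge.
        # Named exclusivity: a smaller fee per competitor excluded.
        if named_rivals is None:
            return flat_fee
        return per_rival_fee * len(named_rivals)

    print(exclusivity_charge(1_000_000))                            # blanket
    print(exclusivity_charge(1_000_000, ["A", "B", "C"], 150_000))  # 3 named rivals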
That was precisely how we structured our deal with Azure, back in 2014-2016 or so.
Azure was the only non-OpenAI provider that was allowed to provide OpenAI models. The comparison here is with Anthropic whose models are on both GCP and AWS (and technically also Azure though I think that might just be billing passthrough to Anthropic).
I suppose they continue to host until the 2030/32 dates they have access through, but no longer share revenue when they use those models in their own products, like the bazillions of Copilots.
The original "AGI" agreement was always a bit suspect and open to wild interpretations.
I think this is good for OpenAI. They're no longer stuck with just Microsoft. It was an advantage that Anthropic could work with anyone they liked but OpenAI couldn't.
It also restricted Microsoft from "partnering" with anyone else. Wouldn't be surprised if we see more news like Amazon and Alphabet investing in Anthropic.
AFAICT they are just hedging their bets left and right still. Also feels like they are winning in the sense that despite pretty much all those products being roughly equivalent... they are still running on their cloud, Azure. So even though they seem unable to capture IP anymore, they are still managing to get paid for managing the infrastructure.
Yeah my bad, I was misremembering, it was about investing in others and pursuing its own "AGI" efforts. But even those conditions were updated over the last two years, hence the small investment in Anthropic last year.
I think it was a lot less restrictive, as far as I understood, the only limit was Microsoft not being allowed to launch competing Microsoft-developed LLMs.
> OpenAI has contracted to purchase an incremental $250B of Azure services, and Microsoft will no longer have a right of first refusal to be OpenAI’s compute provider.
Azure is effectively OpenAI's personal compute cluster at this scale.
That article doesn't give a timeframe, but most of these use 10 years as a placeholder. I would also imagine it's not a requirement for them to spend it evenly over the 10 years, so could be back-loaded.
OpenAI is a large customer, but this is not making Azure their personal cluster.
I wonder how this figure was settled. Is it based on consumer pricing? Can't Microsoft and OpenAI just make a number up, aside from a minimum to cover operating costs? When is the number just a marketing ploy to make it seem huge, important and inevitable (and too big to fail)?
It's unclear which elements of this new deal are binding versus promises with OpenAI characteristics. "Microsoft Corp. will publish fiscal year 2026 third-quarter financial results after the close of the market on Wednesday, April 29, 2026" [1]; I'd wait for that before jumping to conclusions.
Really interesting. Why would Microsoft have done this deal? I'm a bit lost. Sure they get to not pay a revenue share _to_ OpenAI but surely that's limited to just OpenAI products which is probably a rounding error? Losing exclusivity seems like a big issue for them?
Biggest upside of this is I expect OpenAI models to be available on Bedrock, which is huge for not having to go back to all your customers with data protection agreements.
Isn’t that an “API product”? I read this assuming the whole point of renegotiation was to let OpenAI sell raw inference via bedrock, but that still seems to be blocked except for selling to the US Government.
> OpenAI can now jointly develop some products with third parties. API products developed with third parties will be exclusive to Azure. Non-API products may be served on any cloud provider.
Yes. Microsoft was "considering legal action against its partner OpenAI and Amazon over a $50 billion deal that could violate its exclusive cloud agreement with the ChatGPT maker" [1].
> Why is it Altman is facing kill shots and Dario isn’t?
Altman peaked in the zeitgeist in 2023; Dario, much less prominently, in 2024 and now '26 [1]. I'd guess around this time next year, Dario will be as hated as Altman is today.
It’s an agreement between a public company and a highly scrutinized private company. Several of the provisions will change what happens in the marketplace, which everyone will see.
I imagine the thinking was that it’s better to just post it clearly than to have rumors and leaks and speculations that could hurt both companies (“should I risk using GCP for OpenAI models when it’s obviously against the MS / OpenAI agreement?”).
Apple, Alphabet, Amazon, NVIDIA, Samsung, Intel, Cisco, Pfizer, UnitedHealth, Procter & Gamble, Berkshire Hathaway, China Construction Bank, Wells Fargo, ...
[0] https://techcrunch.com/2024/12/26/microsoft-and-openai-have-...
"OpenAI has only achieved AGI when it develops AI systems that can generate at least $100 billion in profits."
Given that the definition of AGI is beyond meaningless, it is clear that the "I" in AGI stands for IPO.
[0] https://finance.yahoo.com/news/microsoft-openai-financial-de...
From: https://openai.com/charter/
I responded to the below quoted question, you dumb fuck. Can you figure out basic website navigation? Or is that too complex for you?
-----
'They redefined AGI to be an economic thing

Huh. Source? I mean, typical OpenAI bullshit, but would love to know how they defined it.'
Russian Invasion - Salami Tactics | Yes Prime Minister
https://www.youtube.com/watch?v=yg-UqIIvang
Do you argue in good faith?
There’s a difference between being too early vs being nonsense.
OP did not include this requirement in their post because doing so would make the claim trivially true.
Other people just call it "theft".
We already have several billion useless NGI's walking around just trying to keep themselves alive.
Are we sure adding more GI's is gonna help?
Grand delusion, perhaps.
Seems more like an incredibly embarrassing belief on his part than something I should be crediting.
Isn't this a tautology? We've de facto defined AGI as a "sufficiently complex LLM."
And how will you know AGI when you see it?
If you had presented GPT 5.5 to me 2 years ago, I would have called it AGI.
https://www.noemamag.com/artificial-general-intelligence-is-...
There is a reason so many scams happen with technology. It is too easy to fool people.
Just got an email from GitHub saying they'll be raising prices for Copilot.
"To keep up with the way you use Copilot, we're transitioning to usage-based billing, and we want to give you enough time to prepare."
Man, it was fun having my tokens subsidized by Microsoft. If the prices go up too much I guess I'll try DeepSeek again.
There’s no upper limit to their financial stupidity.
Valued at -- which I'd say is a reasonable distinction to make right about now.
How?
[1] https://www.wsj.com/tech/ai/openai-and-microsoft-tensions-ar...
And on top of that, OpenAI still has to pay Microsoft a share of their revenue made on AWS/Google/anywhere until 2030?
And Microsoft owns 27% of OpenAI, period?
That's a damn good deal for Microsoft. Likely the investment that will keep Microsoft's stock relevant for years.
I doubt it
https://news.ycombinator.com/item?id=47616242
[1] https://github.com/orgs/community/discussions/10539
They still run their own platform.
https://thenewstack.io/github-will-prioritize-migrating-to-a...
What was I looking at?
"We" in this sentence refers to both parties; "they" refers to OpenAI. Not a grammatical error.
Fair enough.
That's Kagi? Cool, I'll check it out more!
My impression is that many of these "investments" are structured IOUs for circular deals based on compute resources in exchange for LLM usage.
That's a flawed argument. Why wouldn't you want to hedge a risky bet, and one that's even quite highly correlated to Microsoft's own industry sector?
Mac: You're damn right. Thus creating the self-sustaining economy we've been looking for.
Dennis: That's right.
Mac: How much fresh cash did we make?
Dennis: Fresh cash! Uh, well, zero. Zero if you're talking about U.S. currency. People didn't really seem interested in spending any of that.
Mac: That's okay. So, uh, when they run out of the booze, they'll come back in and they'll have to buy more Paddy's Dollars. Keepin' it moving.
Dennis: Right. That is assuming, of course, that they will come back here and drink.
Mac: They will! They will because we'll re-distribute these to the Shanties. Thus ensuring them coming back in, keeping the money moving.
Dennis: Well, no, but if we just re-distribute these, people will continue to drink for free.
Mac: Okay...
Dennis: How does this work, Mac?
Mac: The money keeps moving in a circle.
Dennis: But we don't have any money. All we have is this. ... How does this work, dude!?
Mac: I don't know. I thought you knew.
Tried to delete this submission in favor of that one, but too late.
That might help fix some of the bugs in Teams... :)
https://blogs.microsoft.com/blog/2025/11/18/microsoft-nvidia...
https://azure.microsoft.com/en-us/blog/deepseek-r1-is-now-av...
https://ai.azure.com/
[1] https://news.microsoft.com/source/2026/04/08/microsoft-annou...
https://blogs.microsoft.com/blog/2026/04/27/the-next-phase-o...
This seems impossible.
[1] https://www.reuters.com/technology/microsoft-weighs-legal-ac...
They did not need to go so hard on the hype - Anthropic hasn’t in relative terms and is generating pretty comparable revenues at present.
OpenAI bet on consumers; Anthropic on enterprise. That will necessitate a louder marketing strategy for the former.
Why is it Altman is facing kill shots and Dario isn’t?
[1] https://trends.google.com/explore?q=altman%2C%20Dario&date=t...