So Opus 4.7 is measurably worse at long-context retrieval compared to Opus 4.6. Opus 4.6 scores 91.9% and Opus 4.7 scores 59.2%. At least they're transparent about the model degradation. They traded long-context retrieval for better software engineering and math scores.
To be honest, I think it's just a more honest score of what Opus 4.6 actually was. Once contexts get sufficiently large, Opus develops pretty bad short-term memory loss.
Agreed, I appreciate the transparency (and Anthropic isn't normally very transparent). It's also great to know because I will change how I approach long contexts knowing it struggles more with them.
Could this be because they've found the 1M context uneconomical (i.e. it costs too much to serve, or it burns through users' quota too quickly and causes complaints), and so they're no longer targeting it as a goal?
The benchmark GP mentioned is measuring at 128k-256k context (there's another at 524k-1024k, where 4.6 scored 78.3% and 4.7 scored 32.2%).
The longer the context, the worse the performance; there isn't really a qualitative step change in capability (if there is one, imo it happens at around 8k-16k tokens, much sooner than is relevant for multi-turn coding tasks - see e.g. this old benchmark https://github.com/adobe-research/NoLiMa).
> Chemical and biological weapons threat model 2 (CB-2): Novel chemical/biological weapons production capabilities. A model has CB-2 capabilities if it has the ability to significantly help threat actors (for example, moderately resourced expert-backed teams) create/obtain and deploy chemical and/or biological weapons with potential for catastrophic damages far beyond those of past catastrophes such as COVID-19.
That's an interesting choice of benchmark for measuring the risk of "Chemical and biological weapons"
I never understand these critiques. If something is useful and you’re selling it, does that mean any technical document describing its usefulness becomes marketing?
I guess maybe, but then do those documents lose value as technical documents? Not necessarily at all, so I don’t see the point. How are you supposed to describe a useful technical thing to users?
Have they effectively communicated what a 20x or 10x Claude subscription actually means? And with Claude 4.7 increasing usage by 1.35x, does that mean a 20x plan is now really a 13x plan (no token increase on the subscription) or a 27x plan (more tokens given to compensate for the higher compute cost) relative to Claude Opus 4.6?
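Back-of-the-envelope, assuming the 1.35x figure is a flat per-token cost multiplier (Anthropic hasn't said exactly how it applies, so this is just one way to read it):

```python
plan_multiplier = 20       # "20x" plan, expressed in Pro allotments
usage_increase = 1.35      # reported usage increase for Opus 4.7

# Reading 1: the token allotment stays fixed, so each task eats more of it.
effective_if_fixed = plan_multiplier / usage_increase

# Reading 2: the allotment is scaled up to compensate for the extra compute.
raw_tokens_if_scaled = plan_multiplier * usage_increase

print(f"fixed allotment:  ~{effective_if_fixed:.1f}x worth of Opus 4.6-era work")
print(f"scaled allotment: ~{raw_tokens_if_scaled:.1f}x in raw tokens")
```

The straight division lands closer to 15x than 13x, but the exact ratio depends on what the 1.35x actually measures.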
They have communicated it as 5x is 5 x Pro, and 20x is 20 x Pro (I haven’t looked lately so not sure if that’s changed).
They have also repeatedly communicated that the base unit (Pro allotment) is subject to change and does change often.
As far as I can tell, that implies there is no guarantee that those subscriptions get some specific number of tokens per unit of time. It’s not a claim they make.
Dumb question, but why are chemical weapons always addressed as a risk with LLMs? Is the idea that they contain instructions for making chemical weapons, or that they would guide someone through the process?
Would there not already be websites that contain that information? How is an LLM different, I guess, from some sort of anarchist cookbook thing?
Both. There's the risk of them instructing a user on how to produce a known formulation (the Anarchist Cookbook scenario, as you say), which is irritating but not that problematic.
The bigger issue is that they are potentially capable of coming up with novel formulations capable of causing harm, and guiding someone through the process. That is, consider a world in which someone with malicious desires has access to a model as capable at chemistry / biology as Mythos is at offensive cybersecurity.
This is obviously limited by the fact that the models don't operate in the physical world, but there's plenty of written material out there.
"Smart people have economic opportunities that align them away from being evil"
For some definition of evil, some of the time, ok. But as economic opportunities compound (looking at the behavior of the ultra-rich), it seems there's at least strong correlation in the other direction, if not full-on "root of all evil" causation.
Sure, but that’s not “slaughter a stadium of people with drones” evil or “poison the water supply” evil or “take out unprotected electrical substations” evil.
So much infrastructure is very soft because the evil people aren’t smart enough to conceive of or conduct an attack.
Good. This is how we will force the world to reckon with the isolated, the disgruntled, and "lone wolf" terrorist. Real "sigma males" actually exist, and when they decide "society has to pay" we are all worse off for it. If Ted Kaczynski (quintessential example of a real actual sigma) had been in his prime operating right now, he'd have mail-bombed NeurIPS and ICLR already. I'm not cool with being in crowds of AI professionals right now for physical security reasons given the extreme anti-AI sentiment that exists from nearly everyone outside of the valley: https://jonready.com/blog/posts/everyone-in-seattle-hates-ai...
That's not quite true. Take a look at all the billionaires destroying society. Being evil is the surest way to get rich. In fact it's the only way to amass that level of capital: there's no ethical billionaire.
This feels like a wild overgeneralization. People can become rich without resorting to evil methods, especially now with global markets and software. Case in point: Minecraft was wildly successful, and now Notch is a billionaire.
LLMs can tell you exactly how to acquire the materials and how to manufacture them. They might even come up with novel formulations that rely on substances that are easier to get. There might be information about this stuff online, but LLMs are much better than random idiots at adapting that information to their actual situation.
On top of LLMs reducing the cost/difficulty, the other reason biological and chemical weapons are such a worry is their asymmetric character — they are much much easier and cheaper to produce and deploy than they are to defend against.
> The technical error that caused accidental chain-of-thought supervision in some prior models (including Mythos Preview) was also present during the training of Claude Opus 4.7, affecting 7.8% of episodes.
Haiku not getting an update is becoming telling. I suspect we are reaching a point where the low end models are cannibalizing high end and that isn't going to stop. How will these companies make money in a few years when even the smallest models are amazing?
It seems to be a rule that older models are more expensive than newer ones. The low-end models have a higher cost per token and worse output. I wonder if the move is to just have one model and quantize it if you hit compute constraints.
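For what it's worth, "quantize" here just means storing the weights at lower precision. A minimal sketch of symmetric per-tensor int8 weight quantization in NumPy (real deployments use fancier per-channel or per-group schemes):

```python
import numpy as np

def quantize_int8(w: np.ndarray):
    """Symmetric per-tensor int8 quantization: w ~= scale * q."""
    scale = np.abs(w).max() / 127.0
    q = np.clip(np.round(w / scale), -127, 127).astype(np.int8)
    return q, scale

def dequantize(q: np.ndarray, scale: float) -> np.ndarray:
    return q.astype(np.float32) * scale

w = np.random.randn(4096, 4096).astype(np.float32)  # one fp32 weight matrix
q, s = quantize_int8(w)

print("bytes fp32:", w.nbytes, "bytes int8:", q.nbytes)        # 4x smaller
print("max abs error:", np.abs(w - dequantize(q, s)).max())    # bounded by ~scale/2
```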
> It seems to be a rule that older models are more expensive than newer ones.
It isn't. Gemini has gotten more expensive with each release. Anthropic has stayed pretty similar over time, no? When is the last time OpenAI dropped API prices? OpenAI started very high because they were the first, so there was a ton of low hanging fruit and there was much room to drop.
The Gemma models are at this point. A 31B model that can fit on a consumer card is as good as Sonnet 4.5. I haven't put it through as much on the coding front or tool calling as I have the Claude or GPT models, but for text processing it is on par with the frontier models.
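Rough napkin math on why a ~31B model fits on a consumer card, assuming 4-bit weight quantization (KV cache and activations add a few GB on top):

```python
params = 31e9  # ~31B parameters

for fmt, bytes_per_param in {"fp16": 2, "int8": 1, "int4": 0.5}.items():
    print(f"{fmt}: ~{params * bytes_per_param / 1e9:.0f} GB of weights")

# fp16: ~62 GB, int8: ~31 GB, int4: ~16 GB
# -> only the 4-bit variant fits on a 24 GB consumer card, and the KV cache
#    still has to share whatever VRAM is left, which limits context length.
```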
I think one area I find hard to get around is context length. Everything self-hosted is so limited in context length that it's marginal to use. Additionally, I think that the tools (like claude code) are clearly in the training mix for Anthropic's models, so they seem to get a boost over other models pushed into that environment. That being said, open source and local inference is -really- good and only going to get better. There is no doubt that the current frontier biz model is not sustainable.
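For context, the window for a self-hosted model is something you set explicitly at load time and pay for in VRAM. A minimal llama-cpp-python sketch (the GGUF path and n_ctx value are placeholders, not a recommendation):

```python
from llama_cpp import Llama  # pip install llama-cpp-python

llm = Llama(
    model_path="./models/your-local-model-Q4_K_M.gguf",  # placeholder path
    n_ctx=32768,       # context window; KV-cache memory grows linearly with this
    n_gpu_layers=-1,   # offload every layer to the GPU if it fits
)

out = llm("Explain what this function does:\n...", max_tokens=256)
print(out["choices"][0]["text"])
```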
Model Welfare?
Are they serious about this? Or is it just more hype?
I really don't trust anything this company says anymore.
"We have a model that is too dangerous to release" is like me saying that I have a billion dollars in gold that nobody is allowed to see but I expect to be able to borrow against it.
1. Smart people have economic opportunities that align them away from being evil
2. People who are evil tend not to be smart.
We're breaking both of these assumptions.
>_>
This comparison shows them neck and neck https://benchlm.ai/compare/claude-sonnet-4-5-vs-gemma-4-31b
As does this one https://llm-stats.com/models/compare/claude-sonnet-4-6-vs-ge...
And the pelican benchmark even shows them pretty close https://simonwillison.net/2026/Apr/2/gemma-4/ https://simonwillison.net/2025/Sep/29/claude-sonnet-4-5/
Also, this isn't a fringe statement; you can see that most people who have done an evaluation agree with me.