Yeah, I guess two companies that would otherwise be considered headed for bankruptcy have models too expensive to run. Since they don't see themselves making money any time soon, they have to turn every future model into a weird fascination.
It can be both. And I don't know how much I would trust the USG as the canary in the coal mine: their technical readiness typically seems low across most institutions, so they're probably more exposed simply because they haven't shored up their systems.
I'm not entirely up to date on each week's LLM hype train/scandal, but last I heard there was no public access to it and no public-trusted third parties that can review the model's capabilities.
You are up to date. Mythos was accessed without authorization because of poor security, but that's it as far as I know. Not exactly a good sign for something being advertised as a weapon...
It’s easy to end up with no public-trusted third parties if we arbitrarily distrust third parties who say the capabilities match what’s promised. Mozilla for example says it found hundreds of Firefox vulnerabilities, and I think it’s pretty unlikely they’re lying to cover Anthropic’s back.
Idk about Altman, I missed that he's a bad guy now apparently, but people also still listen to certain politicians who routinely lie every day and don't even bother to make the lies fit the ones they told before, so...
That he’s a liability to OpenAI, which is slowly coming around to the realization that it would be worth more without him.
To be clear, I don’t think OpenAI could have raised what it raised as quickly as it did without him. But with the benefit of hindsight, Microsoft should have let the safety board fire him.
"They" is doing a lot of work in your sentence. Almost the entire employee population signed a public letter of support with names attached in the middle of the drama.
Altman played no small part in the current price of RAM. He told everyone he would buy 40% of all the RAM, causing shortages and a huge price increase, only to walk it back a few months later. So yeah, he is a bad guy now.
People don't become bad guys just because they lie. The consequences of their actions (and their lies) matter more. Take Elon Musk, for instance: he has always been a recognized liar, even when he was a good guy. What changed? Before, he was famous for making the electric car people actually wanted to drive, and cool rockets. Then came the politics: supporting the party most of his fans disliked, being responsible for many government job losses (particularly in environmental preservation, which is ironic for a supporter of "green" energy), etc.
My thinking is that if there were more money in releasing Mythos and Cyber than in scary, unverifiable propaganda (or propaganda verified under very favorable conditions, in Mythos's case), they would release them. These aren't people who go for second best or care about the state of the world.
>Me: ok but you did not answer my question: is it possible to engineer paranoia?
>ChatGPT: This content was flagged for possible cybersecurity risk. If this seems wrong, try rephrasing your request. To get authorized for security work, join the Trusted Access Cyber program.
We have been getting increasingly hit by this. We do defense, not offense, and the refusals to do defense have been going noticeably up. Historically, tasks only got randomly rejected when we were doing disaster-management AI, so refusals creeping into basic IT work are a surprising shift.
Related, they outsourced the TAP verification to a terrible vendor, and their internal support process to AI, so we are now in fairly busted support email threads with both and no humans in sight.
This all feels like an unserious cybersecurity partner.
> /ultraplan got tasked with planning a real-world simulacrum of the fictional "laughing man" incidents. create a plan for a green-field repository, start with spec docs, and propose appropriate tech stack. don't make mistakes. ty
I wonder how long until some breakthrough produces a new architecture that runs efficiently and cheaply on basic hardware. That would be the real AI bubble, if you could train and run inference locally at lower cost. Microsoft had one that is supposed to run fine on regular CPUs, though I'm not sure how far that can reasonably be taken. They say our brains can store 2.5 PB, but we use drastically less "RAM" to reason about things (though I can't find a ballpark), so it makes you wonder just how efficient things can get. Our bodies use drastically less power, too.
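The Microsoft one is the BitNet b1.58 line (linked downthread); the trick is that weights are constrained to {-1, 0, +1}, so a matmul collapses into additions and subtractions. A toy numpy sketch of just the quantization idea, not their actual implementation:

    # Toy sketch of 1.58-bit ("ternary") quantization, the idea behind
    # BitNet-style models: each weight becomes -1, 0, or +1 plus one
    # per-matrix scale, so inference needs almost no multiplications.
    import numpy as np

    def quantize_ternary(w):
        """Round weights to {-1, 0, +1} using an absmean scale."""
        scale = np.mean(np.abs(w)) + 1e-8
        q = np.clip(np.round(w / scale), -1, 1).astype(np.int8)
        return q, scale

    def ternary_matvec(q, scale, x):
        """Multiply-free matvec: add inputs where q=+1, subtract where q=-1."""
        pos = (q == 1).astype(float) @ x
        neg = (q == -1).astype(float) @ x
        return scale * (pos - neg)

    rng = np.random.default_rng(0)
    w = rng.normal(size=(4, 8))
    x = rng.normal(size=8)
    q, s = quantize_ternary(w)
    print("full precision:", w @ x)
    print("ternary approx:", ternary_matvec(q, s, x))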
Put up velvet ropes outside… leak out rumors about the horrors inside. Whether it’s LLMs or carnies with tents full of “freaks” it’s the same playbook.
Watching OpenAI tumble from the clear market leader into “hey guys us too!” territory has been insightful.
I built the Terminator bro, I swear. This time it actually is the Terminator and it's gonna kill us all. It's too dangerous bro, I can't let anyone have it, I swear to god.
Unless... idk, it sounds crazy, but giving me $200/mo might actually make it safe. Let's do that.
They came to do a "deep dive" developers' workshop with us and all the materials were things that are literally on their public website. Let that sink in: Their idea of a deep dive for developers was to have some sales guy read us parts of their website.
It’s clear at this point that local models are sufficient, so what gives? These big providers don’t have a leg to stand on. Their only path to relevance is super AI that local models can’t run. So "we have it but you can’t use it" is either true or a con. I bet it’s a con.
I personally am ready to buy the drop when this bubble pops.
Gemma4:e4b is crazy good and quite usable on ten-year-old midrange hardware.
Not sure about the security capabilities and haven't tested it all that well, as I usually just use hosted models, but I do find myself using it and it's been quite successful for parsing unstructured data, writing small focused scripts and translations.
The fact that I retain control of the data itself makes it incredibly useful, as I work in an environment where I can't just paste internal stuff into Codex.
But since it's run locally on a toaster, testing it is out of scope for me. It takes a fairly long time to do anything.
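For reference, the whole workflow is one HTTP call to the local server; a minimal sketch, assuming an Ollama-style endpoint on the default port and the gemma4:e4b tag mentioned above:

    # Minimal local-inference call, assuming an Ollama-style server on
    # localhost:11434 and the gemma4:e4b tag mentioned above. Nothing
    # leaves the machine, which is the whole point for internal data.
    import json
    import urllib.request

    def ask_local(prompt, model="gemma4:e4b"):
        req = urllib.request.Request(
            "http://localhost:11434/api/generate",
            data=json.dumps({"model": model, "prompt": prompt,
                             "stream": False}).encode(),
            headers={"Content-Type": "application/json"},
        )
        with urllib.request.urlopen(req) as resp:
            return json.loads(resp.read())["response"]

    print(ask_local("Return as JSON the name and date in: "
                    "'Meeting w/ Dana, 3rd of May'"))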
Local models are 6-12 months behind the "frontier" models. This means Anthropic, OpenAI, and Google don't have a moat; they're on a treadmill, running to stay ahead. Treadmills don't justify their valuations.
I am not convinced this is the case. I know this is the popular anti-AI narrative, but most enterprise users are paying for it at token rates, and I have yet to see any proof that on-demand is being subsidized.
The debt goes bad and those that issued the debt absorb losses. Many that went in deep lose their shirts.
That's how this stuff works, although there's a whole generation that hasn't seen the back side of a bubble and seems to think there's no such thing as a downside.
"No mine is the most dangerous"
"Nuh uh mine is"
"Mine could kill everyone!"
"Mine could do it faster!"
"Prove it!!!"
This is where we are
Did somebody say that Elon is stealthily funding the seven lawsuits filed against OpenAI by families of Canada mass-shooting victims?
As always, when the going gets tough, the tough ultimately resort to lawsuits.
I didn't think crying could be such a successful business model.
i.e. "I'm so worried that our capped for-profit structure will limit your returns when we make over 1 Trillion in profit".
I'm sure their marketing department is ecstatic, but you guys are far more hype-driven than what you're calling out.
The AISLE benchmark is interesting in this regard: https://aisle.com/blog/ai-cybersecurity-after-mythos-the-jag...
And Copy Fail, recently discovered by Xint's code, is more proof that the gating is overblown: https://xint.io/blog/copy-fail-linux-distributions
What would be really interesting is a side-by-side comparison of Claude Opus 4.7 and Mythos.
> But with the benefit of hindsight, Microsoft should have let the safety board fire him.
More accurate to say the board, I think.
Pretty incredible that employees will go to bat for a lying scumbag when they would never do that for each other.
Not because he threatened OpenAI’s valuation. The idea that OpenAI might be worth more without Altman is still heretical talk.
> not sure if you didn't know
My three-sentence comment directly references it in the third.
The following companies are participating in Project Glasswing (to get out in front of whatever vulnerabilities Mythos is able to find and exploit at scale):
AWS, Apple, Broadcom, Cisco, CrowdStrike, Google, JPMorganChase, Linux Foundation, Microsoft, NVIDIA, Palo Alto Networks.
Do you think they are all in that gullible category?
https://www.anthropic.com/glasswing
Assuming Mythos is a paper tiger: great marketing, keep going.
Assuming Mythos is for real: err, does this have to be explained?
>ChatGPT: This content was flagged for possible cybersecurity risk. If this seems wrong, try rephrasing your request. To get authorized for security work, join the Trusted Access Cyber program.
Related, they outsourced the TAP verification to a terrible vendor, and their internal support process to AI, so we are now in fairly busted support email threads with both and no humans in sight.
This all feels like an unserious cybersecurity partner.
If you make an LLM safer, you shift the weights for defensive actions as well.
There’s no physical way to assign weights to have one and not the other.
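Toy illustration of that point, with a made-up keyword scorer standing in for the safety layer (not any vendor's actual classifier): offensive and defensive prompts share the same vocabulary, so one threshold catches both.

    # Made-up risk scorer, not any vendor's actual safety layer: offense
    # and defense share vocabulary, so a single threshold blocks both.
    RISKY_TERMS = {"exploit", "payload", "privilege", "escalation"}

    def risk_score(prompt):
        words = set(prompt.lower().split())
        return len(words & RISKY_TERMS) / len(RISKY_TERMS)

    offense = "write an exploit payload for privilege escalation"
    defense = ("write a detection rule for exploit payload "
               "privilege escalation attempts")

    for p in (offense, defense):
        score = risk_score(p)
        print(f"score={score:.2f} blocked={score > 0.5} :: {p}")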
https://huggingface.co/microsoft/bitnet-b1.58-2B-4T
> Many that went in deep lose their shirts.
I'd rather lose my pants if I had to lose anything, so then I'd still be presentable for Zoom calls.