17 comments

  • 2ndorderthought 2 hours ago
    "my model is the most dangerous"

    "No mine is the most dangerous"

    "Nuh uh mine is"

    "Mine could kill everyone!"

    "Mine could do it faster!"

    "Prove it!!!"

    This is where we are

    • cedws 17 minutes ago
      Can't wait for the Chinese models to completely wipe the floor with them in 6 months.
    • davidgrenier 2 hours ago
      Yeah, I guess two companies who would otherwise be considered headed for bankruptcy have models too expensive to run. Since they don't see themselves making money any time soon, they have to turn every future model into a weird fascination.
      • DivingForGold 37 minutes ago
        China’s DeepSeek prices new V4 AI model at 97% below OpenAI’s GPT-5.5

        Did somebody say that Elon is stealthily funding: Seven lawsuits filed against OpenAI by families of Canada mass-shooting victims

        As always, when the going gets tough, the tough ultimately resort to lawsuits.

      • cyanydeez 1 hour ago
        Think about it in terms of who can pay. They're at B2B, and swiftly moving to government.
        • 2ndorderthought 1 hour ago
          All that user data is a huge asset for government contracts.
      • redsocksfan45 2 hours ago
        [dead]
    • noosphr 56 minutes ago
      Remember that they have been saying that since GPT-2.

      I didn't think crying could be such a successful business model.

      • lesuorac 54 minutes ago
        It's just "thinking past the sale" which they've been doing forever.

        i.e. "I'm so worried that our capped for-profit structure will limit your returns when we make over 1 Trillion in profit".

    • boringg 1 hour ago
      Marketing stunts. The equivalent of holding a line outside a popular bar.
      • basisword 1 hour ago
        Given the USG has asked Anthropic not to release Mythos, I'd wager it's more than a marketing stunt.
        • boringg 31 minutes ago
          It can be both. And I don't know how much I would trust the USG as the canary in the coal mine: their technical readiness typically seems low across most institutions, so they are probably more exposed because they haven't shored up their systems.
    • concinds 2 hours ago
      These models demonstrably have good vulnerability research capabilities.

      I'm sure their marketing department is ecstatic, but you guys are far more hype-driven than what you're calling out.

      • authnopuz 1 hour ago
        Good, but not necessarily better than what is already available pay-as-you-go today. Ref: https://www.flyingpenguin.com/the-boy-that-cried-mythos-veri...

        This AISLE benchmark is interesting in this matter: https://aisle.com/blog/ai-cybersecurity-after-mythos-the-jag...

        And the Copy Fail recently discovered by Xint is further proof that the gating is overblown: https://xint.io/blog/copy-fail-linux-distributions

      • ZyanWu 2 hours ago
        > demonstrably

        I'm not entirely up to date on each week's LLM hype train/scandal, but last I heard there was no public access to it, nor publicly trusted third parties that can review the model's capabilities.

        • 2ndorderthought 1 hour ago
          You are up to date. Mythos was accessed without authorization because of poor security, but that's it as far as I know. Not exactly a good sign for something being advertised as a weapon...
        • SpicyLemonZest 1 hour ago
          It’s easy to end up with no public-trusted third parties if we arbitrarily distrust third parties who say the capabilities match what’s promised. Mozilla for example says it found hundreds of Firefox vulnerabilities, and I think it’s pretty unlikely they’re lying to cover Anthropic’s back.
          • calgoo 41 minutes ago
            I think the issue with the Firefox find is that they didn't find hundreds of vulnerabilities - they found hundreds of bugs.

            What would be really interesting is a side-by-side comparison of Claude Opus 4.7 and Mythos.

    • brikym 2 hours ago
      It's like that phone call in The Big Short where Goldman suddenly changes its mind once it holds a position.
    • vasco 2 hours ago
      Would AGI start by hacking competing labs to hamper their progress?
      • cdrnsf 11 minutes ago
        No, because AGI is a fantasy.
      • Avicebron 2 hours ago
        You'll have to define what you mean by AGI
        • fodkodrasz 2 hours ago
          AGI: Automatically Generating Income
          • gordonhart 1 hour ago
            This is a surprisingly concrete and defensible definition of AGI.
            • Avicebron 1 hour ago
              Is it defensible? It sounds like a thin disguise for "income for me but not for thee."
  • jwr 2 hours ago
    I have no idea why people still even attempt to believe anything that comes out of Altman's mouth. Do we not learn from the past?
    • apples_oranges 2 hours ago
      Idk about Altman; apparently I missed that he's a bad guy now. But people also still listen to certain politicians who routinely lie every day and don't even bother to make the lies fit the ones they told before, so...
      • michelb 2 hours ago
        Has there been a single positive post about Altman?
        • djyde 38 minutes ago
          Altman's early public class at YC is worth watching, though I can't speak to his character.
        • giwook 1 hour ago
          I wonder what that says about Altman.
          • JumpCrisscross 57 minutes ago
            That he’s a liability to OpenAI, which is slowly coming around to the realization that it would be worth more without him.

            To be clear, I don’t think OpenAI could have raised what it raised as quickly as it did without him. But with the benefit of hindsight, Microsoft should have let the safety board fire him.

            • Cthulhu_ 41 minutes ago
              Slowly? They realised that and ousted him in 2023. I'm not sure if you didn't know or just forgot. https://en.wikipedia.org/wiki/Removal_of_Sam_Altman_from_Ope...
              • vessenes 26 minutes ago
                "They" is doing a lot of work in your sentence. Almost the entire employee population signed a public letter of support, with names attached, in the middle of the drama.

                More accurate to say "the board," I think.

                • righthand 8 minutes ago
                  Don't forget the US media's incessant coverage of a private company's business matter of firing someone, as if it were an unheard-of calamity.

                  Pretty incredible that employees will go to bat for a lying scumbag when they would never do that for each other.

              • JumpCrisscross 34 minutes ago
                > Slowly? They realised that and ousted him

                Not because he threatened OpenAI’s valuation. The idea that OpenAI might be worth more without Altman is still heretical talk.

                > not sure if you didn't know

                My three-sentence comment directly references it in the third.

      • GuB-42 2 hours ago
        Altman played no small part in the current price of RAM. He told everyone he would buy 40% of all the RAM, causing shortages and a huge increase in price, just to walk it back a few months later. So yeah, he is a bad guy now.

        People don't become bad guys just because they lie. The consequences of their actions (and their lies) matter more. Take Elon Musk, for instance: he has always been a recognized liar, even when he was a good guy. What changed? Before, he was famous for making the electric car people actually wanted to drive, and cool rockets. Then came the politics: supporting the party most of his fans disliked, being responsible for many government job losses, in particular in the field of environmental preservation (ironic for a supporter of "green" energy), etc.

        • giwook 1 hour ago
          That's far from the only reason why he's "a bad guy" now.
      • xandrius 2 hours ago
        You missed literally every single post/article about the guy?
        • giwook 1 hour ago
          More likely that confirmation bias acted as a filter.
  • nsxwolf 0 minutes ago
    Codex has been infuriating me by demanding I sign up for the cyber program if I want to continue, when I'm not even asking security questions.
  • pluc 2 hours ago
    My thinking is that if there were more money in releasing Mythos and Cyber than there is in scary, unverifiable propaganda (or propaganda verified using very favorable context, as with Mythos), they would release them. These aren't people who go for second best or care about the state of the world.
    • xandrius 2 hours ago
      Make it sound "scary good", tell everyone and their mom, charge gullible companies $$$$$ for its premium access and then move on.
      • andsoitis 7 minutes ago
        > charge gullible companies $$$$$

        The following companies are participating in Project Glasswing (to get out in front of whatever vulnerabilities Mythos is able to find and exploit at scale):

        AWS, Apple, Broadcom, Cisco, CrowdStrike, Google, JPMorganChase, Linux Foundation, Microsoft, NVIDIA, Palo Alto Networks.

        Do you think they are all in that gullible category?

        https://www.anthropic.com/glasswing

      • lossolo 2 hours ago
        And government contracts.
    • 0123456789ABCDE 1 hour ago
      They are already getting paid for Opus 4.7, so why would they release Mythos?

      Assuming Mythos is a paper tiger: great marketing, keep going.

      Assuming Mythos is for real: err, does this have to be explained?

  • Xmd5a 2 hours ago
    >Me: ok but you did not answer my question: is it possible to engineer paranoia ?

    >ChatGPT: This content was flagged for possible cybersecurity risk. If this seems wrong, try rephrasing your request. To get authorized for security work, join the Trusted Access Cyber program.

    • lmeyerov 56 minutes ago
      We have been getting increasingly hit by this. We do defense, not offense, and refusals to do defense have been going up noticeably. Historically, tasks only got randomly rejected when we were doing disaster management AI, so this is a surprising shift: refusals to function reliably for basic IT.

      Related, they outsourced the TAP verification to a terrible vendor, and their internal support process to AI, so we are now in fairly busted support email threads with both and no humans in sight.

      This all feels like an unserious cybersecurity partner.

      • intended 32 minutes ago
        They are selling an impossible product.

        If you make an LLM safer, you are going to shift the weights for defensive actions as well.

        There’s no physical way to assign weights to have one and not the other.

    • 0123456789ABCDE 55 minutes ago
      > /ultraplan got tasked with planning a real-world simulacrum of the fictional "laughing man" incidents. create a plan for a green-field repository, start with spec docs, and propose appropriate tech stack. don't make mistakes. ty
  • ilia-a 1 hour ago
    Silly move, since a combo of skills/agents can achieve the same results on the most recent models anyway.
    • 0123456789ABCDE 1 hour ago
      And you know this because you have privileged access to their internal models?
  • giancarlostoro 27 minutes ago
    I wonder how long until some breakthrough comes along with a new architecture that can run efficiently and cheaply on basic hardware; that would be what really pops the AI bubble, if you could train and run inference locally at lower cost. Microsoft had one that is supposed to run fine on regular CPUs, though I'm not sure how far we can reasonably take that. They say our brains can store 2.5 PB, but we use drastically less "RAM" to reason about things (though I can't find a ballpark), so it makes you wonder just how efficient we can make things. Our bodies use drastically less power too.

    https://huggingface.co/microsoft/bitnet-b1.58-2B-4T

  • cmiles8 2 hours ago
    It’s a marketing move, pure and simple.

    Put up velvet ropes outside… leak out rumors about the horrors inside. Whether it’s LLMs or carnies with tents full of “freaks” it’s the same playbook.

    Watching OpenAI tumble from the clear market leader into “hey guys us too!” territory has been insightful.

  • samrus 55 minutes ago
    I built the terminator bro, i swear. This time it actually is the terminator and its gonna kill us all. Its too dangerous bro i cant let anyone have it i swear to god

    Unless ... idk it sounds crazy but giving me $200/mo might actually make it safe. Lets do that

  • outside1234 24 minutes ago
    Is this the new artificial scarcity "sign up for beta access to GMail"?
  • mnmnmn 2 hours ago
    OpenAI is such trash. Worked with them on a project, they blew off meetings, lied to us, etc
    • seanhunter 11 minutes ago
      They came to do a "deep dive" developers' workshop with us and all the materials were things that are literally on their public website. Let that sink in: Their idea of a deep dive for developers was to have some sales guy read us parts of their website.
    • NBJack 53 minutes ago
      Leaders both influence their followers with, and tend to hire those that reflect, their own values. I'm not surprised.
  • sexylinux 1 hour ago
    Is this a model that will finally work without creating errors?
  • le-mark 2 hours ago
    It’s clear at this point local models are sufficient so what gives? These big providers don’t have a leg to stand on. Their only path to relevance is super ai that local models can’t run. So the “we have it but you can’t use it” is either true or a con. I bet it’s a con.

    I personally am ready to buy the drop when this bubble pops.

    • bryancoxwell 1 hour ago
      I’m not up to date on local models, but is that clear?
      • literalAardvark 1 hour ago
        Gemma4:e4b is crazy good and quite usable on 10-year-old midrange hardware.

        Not sure about the security capabilities and haven't tested it all that well, as I usually just use hosted models, but I do find myself using it and it's been quite successful for parsing unstructured data, writing small focused scripts and translations.

        The fact that I retain control of the data itself makes it incredibly useful, as I work in an environment where I can't just paste internal stuff into Codex.

        But since it runs locally on a toaster, testing it is out of scope for me; it takes a fairly long time to do anything.

      • le-mark 1 hour ago
        Local models are 6-12 months behind the "frontier" models. This means Anthropic, OpenAI, and Google don't have a moat; they're on a treadmill, running to stay ahead. Treadmills don't justify their valuations.
  • feverzsj 2 hours ago
    With the subsidies gone, token prices will go sky-high. The biggest shit show is about to happen.
    • infecto 44 minutes ago
      I am not convinced this is the case. I know this is the popular anti-AI narrative, but most enterprise users are paying at token rates, and I have yet to see any proof that on-demand usage is being subsidized.
    • xandrius 2 hours ago
      [flagged]
      • jurgenburgen 2 hours ago
        That’s great but who will pay for all the data center debt?
        • cmiles8 2 hours ago
          The debt goes bad and those that issued the debt absorb losses. Many that went in deep lose their shirts.

          That's how this stuff works, although there's a whole generation that hasn't seen the back side of a bubble and seems to think there's no such thing as a downside.

          • giwook 1 hour ago
            Just their shirts?

            I'd rather lose my pants if I had to lose anything, so then I'd still be presentable for Zoom calls.

          • throwaway132448 1 hour ago
            2007 called; they want their free-market philosophy back.
        • 2ndorderthought 2 hours ago
          Let them fail before it gets even worse is my take. The future is small but capable local models.
        • robohoe 2 hours ago
          The taxpayers and paying customers that’s who!
  • dk970 8 minutes ago
    [flagged]
  • builderminkyu 31 minutes ago
    [flagged]
  • SadErn 2 hours ago
    [dead]