CopyFail was not disclosed to distro developers?

(openwall.com)

328 points | by ori_b 7 hours ago

14 comments

  • xeeeeeeeeeeenu 5 hours ago
    For context, the author of the linked post, Sam James, is a Gentoo developer.

    Anyway, this is a disaster. It was extremely irresponsible to share the exploit with the world before the distributions shipped the fix. Who knows how many shared hosting providers were hacked with this.

    It's also worrying that it seems there's no communication between the kernel security team and distribution maintainers. One would hope that the former would notify the latter, but apparently it's the responsibility of whoever finds the vulnerability.

    • john_strinlai 3 hours ago
      i have no problem with disclosing a vulnerability 30 days after it's patched in the thing you reported it to. (in fact, for those unaware, this is the same policy that google's project zero uses: "90+30" https://projectzero.google/vulnerability-disclosure-policy.h...)

      the real problem is:

      >It's also worrying that it seems there's no communication between the kernel security team and distribution maintainers.

      the reporter should not be the one responsible for reporting separately to every single downstream of the thing they found a vuln in.

      what should be happening, as you allude to, is a communication channel between the kernel security team and distribution maintainers. they are in a much better position to coordinate and communicate with the maintainers than random reporters are.

      the minute the patch landed in the kernel, a notification should have gone out from the kernel team to a curated list of distro security folk that communicated the importance of the patch, and that the public disclosure would be in 30 days.

      • fresh_broccoli 24 minutes ago
        >the reporter should not be the one responsible for reporting separately to every single downstream of the thing they found a vuln in.

        Not "separately to every single downstream", there is the "linux-distros" mailing list for disclosures: https://oss-security.openwall.org/wiki/mailing-lists/distros

        This random blogpost from 2022 serves as a proof that disclosing kernel vulnerabilities to the distros list is a well-known practice: https://sam4k.com/a-dummys-guide-to-disclosing-linux-kernel-...

        I agree it's a shame that the process isn't more streamlined and the kernel developers aren't forwarding the reports to the distros list.

        • tptacek 22 minutes ago
          It is literally not the vulnerability researcher's problem to solve or address this.
      • staticassertion 2 hours ago
        > they are in a much better position to coordinate and communicate with the maintainers than random reporters are.

        They openly refuse to do this and have been given authority by MITRE to work against any such process.

        • john_strinlai 2 hours ago
          right, which is why it is confusing that the animosity is aimed at the reporters rather than the kernel security team.
      • ori_b 2 hours ago
        If the maintainers were unresponsive, sure -- but it seems slightly hard to buy that a responsible reporter trying to make a big splash and a good impression wouldn't first check "did this make it out to the distros?" before making sysadmins' days real shitty, even if technically they could point fingers at other parties. At which point, if they're paying any attention at all to what they reported, they may have realized that a mistake was made.
        • john_strinlai 2 hours ago
          it's an industry standard disclosure process. 90 days after reporting, or 30 days after the patch lands, the vuln is disclosed.

          the linux kernel team is in a 10000% better position to communicate to and coordinate their downstreams. it seems completely backwards to me to suggest that the reporter should be responsible for figuring out every possible downstream and opening up separate reports to each of them.

          the kernel team should have a process/channel to say "this is important! disclosure is in 30 days" that is received by distro security teams. because this is not the first or last time the kernel will have a local privilege escalation. hoping that every reporter, forever in the future, will take the onus on themselves is a recipe for disappointment.

          • bragr 2 hours ago
            The problem is that if you make too big of a deal about a particular patch, then someone just reverse engineers the vuln from the fix and your responsible disclosure period doesn't exist anymore.

            Gentoo has to take some blame too for not keeping all the kernels they maintain patched in a timely way.

            • john_strinlai 1 hour ago
              you minimize this with the curated contact list.

              the baddies are looking at every patch anyways.

          • ori_b 2 hours ago
            Yes, it's just incompetence from everyone involved, not malice. The company making the disclosure doesn't actually care, and the kernel processes are ineffective.
            • tptacek 2 hours ago
              No, it's incompetence from everyone involved except the company making the disclosure, which followed the existing norms despite the fact that those norms are not actually binding (contrary to what people downthread seem to believe).
              • ori_b 1 hour ago
                Really? It seems very odd to not check in on the status of the fixes, even if it's technically possible to pass the blame to other people.

                Even if the only purpose of looking at the status is to make yourself look good in marketing materials, it's surprising that it didn't happen.

                • 9question1 51 minutes ago
                  `it's technically possible to pass the blame to other people` presupposes that the blame belongs to the reporter unless effort is taken to "shift" it. This is just an inaccurate worldview, as many people have pointed out clearly in this discussion. If there's a vulnerability in software, the blame lies with the people who wrote and maintain the software, not someone who finds and discloses a vulnerability. The person who should `check in on the status of the fixes` is the person who owns the thing being fixed, which is very much the kernel and distro maintainers and not the security researcher. It is you who are willfully shifting blame to an innocent party.
                • Joker_vD 1 hour ago
                  One of the reasons this unavoidable deadline was invented is that the alternative is that one company (or all of them) can simply decide to ignore the vuln report, and then the vulnerability will stay forever undisclosed and forever out there in the wild. And prisoner's dilemma suggests that most companies would choose "do nothing" in this scenario: they don't have to do anything, and if the vuln stays undisclosed, it probably won't be exploited anyhow. Win-win!
                  • ori_b 1 hour ago
                    I'm confused. Can you explain how this applies to the current situation, where no vuln reports were submitted to the groups responsible for distributing patches?
                    • john_strinlai 1 hour ago
                      >where no vuln reports were submitted to the groups responsible for distributing patches?

                      the vulnerability report was submitted to the kernel security team and appropriate kernel maintainers. those are the people responsible for patching the kernel, which they did 30 days ago.

                      • ori_b 59 minutes ago
                        I see; may the people who are responsible for the infrastructure you depend on be less concerned about shifting blame than you are.
                        • john_strinlai 56 minutes ago
                          imagine you use a dependency in your code. like left-pad. and some vulnerability is found in left-pad.

                          is the reporter of that vulnerability responsible for finding and submitting a vulnerability report to every single piece of software that uses left-pad? all ~millions of them?

                          or do they submit the report to left-pad, get them to fix it at the source, and trust that the people relying on left-pad will update their software like they should when they see a security-relevant update is available?

      • Denvercoder9 1 hour ago
        Two things can be true simultaneously: the Linux kernel ecosystem should have done better at communicating this to their downstreams, and publicly sharing the exploit was irresponsible.

        It is not the responsibility of the initial reporter to communicate to distributions, but the fact that those responsible failed to do that, doesn't give everybody else a free pass.

        • da_chicken 1 hour ago
          No, this was already timed disclosure. This is very common and widely accepted. 90+30 is what Google Project Zero uses, for example. The security researcher has met their ethical requirements already. This is entirely on the kernel's security team for failure to communicate downstream. That is their responsibility.

          The thing is, malicious actors are already monitoring most major projects and doing either source analysis or binary analysis to figure out if changes were made to patch a vulnerability. So, as soon as you actually patch, you really need to disclose, because all you're doing by not disclosing the vulnerability is handing the bad actors a free go. The black hats already know. You need to tell the white hats, too, so they can patch.

          • Denvercoder9 50 minutes ago
            I'm not advocating for delaying the disclosure at all; my point is that if you see your initial disclosure to the kernel didn't go anywhere, being responsible means putting in a little extra effort to ensure the fix is picked up before you disclose.
            • da_chicken 33 minutes ago
              "Didn't go anywhere"? The kernel devs patched it! They patched it weeks ago! The kernel security team needs to communicate security problems in their own releases, because that is where the distros are already looking.

              Requiring the security researcher to do it is insane. Should a security researcher that identifies a vulnerability in electron.js need to identify every possible project using electron.js to communicate with them the vulnerability exists? No. That's absurd.

              • tptacek 21 minutes ago
                In the airless void of a message board thread, of course they should. What does it cost a commenter to demand that?
        • john_strinlai 1 hour ago
          >publicly sharing the exploit was irresponsible

          they did it in the established industry standard way that probably every single security researcher you can think of follows (for good reason, i would add).

          whoever did the marketing on "responsible disclosure" was a genius.

          tptacek says it much better than me: ""Responsible disclosure" is an Orwellian term cooked up between @Stake and Microsoft and other large vendors to coerce researchers into synchronizing with vendor release schedules."

          • Denvercoder9 1 hour ago
            In my world, responsibility is not just checking a box of following industry practice. Responsibility, as Wikipedia puts it on their social responsibility page, is working together with others for the benefit of the community. And yes, sometimes that's a bit larger burden than would ideally be the case. It's an imperfect world, after all -- and let's not forget the disclosure as it happened also placed a larger burden than ideal on people scrambling to patch.

            And it's not as if I'm asking for a lot of effort. One mail to the security team of a popular distro: "hey, we have found this LPE that we'll release with exploit next week; it's patched upstream already in this commit, but you don't seem to have picked it up" would likely have been enough.

            • da_chicken 1 hour ago
              No.

              The problem is that vendors and developers have repeatedly shown that if you give them an inch, they take a mile. Look at exactly what happened with BlueHammer this month. The security researcher went full disclosure because Microsoft didn't listen to their reports.

              Disclosure is vital. It's essential. Because the truth is, if a security researcher has found it, it's extremely likely that it's already been found by either black hats or by state actors. Ignorance is not actually protection from exploitation.

              The security researcher also has a responsibility to the general public that is still actively using vulnerable software in ignorance. They need to be protected from vendor and developer negligence as well as from exploits. And the only way to protect yourself from an exploit that hasn't yet been patched is to know that it is there.

              • Denvercoder9 1 hour ago
                The situation with e.g. BlueHammer is fundamentally different: there, the only party that could act on it (Microsoft) ignored them. In this case, the parties that could act on it weren't notified at all.

                I'm also not proposing delaying the disclosure to the general public at all. They already waited 30 days with that; that's fine. Just look a bit further than your checklist of only contacting upstream, and send a mail to the distributions a week or two before disclosure if they haven't picked it up.

                • tptacek 9 minutes ago
                  Downstream vulnerability disclosure is a negotiation between the downstreams and the upstreams. It is not the job of a vulnerability researcher to map this out perfectly (or at all).
              • throw0101a 32 minutes ago
                > The problem is that vendors and developers have repeatedly shown that if you give them an inch, they take a mile.

                [citation needed]

                Is there any evidence that Linux distros (specifically) act in this way? Or a particular distro?

                • john_strinlai 28 minutes ago
                  >[citation needed]

                  there are ~3 decades of citations you can look at, spread out over every security mailing list, security conference, etc. that you can think of.

                  one decent start is https://projectzero.google/vulnerability-disclosure-faq.html...

                  "Prior to Project Zero our researchers had tried a number of different disclosure policies, such as coordinated vulnerability disclosure. [...] We used this model of disclosure for over a decade, and the results weren’t particularly compelling. Many fixes took over six months to be released, while some of our vulnerability reports went unfixed entirely! We were optimistic that vendors could do better, but we weren’t seeing the improvements to internal triage, patch development, testing, and release processes that we knew would provide the most benefit to users.

                  [...]

                  While every vulnerability disclosure policy has certain pros and cons, Project Zero has concluded that a 90-day disclosure deadline policy is currently the best option available for user security. Based on our experiences with using this policy for multiple years across thousands of vulnerability reports, we can say that we’re very satisfied with the results.

                  [...]

                  For example, we observed a 40% faster response time from one software vendor when comparing bugs reported against the same target over a 7-year period, while another software vendor doubled the regularity of their security updates in response to our policy."

                  >Linux distros (specifically) act in this way

                  carving out special exceptions based on nebulous criteria is a bad idea. 90+30 is what has been settled on, and mostly works.

                • da_chicken 24 minutes ago
                  Really?

                  Because a situation where the development team fails to appreciate the severity of a security vulnerability, and where the established procedure requires the researcher and not the kernel team to communicate with downstream users, is already a major failure of process. Security is not just patching the vulnerability, and it seems that the Linux kernel developers or the Linux kernel security team do not understand that.

                  This is the result of that failure.

                  If this were any other software, we'd be here with pitchforks and torches. The researcher gave the developers timed disclosure, and even waited until after the developers had patched the issue. And... it's still a problem.

        • x4132 49 minutes ago
          so what? we should never disclose anything? this will only result in companies suppressing disclosure and leaving vulnerabilities unpatched.
    • zamalek 5 hours ago
      The disclosure was more about marketing than security. From the disclosure page:

      > Is your software AI-era safe?

      > Copy Fail was surfaced by Xint Code after about an hour of scan time against the Linux crypto/ subsystem. [...]

      > [Try Xint Code]

      More chaos makes their product seem even more attractive.

      • tptacek 2 hours ago
        I worked at the industry's first commercial vulnerability lab (Secure Networks) in the mid-90s, and many of my friends at the time founded X-Force. Commercial vulnerability research has always been about marketing: marketing pays for the vulnerability research. That doesn't make it any less prosocial.
      • esseph 5 hours ago
        Your advertising for them on HN would help them too, I bet.
        • jasonmp85 5 hours ago
          Does it? Now that I see their name again in this context they're blacklisted for life.
          • john_strinlai 3 hours ago
            hope you are also blacklisting google's project zero, and practically every other major player in the vulnerability reporting space, as all use roughly the same bog standard 90+30 policy.

            this was a failure of the kernel security team, and their stance on communicating security issues with their downstreams.

          • eaf7e281 4 hours ago
            Same. They do become famous, but not in a wholly positive way.
            • esseph 1 hour ago
              I used to think the context of the fame mattered. At least in the US, it does not.

              Hell, Crowdstrike is still purchased.

          • selectively 4 hours ago
            Researchers are under no obligation to engage in coordinated disclosure and are free to sell 0day for profit. Just fyi. Be glad it was disclosed at all. Be glad a patch was available prior to release.
            • lambda 4 hours ago
              If they want to be seen as responsible rather than opportunistic, then yeah, they should do a proper coordinated disclosure.

              Sure, they have no legal obligation to disclose, but we all also have no legal obligation to buy their services. Blacklisting bad actors like this is the right move to discourage this kind of behavior.

              • john_strinlai 1 hour ago
                >they should do a proper coordinated disclosure.

                they did a proper coordinated disclosure, following the industry standard 90+30 process. that is why the exploit dropped 30 days after the patch landed.

                the kernel team should have communicated with their downstream about the importance of the patch. that is the kernel security team's responsibility -- and they are much better positioned to do that than crossing your fingers and hoping every reporter will contact every distro every single time there is a vulnerability.

                there are very good reasons disclosure works this way, backed by a couple of decades of debate about it.

              • selectively 4 hours ago
                Who cares about how you are seen when you are selling 0day for big bucks? The bad actor makes more money than the 'legitimate' one without breaking any law. Punishing someone who didn't alert distros despite a patch being available encourages the company to simply find flaws and sell them for profit - it pays more to begin with.
                • _yttw 4 hours ago
                  If they want to take advantage of disclosure for marketing, they're either going to need to accept the norms around responsible disclosure, or they're going to need to accept how shirking those norms will come off. That's life in society. Sometimes it's annoying and sometimes it doesn't feel rational, but these norms have been negotiated throughout the history of our industry and are the way they are for reasons good and bad.

                  I just don't see the point in complaining about how shirking the norms of your industry will make you look irresponsible. I don't really care that they could have decided to sell the vulnerability instead. It isn't material.

                  • tptacek 4 hours ago
                    It is absolutely not true that viable commercial vulnerability labs need to "accept the norms around responsible disclosure". There are no such norms. "Responsible disclosure" is an Orwellian term cooked up between @Stake and Microsoft and other large vendors to coerce researchers into synchronizing with vendor release schedules. It was fantastically successful at that, and it's worth pushing back on at every opportunity.

                    Tavis Ormandy dropped Zenbleed right onto Twitter. He's doing fine. You can blacklist him if you want; I imagine he's not going to notice.

                    • SCHiM 4 hours ago
                      Microsoft's policy is: "if you contact us with a vulnerability, you automatically agree to the terms of our responsible disclosure policy", which includes waiting 30 days after a patch was created, and says nothing about how long that process takes.

                      There is actually no way to give them a friendly heads up, and then do your own thing. The only way not to be bound is by not sending them any notification at all...

                      • prmoustache 1 hour ago
                        Since no contract is signed, this is just pure fantasy on your part.
                      • leni536 3 hours ago
                        I wonder if "if you contact us... you automatically agree" stands in court. That's just ridiculous.
                        • tptacek 3 hours ago
                          Reader, it does not.
                    • _yttw 4 hours ago
                      You're right, they don't need to. They have an alternative, to accept what people say or think about them in response. That's what I said.
                  • selectively 4 hours ago
                    Those norms do not exist. Those are people asking companies to do stuff to benefit the person complaining for free, and many companies will not do that.
                    • _yttw 4 hours ago
                      It seems to me you're unaware of them, but there are strong norms around disclosure. They've been discussed for decades. It is the expectation that vendors would be notified in a scenario like this.
                      • selectively 4 hours ago
                        No, there are users who want those to be norms. Qualified researchers happily sell substantive vulns to people who pay (Governments/Cellebrite and companies like that) enough to quell any complaint.
                        • _yttw 4 hours ago
                          Which is again, irrelevant to the question of how disclosure works and what expectations there are around it because that is not disclosure and is not what was being discussed.
                • dirasieb 4 hours ago
                  it’s called building and preserving a high trust society, you wouldn’t understand
                  • DaSHacka 43 minutes ago
                    How does someone being incentivized to sell a vulnerability to a private organization over disclosing it publicly preserve a "high trust society"? Do you mean in the context of a "deceptively high-trust society"?

                    Those private actors aren't planning to sit around and hold onto these exploits they've hoarded forevermore, they're obviously paying for them so they can one day use them.

            • lrvick 2 hours ago
              Unfortunately this is correct. As a security researcher I set millions in profit on fire by reporting vulns to projects that offer no bounties instead of selling to the highest bidder. I keep doing it because it is the right thing to do, but I would not blame someone who needs to feed their family for making a different choice.

              We must get public funds to reward ethical disclosure of big impact vulns like this.

              • selectively 55 minutes ago
                Harder and harder to get good policy like what you describe when tech-adjacent people loudly argue for criminal penalties for anything other than coordinated disclosure :(
            • bigbadfeline 2 hours ago
              > Researchers are under no obligation to engage in coordinated disclosure and are free to sell 0day for profit. Just fyi. Be glad it was disclosed at all.

              I'm so glad these so called "researchers" aren't totally evil, I'm so grateful they're only half evil, give them a lollipop.

              Whatever, the way they disclosed it isn't much different from no disclosure at all - the exploit would have been identified in the wild and fixed soon thereafter.

              "Researchers"...

              • john_strinlai 2 hours ago
                the way they disclosed it is the industry standard. think of the biggest security research teams you know (e.g. google), and they follow the same process.

                non-security people always seem to get up in arms about it, but there are very good reasons why the industry has landed on the process it has, which has been hashed out over a few decades.

              • selectively 1 hour ago
                There are two options:

                1. Status quo. Researchers are free to disclose to a vendor, free to sell vulns to legitimate companies, free to do full disclosure if they want. This situation benefits security. Researchers are able to pay their bills while also doing meaningful research into OSS projects that are unable to fund the kind of security audit they need. Harm reduction, of sorts.

                2. Everyone is a bad actor. No one is going to do this work for free/for a bounty. Horrible flaws will be found and shared with ransomware gangs and the like. 0day will sell for a percentage of the ransom winnings. Researchers will live like kings, everyone else will suffer.

                Which do you prefer?

            • jojomodding 3 hours ago
              > are free to sell 0day for profit.

              This is not true in many jurisdictions.

              • lrvick 2 hours ago
                Anyone can sell a vuln in any jurisdiction and never be caught. Let's not pretend the law is actually worth a damn here.

                We need an anonymous bounty system.

              • selectively 42 minutes ago
                Are you claiming that if I sell 0day through a broker to the national Government of a given jurisdiction that the national Government of that jurisdiction is going to criminally penalize me?

                If so, that's a bit naive. In the actual world, that buyer wants to buy more stuff from me, not penalize me.

            • kelnos 4 hours ago
              I'm pretty sure they have a legal obligation in most jurisdictions not to sell 0days for profit.

              And they absolutely have a moral obligation to do things in a way to minimize damage and impact to other people's systems. (I'm not saying "responsible disclosure" is the correct way to do that, but hoarding vulnerabilities and exploits and selling them to the highest bidder certainly isn't.)

              This is how society needs to work.

              • tptacek 6 minutes ago
                It is categorically false that there's a legal obligation not to sell vulnerabilities. There's an obligation not to knowingly sell them directly to ongoing criminal enterprises. That's it. Plenty of people make fuckloads of money selling vulnerabilities for exploitation rather than repair.
              • lrvick 2 hours ago
                Let me make you aware of zerodium. A broker anyone can sell vulns to, that sells to unspecified buyers you do not need to know about.
                • selectively 59 minutes ago
                  (The buyers are the NSA, the IDF, Cellebrite, NSO and its successor corporation and that kind of thing. Depends on what you are offering)

                  You'll learn who the buyers are if you routinely have the really good stuff to sell! If you are offering iOS zero click on a semi-regular basis, the buyer is going to want to try to deal with you directly and preferably offer you a more regular form of employment, if you are interested. Some national governments may offer certain benefits to you, depending on your situation.

                  All depends on what you have to offer. If you were able to offer this https://arstechnica.com/security/2025/09/microsofts-entra-id... or something of that magnitude, a lot of problems in your life would just go away. The buyers would all be Five Eyes and the intelligence gain of having that kind of access even briefly is priceless.

                  In a more Western-centric context, imagine if you had a flaw like that, same 'no logs are generated' and 'every single customer account is accessible' but the impacted vendor was Alibaba Cloud. The researcher would get to name their price. That's the real world, that's the world we share. We shouldn't be blind to that.

              • mschuster91 4 hours ago
                > I'm pretty sure they have a legal obligation in most jurisdictions not to sell 0days for profit.

                it wasn't sold for profit, it was openly disclosed.

                > And they absolutely have a moral obligation to do things in a way to minimize damage and impact to other people's systems.

                All that "responsible disclosure" does is keep people from demanding better.

            • ux266478 3 hours ago
              mmmmmm, no it would seem like they are absolutely under a social obligation to not do that.
            • estimator7292 3 hours ago
              [flagged]
            • grayhatter 3 hours ago
              > Researchers are under no obligation to engage in coordinated disclosure and are free to sell 0day for profit.

              Uh... no? If you mean legally, some people might, depending on jurisdiction. But also, ethically? yes, researchers are ethically obligated to disclose responsibly.

              > Just fyi.

              ...

              > Be glad it was disclosed at all. Be glad a patch was available prior to release.

              I am glad that a patch was available. Equally I can be glad that the linux community is strong enough to respond quickly, while also being angry that this person behaves unethically.

              Likewise, when people in my industry behave poorly or unethically, I'm now the person ethically obligated to both point it out and condemn it. Not to become an apologist demanding I should be happy watching bad things happen, when much of the fallout could have been prevented with a bit less incompetence and ignorance.

            • eschaton 4 hours ago
              They should have a legal obligation to engage in coordinated/responsible disclosure, and it should be a crime to sell or disclose a 0day to anyone other than a state-designated security organization or the vendor/provider.

              If it won’t be handled through criminal law then it’ll be handled through civil litigation: Anyone who was exploited as a result of this disclosure should sue the discloser for contributing to the damage they’ve suffered.

          • CSSer 4 hours ago
            Yes, exactly. Name and shame.
          • true_religion 4 hours ago
            Same. I did not know who they were, but now they have been named and shamed. Not every publicity is good.
    • Lammy 4 hours ago
      > It was extremely irresponsible

      As a user and admin I disagree. Makes one appreciate what a masterful bit of lexical-engineering “Responsible” Disclosure is, kinda like “Secure” (from me, not forme) Boot — “Responsible” Disclosure is 100% about reputation-management for the various corporation/foundation middleman entities sitting between me and my computer.

      Those groups don't care that my individual computer is vulnerable but about nobody being able to say “RHEL is vulnerable” or “Ubuntu is vulnerable”. The vulnerability exists for me either way, and I'd rather have the chance to know about it and minimize risk than to be surprised by the fix and hope nothing bad happened in that meantime.

      Immediate public disclosure is the only choice that isn't irresponsible as far as I'm concerned.

      • BeetleB 4 hours ago
        So if I found a vulnerability that lets hackers withdraw all the money in your account without a trail on where the money went, you'd be fine with them disclosing it to the public at the same time as the bank learns about it?

        Even when there is no known use of the attack in the wild (other than the security researcher's)?

        > The vulnerability exists for me either way, and I'd rather have the chance to know about it and minimize risk

        By the time you hear about it, the money could be gone because 1000 hackers heard about it from the researcher before you did.

        > than to be surprised by the fix and hope nothing bad happened in that meantime.

        Hope is not a good strategy here.

        • Lammy 4 hours ago
          Yep, I'd be fine with that. My bank has insurance, and my money would be returned.
          • Dylan16807 3 hours ago
            Seeing your other (rightfully flagged) reply I want to tell you as a neutral party that yes this is missing the point of the analogy. You're basically saying "I would simply hit the brakes on the trolley". It's not that they're so hubristic they think it's impossible to legitimately disagree with their argument, it's that mentioning insurance is sidestepping their argument entirely. You're not addressing the general idea of getting hacked and suffering the consequences of the hack.
          • xorcist 2 hours ago
            Just socialize losses and all is well.

            What could possibly go wrong?

            • yesbut 8 minutes ago
              that is basically how all large companies behave anyway. socialize the losses (bailouts, layoffs, negative economic impacts in the communities they reside, etc.) and privatize the gains.
          • JamesStuff 3 hours ago
            The bank's cost of insurance goes up, the cost of running an account goes up; how do we correct for this? Offer worse accounts to customers...
            • Lammy 2 hours ago
              Why do you assume banks would keep on doing the same old thing but paying more because of it? The cost would make them learn not to design systems where something like this hypothetical scenario was possible.
          • ryan_n 4 hours ago
            You're missing the point (not sure if you're just being dense on purpose...). If your bank would just return the money then it's not a good analogy. If someone gains root access to your machine, presumably they can do damage that can't be undone. In other words, to continue the bank analogy, they would take all your money and you would have no way of getting it back. Presumably, you would not be ok with this. And even if, for some weird reason, you were ok with that, 99.9% of all other people would not be ok with it.
            • Lammy 3 hours ago
              [flagged]
            • stonogo 3 hours ago
              Respectfully, I don't think they're missing the point. Banking, as an institution, has its flaws, but deposit insurance isn't one of them. These vulnerabilities exist whether or not they follow specific disclosure rituals, and systems should be deployed with defense-in-depth so that one privilege-escalation flaw is a recoverable event. Inventing tortured counterfactual analogies doesn't change the basic thrust of the poster's point: the account is insured, so getting drained by an attacker is not a fatal problem. Of course people should still take steps to prevent that from happening, but that doesn't mean prevention is (or should be) the only cure.
              • ryan_n 3 hours ago
                My point specifically is that some damage isn't recoverable if there's a vulnerability that gives someone root access. This makes the bank analogy inadequate in the first place. I'm not trying to argue about whether deposit insurance is good or bad. Saying they would get the money back assumes the damage done to one's machine would be recoverable, which may not be the case.
              • Modified3019 3 hours ago
                My understanding is that FDIC deposit insurance only protects against bank failure, not fraudulent activity. Getting your account drained by an attacker may or may not be covered by a patchwork of other laws at various levels, and you could very well end up shit out of luck.
          • estimator7292 3 hours ago
            "I, personally am not affected, and I don't care about anyone else so therefore there are no consequences"
      • eschaton 4 hours ago
        “The choice that maximizes potential damage isn’t irresponsible, because it means I can mitigate my own systems immediately.”

        That’s what you’re saying here.

        • tptacek 4 hours ago
          They're literally just restating the argument for full disclosure security. This is one of the oldest debates in information security.
          • 0x0 4 hours ago
            The disclosure doesn't appear very "full". Looks like this was slipped into mainline linux among dozens of other mostly-irrelevant "CVEs" with nobody highlighting the fact that it is in fact dirty-cow-on-steroids.

            https://x.com/spendergrsec/status/2049566830771970483

            https://lore.kernel.org/linux-cve-announce/2026042214-CVE-20...

            Or is everyone expected to upgrade and reboot every 48 hours for all eternity and just deal with potential regressions all the time?

            I think this reflects poorly on the original reporters. If you have a weaponized 700-byte universal local root exploit script ready to go, perhaps you should coordinate with major distros for patches to be available before unleashing it on the world. No matter how "veteran" you are.

            • tptacek 3 hours ago
              Um, yes, everyone is expected to upgrade and reboot on a moment's notice. No policy or norm you come up with will change that.

              (This bug does not technically require a reboot to mitigate).

              • judemelancon 2 hours ago
                I think I must misunderstand. Are you saying that you upgrade and reboot every production system that you administer to apply each commit to the kernel (branch it's using) essentially immediately? That doesn't make sense to me for a few reasons, but I struggle to find a different reading that applies "upgrade and reboot on a moment's notice" to the "slipped into mainline linux" scenario. Kindly help me to do so.
                • tptacek 1 hour ago
                  No: your posture with respect to having to cycle servers is a super complicated subject and you address it both with process and with architecture (for instance: you can be blasé about things like CopyFail if you don't allow multitenant shared-kernel in your design in the first place). But no matter what process and design you have, if you're hosting sensitive workloads, you always have to be in a position where you can metabolize having to cycle your servers.

                  It's a category error to talk about a disclosure event like this as something that would destabilize someone's fleet operations. The Linux kernel is fallible. So is the x64 architecture. You already have to be ready to lock things down and reboot (or mitigate) at a moment's notice.

                  Remember: whatever else grumpy sysadmins have to say about this, Xint are the good guys. Contrast them with the bad guys, who have vulnerabilities just as bad as CopyFail, but aren't disclosing them at all --- you only find out about them when it's discovered they're actively being exploited. There's no patch at all. There isn't even a characterization of how they work, so that you could quickly see what to seccomp. That's the actual threat environment serious Linux shops operate in.

                  LPEs are not rare.

                  • judemelancon 1 hour ago
                    Oh, I thought you meant "everyone" in a sense including actual human persons and the devices on their home network.
                  • 0x0 1 hour ago
                    I find it curious to call someone dropping a weaponized root exploit before major distros or even LTS kernel git branches have patches ready "good guys". This could have been handled with much more grace.
                    • tptacek 50 minutes ago
                      Again: I made the actual distinction between bad guys and good guys clear. Good guys don't become bad guys simply because kernel security is an inconvenience to you.
        • akerl_ 3 hours ago
          What the heck is up with people today.

          Using quotes around something where you’re actually doing a strawman paraphrase of another commenter you disagree with is bad form.

      • tomxor 2 hours ago
        > Immediate public disclosure is the only choice that isn't irresponsible as far as I'm concerned.

        No, it's really not.

        High severity vulnerabilities are responsibly handled by quietly neutralising them with subtle patches that do not reveal the vulnerability, waiting for those patches to distribute. Then patching or removing the root cause of the vulnerability (at which point opportunists will start to notice), and finally publicly disclosing it when there are already good mitigations in place.

        Example: Spectre/Meltdown mitigations.

        I've been asked to use this approach myself when reaching out to maintainers. Sometimes it's possible to directly fix the vulnerability as a "side effect" by making a legitimate adjacent change.

      • efortis 3 hours ago
        With immediate disclosure the provider can decide to shut down while it is fixed. Or to notify users and make it their decision. Or to be prepared with a diversified infra and switch over to a non-vulnerable path, e.g., the BSDs are not affected by CopyFail.
      • notsound 4 hours ago
        Those groups care about whether millions of computers are vulnerable, likely including your computer. If "immediate public disclosure" was done in all cases every vuln would be exploited and patches would be much lower quality. Shortening the disclosure timeline might be a good idea, 90 days is starting to feel long.
        • Lammy 4 hours ago
          Millions of computers are still vulnerable. Not-knowing about it doesn't mean the vuln isn't there :p
      • pphysch 4 hours ago
        The Venn diagram of mainstream distros and individual Linux users is virtually a circle.

        Ubuntu/RHEL is vulnerable and so are most Linux users by extension.

    • tptacek 4 hours ago
      Without taking a position on the disclosure mechanics: any hosting provider hacked with this was already playing to lose. It is not OK to run competing untrusted tenant workloads under a single shared kernel. Kernel LPEs are not rare. This was a particularly simple and portable one, but the underlying raw capability is a CNE commodity.
      • jcalvinowens 3 hours ago
        > Kernel LPEs are not rare. This was a particularly simple and portable one, but the underlying raw capability is a CNE commodity.

        I absolutely 100% agree with this and I'm glad to see somebody saying it. Any system that is one LPE away from being compromised is already insecure.

    • lifis 4 hours ago
      The Linux kernel is not usable as a security boundary, so anyone who wants to do "shared hosting" and not be hacked needs to use something else, like gVisor or firecracker VMs

      The only important system that uses it as a security boundary is Android, and there it is mitigated by the fact that APKs need user approval, plus strict SELinux and seccomp policies and the GrapheneOS hardening; in this case the mitigations succeeded (https://discuss.grapheneos.org/d/35110-grapheneos-is-protect...)
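      Coming back to the gVisor option mentioned above: on the Docker side it is just a runtime registration. A sketch, with the caveat that the path assumes a standard runsc install at /usr/local/bin/runsc, and that we write a local copy of the config fragment here rather than touching the live /etc/docker/daemon.json:

      ```shell
      # Sketch: registering gVisor's runsc as a Docker runtime.
      # In production this fragment is merged into /etc/docker/daemon.json
      # (root required); here we write a local copy for illustration.
      cat > daemon-fragment.json <<'EOF'
      {
        "runtimes": {
          "runsc": {
            "path": "/usr/local/bin/runsc"
          }
        }
      }
      EOF
      # Containers started with this runtime run against gVisor's
      # user-space kernel rather than the shared host kernel, e.g.:
      #   docker run --rm --runtime=runsc alpine uname -a
      cat daemon-fragment.json
      ```

      The point of the design: a host-kernel LPE like CopyFail can't be reached from inside the sandboxed workload, because syscalls are serviced by gVisor's own kernel.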

      • dawnerd 4 hours ago
        A LOT of websites are tenants on WHM/cPanel hosts. Not to mention how many agencies use it for their clients' WordPress sites.
      • watermelon0 4 hours ago
        I'm quite sure there are many application hosting providers which rely on container runtime such as runC (default runtime of containerd/Docker), and a shared kernel between users.
        • staticassertion 2 hours ago
          In a just world, those companies would be held legally accountable for negligent practices. The Linux kernel upstream has made it clear for decades that security is a dirty word.

          LPEs on Linux are obscenely commonplace.

    • shimman 5 hours ago
      Expecting people to do the right thing is a fundamental issue here. Why would you ever expect all vulnerabilities to be disclosed privately? There's very little actual incentive to do this.

      I'm honestly unaware of what systems could be put in place to prevent this but expecting people to always do the right thing is fantasy level thinking. I mean I bet the disclosers thought they were doing the right thing, hence why it's a bad thing to rely on.

      edit: spelling/grammar.

      • dwedge 5 hours ago
        When the exploit is an advertisement for an exploit detection company, not doing the right thing is a bad look
        • dgellow 5 hours ago
          The worst thing would be to exploit or sell it for profit. Instead of that, publicizing the exploit is closer to neutral–good in my books, that did trigger a really quick reaction from the different actors to patch their kernels and systems
          • ori_b 5 hours ago
            Imagine how much quicker the distros would have reacted if they were given a heads up a month ago. But, sure, I guess kudos to this company for not being actively criminal, and merely bumblingly incompetent and overly eager to get their marketing pitch out the door.
            • x4132 43 minutes ago
              to which distros? how do you ensure fairness? Do you report this to the maintainer of Red Star OS (north korea)?

              The kernel security team was given the heads up a month ago. At that point it is their decision.

      • egonschiele 5 hours ago
        Why don't all these distro maintainers add their own back doors, and mine crypto off our machines without our knowledge? Surely, there is some legal fine print they can add that would let them do that. There is very little incentive for them to maintain these systems, given how thankless and underpaid the work is.
      • holowoodman 5 hours ago
        I can accept (and welcome) disclosure before there are patches.

        But publishing a working exploit together with the disclosure before patches are available is really really irresponsible, maybe even criminal.

        And no, the proposed mitigations don't help with half of the distributions out there...

        • staticassertion 2 hours ago
          The patch was available. Upstream just doesn't communicate vulnerabilities because they have a personal dispute with distros about how to handle patching.
        • SoftTalker 5 hours ago
          AIUI the exploit was fairly low-effort once you knew the vulnerability. So publishing one probably didn't change the landscape much.
        • akerl_ 5 hours ago
          > maybe even criminal

          What’s your theory here? What crime?

          • holowoodman 3 hours ago
            Exploits are sold and used as weapons, sometimes even weapons of war. Which in many places is criminal, except under very restrictive circumstances.

            Also, all kinds of aiding and abetting.

            • akerl_ 2 hours ago
              What does that have to do with this comment thread?

              Copying from the comment I was replying to:

              > But publishing a working exploit together with the disclosure before patches are available is really really irresponsible, maybe even criminal

          • michaelmrose 5 hours ago
            If it's not a crime I see no reason not to work with partner nations to build responsible disclosure into a legal framework everywhere because it pretty obviously should be.
            • akerl_ 4 hours ago
              If you wanted to somehow make coordinated disclosure into a legal framework, that would be an interesting and complex project.

              But it’s not the law anywhere I’m aware of today, and I’d not support it becoming a law.

            • jodrellblank 4 hours ago
              You know companies are allowed to pay people to find vulns, and pay people bug bounties?

              Instead of that, you’d rather make the law compel free individuals to limit their speech, or to hand over their work to big companies privately, so big companies can save money?

              That doesn’t sound like a nice future, if it’s even enforceable at all.

        • wang_li 5 hours ago
          There is an alternative mitigation available which blocks the affected function calls when the vulnerable code is not built as a kernel module.
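          For the complementary case, where the affected code is built as a loadable module, the standard mitigation is a modprobe blacklist. A sketch only: "vuln_module" is a placeholder, not the actual CopyFail module name, and the file is written locally here rather than to /etc/modprobe.d/ (which needs root):

          ```shell
          # Hypothetical module name; the real one depends on the CopyFail
          # details and your kernel config. In production this file belongs
          # in /etc/modprobe.d/.
          cat > disable-vuln.conf <<'EOF'
          # Prevent on-demand auto-loading of the module:
          blacklist vuln_module
          # Make an explicit "modprobe vuln_module" fail as well:
          install vuln_module /bin/false
          EOF
          cat disable-vuln.conf
          ```

          Note that a blacklist alone only stops auto-loading; the "install ... /bin/false" line is what defeats explicit load requests too.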
        • semiquaver 5 hours ago
          Patches were available for nearly a month.
          • ori_b 5 hours ago
            Basic care would involve making sure the patches had made it into the wild before ending the embargo, and nagging the relevant parties if not.

            Edit: As of this writing, most distros, including Red Hat, Fedora, and Debian Stable, do not have patches available in their package repos, though they're being actively worked on.

            • sgjohnson 5 hours ago
              Not true: if there's any evidence of the exploit being used in the wild, it's much more responsible to release immediately.

              Considering that the patches have been available for a while, someone surely reversed what they were for and was actually exploiting this in the wild.

              In the age of AI, I’d argue that “responsible disclosure” is dead. Arguably even in closed source projects. Just ask Claude to diff the previous version against the current one and see whether anything fixed in there could have had security implications.

              We’re not there yet, but very soon the only way to responsibly disclose a vulnerability will be immediately.

              • ori_b 5 hours ago
                But they didn't release immediately -- they waited a month, but forgot to tell the distros, and forgot to check whether waiting a month had actually led to distros picking up the patches and shipping them.
            • semiquaver 5 hours ago
              “Made it into the wild?” Patches landed a month ago. Should they also wait until my linksys router from 2018 has a patch ready?
              • ori_b 5 hours ago
                Patches are still in the process of landing in most major distros as of the time of this writing. Most users are not able to get an update through their distro's packaging mechanisms.
              • SoftTalker 5 hours ago
                It's a local vulnerability at least. How many people do you let log in to your router?

                With the way linux is used these days, I'd guess the number of systems with untrusted local users is pretty limited. Even with shared hosting, you generally have root in your VM or container anyway. Unless this enables an escape from that?

                Still, there's the risk that people who run "curl | bash" without care could get bitten, but usually it's "curl | sudo bash" anyway...

                • sgbeal 5 hours ago
                  > Even with shared hosting, you generally have root in your VM or container

                  Lots of shared hosters don't use VMs or containers. It's some arbitrary number of people logging in to a shared system, each one with a home directory under /home/THE_USER_NAME. I've had several such hosters over the years (thankfully not right now, though).

                • sjpb 3 hours ago
                  > With the way linux is used these days, I'd guess the number of systems with untrusted local users is pretty limited

                  Things like HPC clusters are multiuser & don't entirely trust their users. If they did we wouldn't need users/groups/permissions etc in the first place.

                  • cozzyd 1 hour ago
                    Yes. Not even just HPC clusters; shared login servers are pretty common in academia. I manage several in our lab. We mostly trust the users against malice, but not so much against incompetence. A malicious vscode plugin would run rampant in this space.

                    And then there are users running claude-cli and friends who may just find it convenient to use a local root exploit to remove obstacles.

                • dist-epoch 4 hours ago
                  With this exploit it's trivial to jump from one container to another neighbor container. I've tried it and succeeded.

                  So containers don't protect you, only a VM.

                  • SoftTalker 4 hours ago
                    So anyone pulling and running a malicious Docker image jeopardizes the host? That would be bad...
                    • ori_b 2 hours ago
                      ...no shit? Why do you think people care about this issue?
                • michaelmrose 4 hours ago
                  Local root is part of the path to escaping
            • staticassertion 2 hours ago
              That's mostly on Greg, a bit on the author.
            • GrayShade 5 hours ago
              Fedora is patched.
          • em-bee 5 hours ago
            only for versions 6.19.12 & 6.18.22. older versions (which are used in distributions) are not ready yet.
      • baggy_trough 5 hours ago
        Why wouldn't the linux security team notify the main linux distributions?
        • staticassertion 2 hours ago
          Greg and Linus do not believe in the entire concept of "vulnerabilities" in the Linux kernel and do not believe in the methods that distros use, like cherry-picking; therefore they are typically against issuing CVEs, scoring CVEs, describing vulnerabilities at all (if you use the word "vulnerability", your patch will be rejected), etc.

          It's fundamentally their position to not work the way that you describe.

          • baggy_trough 2 hours ago
            That doesn't really seem to map onto the situation since Greg himself released a 6.12 with the patch earlier today.
            • staticassertion 1 hour ago
              I don't know what you mean at all. I'm just repeating known kernel policy here. What does 6.12 have to do with anything?
              • baggy_trough 58 minutes ago
                What is your interpretation of why Greg KH released a version of 6.12 with this fix in it today, other than to help distributions avoid this vulnerability?
        • bluepuma77 3 hours ago
          Well, how do you define the main Linux distros? Isn't the next-smallest one, the one not receiving the info, always going to complain?
        • bonzini 5 hours ago
          Partly they already have enough on their plate. It's up to the reporter to pick how to handle the disclosure, and unless a specific maintainer chooses to handle it, the Linux security team clearly says they won't.

          Partly they have a strong belief that all kernel bugs are vulnerabilities and all vulnerabilities are just bugs; sometimes taken to the extreme in both ways (on one hand this case where the vulnerability is almost ignored; on the other hand, I saw cases where a VM panic that could be triggered only by a misbehaving host—which could just choose to stop executing the VM—was given a CVE).

          • staticassertion 2 hours ago
            This couldn't be more backwards. This has literally nothing to do with bandwidth. The kernel is a CNA, they are explicitly the ones to do this.

            The reason they don't is because Linus and Greg have repeatedly, publicly stated that they don't want to because they don't believe that vulnerabilities conceptually make sense for the linux kernel and they refuse to engage in the process.

          • baggy_trough 4 hours ago
            Seems a little crazy. Somebody should evaluate blast radius and do appropriate distro notifications in a case like this (I presume the impact was part of the disclosure, so not much extra work).
            • seanhunter 4 hours ago
              You know the linux kernel is a free software project right? If you think “somebody should” do a thing but you aren’t prepared to do it yourself then you should maybe ask for a full refund.
              • baggy_trough 3 hours ago
                Thank you very much, seanhunter. You hit the nail on the head there.
        • shimman 2 hours ago
          Because one of them might have an incentive to not do so. In this case it's because they want to advertise their own company.
      • skywhopper 5 hours ago
        I think it’s reasonable to expect folks in the security community who go to the trouble of creating a website detailing security vulnerabilities in specific listed software to pre-notify the security teams of that software. The CopyFail website calls out Ubuntu and Red Hat specifically, but apparently the author of the site did not inform them of the issue?

        But even if you think making unethical decisions in personal self interest is something no one should be criticized for, surely the Linux kernel team ought to have some process for notifying the top distributions of an upcoming LPE, just out of practicality.

        • semiquaver 5 hours ago
          In what sense do you believe that the reporter did not notify the security team of the relevant software? The vulnerability is in the kernel. Reporter responsibly disclosed using the kernel’s security report mechanism and waited until a patch was ready.

            Distros are downstream of the kernel; that doesn't entitle them to expect to be contacted directly by every security reporter. That's not on them. Distros that are big enough should be plugged into the Linux security team for notifications.

          Security researchers cannot be held responsible for broken lines of communication within the org charts of projects that they study. They’re providing a valuable public service already, how much more do you want?

          • michaelmrose 4 hours ago
            It is suggested that they, out of an abundance of caution, send 5 or 6 emails. If this is entirely too much to expect, we can always help them by mandating that they spend 6 figures annually meeting a much more robust set of requirements that will include notifying all possibly affected parties, down to Hannah Montana Linux devs, if any still exist.

            Any strategy that assumes the rest of the world is functional, or that makes you personally responsible for fixing all of it, is equally broken. But there is a reasonable middle ground, and sending a few more emails lies within it.

            • semiquaver 4 hours ago

                > we can always help them by mandating that they spend 6 figures
              
              Who’s we? Mandate with what authority?

              AWS and GCP are downstream another level. Should the reporter also have worked with them? And their customers? And the customers of their customers?

              IMO this whole discussion seems like people are annoyed by the security researchers doing god’s work and wish they didn’t exist or think that they should be fully subservient to the projects and companies they are helping for free. The bugs were there before the researchers revealed them!!

          • ragall 5 hours ago
            > that doesn’t entitle them to expect to be contacted directly by the reporter

            Yes it does. That's how it's always been done and distros can ship a fix well before it ends up in a kernel release.

      • bossyTeacher 4 hours ago
        > expecting people to always do the right thing is fantasy level thinking.

        Most people in tech think like the techie in this comic strip.

        https://xkcd.com/538/

    • ebiederm 1 hour ago
      The notification happened when the fix was shipped. That people would prefer to be spoon-fed only serious security issues is understandable, but not realistic.

      A large percentage of kernel fixes have the potential to be similarly bad. For some the potential isn't even realized until after the fix has shipped.

      Every stable release, GregKH says you must upgrade now, because there is something security-relevant in there. This happens at least once a week.

      As for shared hosting providers, it is my sense that there is always at least one local privilege escalation available to miscreants, making shared hosting safe only if there is a certain amount of trust.

      I remember bugs that were similarly bad from my university days 30+ years ago. Has anything substantially changed?

    • CodesInChaos 3 hours ago
      > Who knows how many shared hosting providers were hacked with this.

      I'd consider a shared hoster which allows users to run their own (native) code and doesn't use VMs for tenant isolation extremely irresponsible in 2026.

      • saysjonathan 2 hours ago
        This is probably more common than you think. VMs are expensive, both in resources and cost (if you’re using something commercial). OS-level isolation (shared kernel, cgroups, namespaces) is used pervasively
    • akerl_ 5 hours ago
      Who knows how many attackers had found this vulnerability and had already been using it prior to this research finding it?
      • BeetleB 4 hours ago
        Argument from uncertainty is not a good way to reason about this.

        I could equally ask: "Who knows how many attackers learned about this vulnerability from this disclosure, and used it before the distributions fixed it?"

        • akerl_ 4 hours ago
          Yes, you could. That's the core of my point: there is no Right way to handle vulnerability disclosure. There are many competing factors, and most of them have major elements of uncertainty, because you can't know who knows what or how various projects or stakeholders will react.

          So maybe folks should take a break from the kind of armchair quarterbacking that this was “incredibly irresponsible”, as was done upthread, or that the researchers should be blacklisted for life, as a parallel commenter stated.

      • Quarrelsome 5 hours ago
        well now everyone does, so the irresponsible disclosure makes it significantly worse.
        • akerl_ 5 hours ago
          It’s your opinion that it’s irresponsible and that it makes something worse.
          • Quarrelsome 5 hours ago
            and it's your opinion that it doesn't. Shall we continue stating the obvious? We are communicating using glyphs. This language is English. We are on Hacker News. This branch of the conversation is extremely unproductive.
            • akerl_ 4 hours ago
              I asked a question and you replied with a statement. Your statement didn’t frame itself as an opinion but as fact.

              The hilarious bit is that the idea that they needed to coordinate is clearly broken even in just this example. They did give prior notice to the Linux developers, who issued a patch. And they’re still getting raked over the coals in this comment page by armchair quarterbacks who have decided they needed to coordinate with specific distros. If they’d coordinated with those distros, somebody would have a pet distro that didn’t make the cut and they’d be pissed about that.

              There are risks no matter how they do it, and there will be people who are pissed no matter how they do it. Security researchers don’t owe anybody a specific methodology.

              • Quarrelsome 4 hours ago
                you seemed to suggest with your initial statement that any disclosure was acceptable because people would have been using the exploit prior to the disclosure. I don't think that's a strong argument, given that the people who were using the exploit prior to disclosure are now joined by people who have learned of it as a consequence of the disclosure happening before all the distributions were ready.

                So I feel like the argument reduces into "why is it a problem that now anyone could exploit it, if some people were exploiting it already". Which imho isn't a sensible argument because the issue is clearly the amount of people capable of using the exploit for nefarious purposes, which has increased.

                • akerl_ 4 hours ago
                  Idk why you felt the need to use quotes to wrap something I didn’t say, and that is a pretty uncharitable attempt at reframing my question. If you wanted a quote, here’s what I’d say:

                  “Because we can’t know if there was exploitation by existing parties who had discovered the vulnerability on their own, there are upsides to disclosing earlier so that affected users can take mitigating steps and review their systems for indicators of compromise. Additionally, the more projects the researchers pull into the loop for coordinated disclosure, the higher the likelihood that they further leak the vulnerability to more attackers.”

                  • Quarrelsome 4 hours ago
                    Idk why you felt the need to use quotes to wrap something I didn’t say. Despite the fact I didn't say that, it's a much more interesting argument than your original statement implies, and it is unfortunate we didn't start there.

                    However, the issue is that we cannot know whether the attack space has been broadened or lessened as a consequence of this disclosure, because of how eager it was. If it wasn't eager, then we could be much more comfortable in suggesting that the attack space has probably been reduced.

                    Given that the exploit had been living in the Linux code base undetected for so long in the first place, and given that the distributions are the principal attack vector of the exploit, I think it's fair to state that by disclosing prior to the distributions being ready, the researcher has made the situation worse and should reflect on their actions.

                    • akerl_ 4 hours ago
                      … I used quotes to wrap something that I was saying. I even called out that it was something I was saying, as a more accurate variant of what you’d claimed I meant.
                      • Quarrelsome 4 hours ago
                        and I prefaced my quotes with the statement "So I feel like the argument reduces into". I mean, idk what punctuation I'm supposed to use there that doesn't offend you, but I just figured we can all read words and it was clear that I wasn't saying you said that, but rather that, as I read the argument, it was reducible to that, and I took issue with that potential reduction.

                        The idea about the available exploit space and how the actors within it might, or might not move is a much more interesting avenue of conversation and I thank you for elaborating on your initial comment. <3

                        I do however feel that it's hard to be confident about whether the attack space has been increased or reduced as a consequence of the eager disclosure. I feel we could make the case either way.

          • pphysch 4 hours ago
            The public disclosure page has a big blue "Get the exploit" button.

            It's an advertisement for an unpatched critical exploit and apparently some kind of infosec company.

    • bombcar 3 hours ago
      The title on this post was changed to imply that only the Gentoo developer was left out - which I could believe.
    • PunchyHamster 2 hours ago
      At least thankfully workaround is one line in a file.
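      For reference, that one line is the modprobe `install` override quoted elsewhere in this thread from the copy.fail page; it makes any attempt to autoload the module run /bin/false instead:

      ```
      # /etc/modprobe.d/disable-algif.conf
      install algif_aead /bin/false
      ```

      This blocks on-demand autoloading; if the module is already loaded, it additionally has to be removed with rmmod.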
    • IshKebab 3 hours ago
      > Who knows how many shared hosting providers were hacked with this.

      None? Because nobody* does hosting using Linux users as a security boundary. It's not the 90s.

      * Standard HN disclaimer for people that think that some retro shell box with 10 users disproves "nobody": nobody does not literally mean exactly 0 people in this context.

    • TacticalCoder 1 hour ago
      > It was extremely irresponsible to share the exploit with the world before the distributions shipped the fix.

      It's a total arsehole'y move to not share with open-source projects (like Debian) but for commercial vendors like Microsoft I don't give a crap.

      Now let's not get carried away either: that's a privilege escalation, so it already requires access to a local account. We're not exactly in Jia Tan "I backdoor every SSH out there if your Linux distro is using systemd" territory either.

    • 999900000999 5 hours ago
      Counterpoint. End users have a right to mitigate this issue on their systems.

      It is a really really bad look for Linux, puts a bit of water on all hype around switching from Windows.

      • roxolotl 4 hours ago
        It does? The disclosure even says the concern for single user systems is very low. If someone has access to your single user system, remote or otherwise, you’ve already lost on the sort of device people would be switching from windows to Linux on.
        • m3047 2 hours ago
          > The disclosure even says the concern for single user systems is very low.

          For single user systems (not rigorously defined, I presume it's the intersection of our two definitions which we might be talking about) the nature of the exploit is local privilege escalation, of which there could be many possible, and many mitigations / countermeasures against. This could have suddenly appeared from the ether of "unknown unknowns" for some people.

          Those people farther up the food chain still potentially have service accounts, maybe even user accounts for some purposes, perhaps "trusted" services which deliver them code which they deserialize and run once. (Have a pickle.)

          severity * impact * likelihood

          Not everyone looking to migrate from Windows 95 plans to run everything as root afterward.

          On the copy.fail site:

              echo "install algif_aead /bin/false" > /etc/modprobe.d/disable-algif.conf
              rmmod algif_aead 2>/dev/null || true
          
          Not everybody needs or wants to wait for their distro, or plans to patch their IC firmware when a config change will do.
        • 999900000999 4 hours ago
          Someone like an AI coding agent perhaps ? This is the type of thing Prompt injection was made for.

          No OS is perfect. The awkward rollout for this bug fix is proof of that.

          • Filligree 4 hours ago
            Root access does not typically add anything interesting, for a desktop system. All the valuable stuff is already owned by the single user.
      • windexh8er 4 hours ago
        Imagine an ignorant response like this from Apple? One of the most short sighted comments I've seen on HN in some time. And the double down! A true master class in misunderstanding the issue and the entire FOSS ecosystem in two sentences.
      • vhantz 4 hours ago
        As opposed to all other operating systems with no CVEs ever?
      • weavejester 4 hours ago
        Hype around switching from Windows servers?
      • ddtaylor 3 hours ago
        What happens if someone does the exploit in WSL?
      • johnbarron 4 hours ago
        >> puts a bit of water on all hype around switching from Windows.

        Said no one ever...present post excluded :-))

      • cbarnes99 4 hours ago
        You clearly have no idea how often windows has unpatched privesc exploits.
      • jasonmp85 5 hours ago
        [dead]
    • mschuster91 4 hours ago
      > Anyway, this is a disaster. It was extremely irresponsible to share the exploit with the world before the distributions shipped the fix. Who knows how many shared hosting providers were hacked with this.

      Maybe it is irresponsible how little attention we pay to software security. Maybe, software developers of all kind should spend an entire year not developing any features at all, but fix all the tech debt of 30 years instead.

      Yes, that sounds revolutionary, but I do not see an alternative in an age where all you need to find kernel bugs of this scale with AI agents.

    • johnbarron 4 hours ago
      >> Anyway, this is a disaster. It was extremely irresponsible to share the exploit with the world before the distributions shipped the fix.

      Maybe a decade of corporations with revenue in the billions, paying peanuts and coffee money, for critical vulnerability disclosures made it....

    • deng 5 hours ago
      > It was extremely irresponsible to share the exploit with the world before the distributions shipped the fix.

      Yes, this was clearly a marketing stunt to promote Xint code.

      I, for one, will never use Xint code and will advise everyone to never use it. To anyone working there: enjoy your 15 minutes, I hope this backfires right in your face.

  • semiquaver 5 hours ago
    > Note that for Linux kernel vulnerabilities, unless the reporter chooses to bring it to the linux-distros ML, there is no heads-up to distributions.

    Why would they imply it is incumbent on the reporter to liaise with distributions? That seems to assume a high level of familiarity with the Linux project. Vulnerability reporters shouldn't be responsible for directly working with every downstream consumer of the Linux kernel; what's the limiting principle there? Should the reporter also be directly talking to all device manufacturers that use Linux on their machines?

    IMO reporter did more than enough by responsibly disclosing it to linux and waiting for a patch to land.

    Aren’t there people in the linux project itself with authority over and responsibility for security vulnerabilities? One would think they would be the ones notifying downstream distros…

    • aduwah 4 hours ago
      Especially since the reporter is explicitly asked not to notify the distro teams first.

      https://docs.kernel.org/process/security-bugs.html

      ```As such, the kernel security team strongly recommends that as a reporter of a potential security issue you DO NOT contact the “linux-distros” mailing list UNTIL a fix is accepted by the affected code’s maintainers and you have read the distros wiki page above and you fully understand the requirements that contacting “linux-distros” will impose on you and the kernel community. ```

      • nubinetwork 1 hour ago
        I don't get why the initial reporter should have to do that legwork. The kernel maintainers should be doing that.
      • stonogo 3 hours ago
        The kernel team has been at odds with the CVE process and the oss-security community about this stuff for many, many years now. It's a big part of why the kernel team established a CNA and started flooding CVE notifications; they don't believe that security problems are different than non-security problems, and refuse to establish norms or policies based on the idea that they are.
        • throw0101a 12 minutes ago
          > […] they don't believe that security problems are different than non-security problems, and refuse to establish norms or policies based on the idea that they are.

          They believe there is no difference being able to get root and not being able to get root? It seems to me that to-be(-root) and not-to-be(-root) are quite different.

        • IshKebab 3 hours ago
          It's such a bizarre viewpoint. I wonder when Linus will see sense.

          IMO it's pretty obviously not a view that they seriously hold, it's just one of those technical justifications people come up with to avoid admitting something they don't want to admit - in this case that Linux has a poor security track record.

          • guiambros 30 minutes ago
            Linus? You mean, the same Linus who thinks "security people are f*cking morons", and "security bugs are just bugs"?

            Linus is the reason why kernel team doesn't talk to distros. For them bugs are bugs, security related or not.

            https://lkml.iu.edu/hypermail/linux/kernel/1711.2/01701.html...

          • staticassertion 2 hours ago
            > I wonder when Linus will see sense.

            Literally never. Why would he? He's surrounded by sycophants. And we have Greg for whenever Linus isn't involved anymore, and Greg is just as boneheaded.

    • sega_sai 5 hours ago
      The reporter took the time to check and mention specific distributions (Ubuntu/RHEL/SUSE) on their website. One would have thought reporting to the security teams of at least those would be the responsible thing to do.
      • semiquaver 5 hours ago
        “One” would have thought? Can you point to a written policy that says that’s how it should be?
        • happyopossum 5 hours ago
          No, nor can I point to a written policy that states one should cover one’s mouth when they cough.

          Everyone involved here failed to do the right thing, and hiding behind the lack of written words is weak sauce.

        • anikom15 5 hours ago
          The tenets of decency don’t need to be written down.
          • tob_scott_a 5 hours ago
            If you can't write it down, why would you expect it to be universal and enforceable? Different cultures exist and have different opinions on what "decency' means, after all.

            A security researcher's ethical obligations are to protect users over vendors (barring any contractual agreement in place). From what has been discussed in this thread, they meet that bar.

            Sure, they could have gone the extra mile to ensure the distros were in a good place to patch before they published the exploit. That's a kindness you can wish for, but don't disparage them for not going that extra mile. It's a bonus.

            It's also possible that it simply didn't occur to them to do so this time. There's certainly lessons to be learned either way. I don't know that the right lessons will emerge from hostility.

            • Quarrelsome 5 hours ago
              > If you can't write it down, why would you expect it to be universal and enforceable?

              and this is the problem. It used to be the case that if you were smart enough to find an exploit you were also smart enough to realise what would happen if you irresponsibly disclosed it. I guess these tools have made that pattern no longer apply.

              • true_religion 4 hours ago
                From my point of view, they told the kernel security team which is in charge of fixing this. If it’s important for them to tell other people, then it should’ve been written down and further reiterated when they made their report.

                The skills needed to detect code exploits are not the same as the skills needed to navigate an informal org chart to the satisfaction of an amorphous audience of end users (i.e. us on HN).

                That said… as they are a company that supposedly specializes in this field, and is trying to sell a product, I do believe they should do better. Right now, I don’t have much confidence in their product.

            • scragz 5 hours ago
              different cultures have different views on disclosing vulnerabilities to distros before the public?
              • embedding-shape 4 hours ago
                Yes :) The blackhatter would obviously sit on it until they can sell it or use it, the whitehatter collaborates with the kernel and distros to patch, and the greyhatter argues on HN about whether the latest *fail was responsible enough or not.
              • sunshowers 2 hours ago
                Yes? "Different cultures" doesn't just mean different countries; there are many cultures within infosec.
            • anikom15 4 hours ago
              There is little difference in culture here. Nearly all open source work is done in English.
    • skywhopper 5 hours ago
      The reporter made a website explicitly calling out Ubuntu, RedHat, Amazon, and SUSE but didn’t notify them, and you think that’s reasonable? That they might not have known those distributions are downstream from the kernel team?
      • Legend2440 3 hours ago
        If you notify the kernel and they ship a fix, it seems reasonable to expect that they will communicate the fix to the distros.

        I see this as an organizational failure of the Linux ecosystem. There should be better communication between distro and kernel development.

        • dweinus 26 minutes ago
          The reporter clearly knows the distro fixes have not been shipped, read their report. They chose to disclose anyway.
      • sigmar 3 hours ago
        What is the heuristic for who should get the heads up? Should they notify amazon but not google simply because they named amazon linux in the report? Seems to me the answer to my first question gets messy fast.
    • sparker72678 5 hours ago
      Sure, maybe it's not a _requirement_, but now we're all in more pain because the reporters are more interested in Fame than Safe Remediation.
      • tptacek 2 hours ago
        No, you're in more pain, but other defenders with different postures benefit from having faster and fuller disclosure.
        • throw0101a 4 minutes ago
          > No, you're in more pain, but other defenders with different postures benefit from having faster and fuller disclosure.

          Good for them. But just because some folks cannot afford 24/7 response teams and on-call personnel that doesn't make them or their systems any less important.

          Lots of non-profits and academic institutions had to scramble because of the Linux kernel team's position of non-communication to distros.

        • ori_b 1 hour ago
          Mind explaining how sitting on it a month after the patch landed is 'faster'? To my mind, that's a month where attackers could analyze commit logs, but maintainers are not acting with urgency to ship fixes.
          • tptacek 50 minutes ago
            No, I wouldn't, because my own preferences are towards immediate disclosure. Tavis Ormandy dropped Zenbleed out of the sky onto us. It wasn't comfortable, it was a scramble for us, but I don't blame Tavis for it; he made a principled call. Better that people know, than that information be concealed from them while designated elites perform a process.
            • ori_b 46 minutes ago
              I'd also prefer immediate disclosure, but I don't get how waiting a month without telling anyone is good regardless of which side you land on.
              • john_strinlai 44 minutes ago
                >I'd also prefer immediate disclosure

                wait, what?

                you are in another comment thread, of this very post, calling these reporters bumbling and incompetent for their disclosure. "merely bumblingly incompetent and overly eager to get their marketing pitch out the door" - that is your quote.

                you also said "Basic care would involve making sure the patches had made it into the wild before ending the embargo", which is the literal opposite of immediate disclosure.

                but now you are saying they should have just dropped it with no reporting at all? because that is what "immediate disclosure" means. pop up the exploit script on twitter and call it done.

    • froh 4 hours ago
      it's trivial to find out how to report a security issue like this to Linux distros.

      Google search: https://share.google/aimode/eihDKXZJy94Z5lC1p

      and it's beyond me how one could not think of doing this, and instead expose everyone and their neighbor to this exploit up front.

      I'm certain this is even a felony in some jurisdictions, rightfully so.

      • dboreham 3 hours ago
        Agree it's not a good look for these folks, notwithstanding that disclosure is mostly theater.
  • whatevaa 3 hours ago
    Stop blaming the reporter. Start asking the kernel to fix their process. The Linux kernel is no longer a toy project; it has full-time employees at various companies. They should have handled notifying distributions. Not some rando.
    • pkoiralap 33 minutes ago
      It's one thing to report a vulnerability, another entirely to make a crazy exploit available for any Tom, Dick, and Harry to take and use. It was irresponsible of whoever came up with it to release it into the world without first giving major distros a heads-up.
      • bell-cot 5 minutes ago
        Bashing on the reporter is pointless feel-good. This is a massive vuln. It was 4 weeks after the kernel had a patch. They had no way to know if other parties had also discovered the vuln. Lord knows how many millions of systems could already have been rooted. The reporter is not their minion.

        If I call 911 to report a fire at an oil storage facility - and they ask me to alert the hospital, then phone the neighboring county's Sheriff Dept., and then...yeah. Either I'm way out in the sticks (and known to/trusted by the 911 operator), or else the 911 service is run by children.

    • dweinus 30 minutes ago
      No, I will. The distros and the kernel devs should be talking and moving on high sev patches, sure. But real people will have gotten hurt because the reporter didn't want to wait for that to happen. That's on them.
  • GranPC 4 hours ago
    Just for what it's worth, I just pushed an eBPF-based workaround for people who are running kernels in which AF_ALG is linked directly into the kernel and not as a module: https://github.com/Dabbleam/CVE-2026-31431-mitigation

    I am running this in production right now and it mitigates the attack, with no unexpected side-effects as far as I can see.

  • KingMachiavelli 3 hours ago
    `nosuid` and probably `nodev` should IMO be the default filesystem mount options. `/dev` is already a special devtmpfs and the initrd minimal /dev can just explicitly mount the initrd tmpfs rootfs with `dev` and `suid` if necessary.

    Letting SUID binaries just "exist" anywhere is a stupendous security issue. What if you mount some external storage medium? How are you to verify that none of the SUID binaries on that block device are malicious?
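    As a sketch of what those defaults could look like in practice (an illustrative /etc/fstab fragment, not a drop-in config; device names and mount points are hypothetical):

    ```
    # /etc/fstab -- nosuid/nodev everywhere except the root filesystem
    /dev/sda2  /         ext4  defaults                      0 1
    /dev/sda3  /home     ext4  defaults,nosuid,nodev         0 2
    /dev/sdb1  /mnt/usb  ext4  defaults,nosuid,nodev,noexec  0 0
    ```

    With `nosuid` set, the kernel simply ignores the setuid/setgid bits on anything under that mountpoint, so a malicious SUID binary on external media buys the attacker nothing.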

    Additionally, this exploit appears to only work if the user executing the SUID binary can also read the SUID binary. There's no reason for non-root users to have read on a SUID binary.

    NixOS does this correctly. There is no SUID in the normal package installation directory `/nix/store`, and with no package leakage outside of that, `nosuid` can safely be used on all other mountpoints. The exception is just a single-purpose `/run/wrappers.$hash` directory that safely contains execute-only SUID wrappers.

    • muvlon 2 hours ago
      While I hate suid as much as the next person, it's really not the problem here.

      The bug that is being exploited gives you basically arbitrary page cache poisoning. At that point it's already game over. Patching a suid program is maybe the easiest way to get a root shell from that but far from the only.

    • xorcist 1 hour ago
      The proof-of-concept exploit is just that: it is meant to demonstrate one attack vector only. There are many others. If your goal is to prevent only the conceptual exploit, there are many easier ways to accomplish that, such as blacklisting, but that does not make you safer.

      With this vulnerability you can manipulate the page cache. You could also manipulate ld.so to hook into arbitrary system calls, or set your uid to 0, or use any of another dozen or so ways to elevate your privileges.

      Mount points have nothing to do with this, even if it is always a good idea to disallow suid in user-writable areas and to prevent reading suid files, but that's for other reasons. NixOS does nothing to fix this and is just as vulnerable as everyone else.

    • akdev1l 3 hours ago
      Without read permissions you cannot execute the binary; that would not make any sense.

      To execute the binary it needs to be read from disk and loaded into memory.

      In fact, if you have read permissions but not execute permissions on a specific binary, you can still execute it by calling the linker directly: /bin/ld.so.1 /path/to/binary (the linker will read and load the binary and then jump to the entry point without an exec() call)
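      A sketch of that trick (the loader path varies by architecture and libc — /lib64/ld-linux-x86-64.so.2 on x86_64 glibc is an assumption here, so the snippet discovers it via ldd rather than hard-coding it):

      ```shell
      # Discover this system's dynamic loader by asking ldd about a known binary.
      LOADER=$(ldd /bin/sh | awk '/ld-linux|ld\.so|ld-musl/ { print $1 }' | head -n 1)

      # Invoke a program through the loader directly: the loader process (not an
      # execve on the target file) reads and maps the binary before jumping to
      # its entry point.
      "$LOADER" /bin/echo "launched via the loader"
      ```
      
      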

      • aaronmdjones 1 hour ago
        > Without read permissions you cannot execute the binary

        This is not correct, as when the binary is setuid-someone-else, you are not the one executing it; they are.

          $ cat hello.c 
          
          #include <stdio.h>
          
          int main(void)
          {
              (void) puts("Hello, world!");
              return 0;
          }
          
          $ clang-21 -Weverything hello.c -o hello
          $ sudo chown root:root hello
          $ sudo chmod 4711 hello
          
          $ ls -l hello
          -rws--x--x 1 root root 16056 Apr 30 22:22 hello
          
          $ ./hello
          Hello, world!
          
          $ id
          uid=1000(aaron) gid=1000(aaron) groups=1000(aaron),27(sudo),46(plugdev),100(users)
        
        Removing world-readability from all setuid-root binaries on the system would be sufficient to kill the PoC script provided for this vulnerability. It would not be sufficient to prevent exploitation though; there are many ways to abuse the ability to write to files you have read access to in order to gain root, for example by using the vulnerability to alter the cached copy of a file in /etc/sudoers.d/, or overwrite /etc/passwd, or /etc/crontab, ... the list goes on.
        • akdev1l 1 hour ago
          Interesting, but in that case there's no point in keeping the x bit either, and suid binaries should just be 4700?
          • aaronmdjones 57 minutes ago
            If they don't have world-execute permission, an access(2) check for executability would return negative, leading to things like shells not tab-completing it. The kernel would also deny attempting to execute it, as it is not executable for your fsuid.

              $ sudo chmod 4700 hello
              $ ./hello
              bash: ./hello: Permission denied
            
            You need execute access in order to launch it, but in order for it to run, the user it is running as (not you) needs read access; you don't.
      • tryauuum 1 hour ago
        this ld.so magic will lose the suid bit

            $ /bin/ld.so `which sudo`
            sudo-rs: sudo must be owned by uid 0 and have the setuid bit set
      • Plagman 3 hours ago
        loader
  • ectospheno 5 hours ago
    The Bleeping Computer link below mentions a potential remedy until a patch is ready.

    https://www.bleepingcomputer.com/news/security/new-linux-cop...

  • swinglock 3 hours ago
  • seniorThrowaway 4 hours ago
    Ubuntu has patches out, tested before and after patching.
  • uberduper 5 hours ago
    `initcall_blacklist` is a thing.
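    For kernels where algif_aead is built in rather than a module, a sketch of what that looks like (the initcall name is an assumption based on the init function in crypto/algif_aead.c; verify it against your kernel source before relying on it):

    ```
    # Kernel command line, e.g. appended to GRUB_CMDLINE_LINUX in /etc/default/grub
    initcall_blacklist=algif_aead_init
    ```

    The kernel then skips that initcall at boot, so the AF_ALG AEAD interface is never registered.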
  • lrvick 3 hours ago
    Was not disclosed to stagex, and I expect a lot of linux distros. Thankfully we were already on kernel 7.0 so not impacted
  • ChrisArchitect 5 hours ago
  • JasonHEIN 4 hours ago
    Huh, somehow seeing people work without using AI is a wow moment that I cherish a lot these days
    • lionkor 55 minutes ago
      You're likely in an echo chamber! Barely anyone I know uses AI as more than a fallible tool.
  • VladVladikoff 3 hours ago
    Hey Xint Code / tylerni7 <https://news.ycombinator.com/threads?id=tylerni7>, maybe you should improve your disclosure process as well? Maybe make it mandatory for users of your tool?
    • john_strinlai 3 hours ago
      they disclosed 30 days after the patch was merged in the thing they reported to.

      its the same disclosure policy as google's project zero, and several other major players, so you should probably be trying to ping a lot more people

      reporters should not be responsible for finding out and individually reporting to every downstream consumer. blame the kernel security team, who is in a much better position to coordinate notifications to individual distro security teams.

      • VladVladikoff 35 minutes ago
        In the original thread they admitted multiple times that they rushed it out for marketing reasons.
        • john_strinlai 33 minutes ago
          as an explanation for the misnumbered redhat version.

          the disclosure itself followed a normal timeline, which you can view at the bottom of their blog post.

    • tptacek 2 hours ago
      The security research community would run you out on a rail if you tried to take a successful research product and attach mandatory disclosure norms to it.
      • VladVladikoff 23 minutes ago
        Couldn't the product itself disclose to the vendors?
        • tptacek 10 minutes ago
          No firm in the world would use a vulnerability research product that automatically disclosed to vendors.
  • Skywalker13 1 hour ago
    I have checked all the servers (bookworm, bullseye) that I manage, and none of them have the algif_aead module loaded.

    Seems not fatal to all non-patched systems.

    • Denvercoder9 1 hour ago
      Not having the module loaded doesn't mean you're not vulnerable, the kernel loads the module on-demand when it's needed. I tried the exploit on such a system, and it worked.

      However, not having the module loaded does mean that in normal operation you don't need the module, so the proposed mitigation of disabling the module is safe in the sense that it won't disrupt anything.

      • Skywalker13 1 hour ago
        I don't know what exactly can load this module, but the servers have been running for many weeks and I suppose that if something loads this module, it stays loaded until the next reboot... no?

        I tried to rmmod on all servers and rmmod always returns `ERROR: Module algif_aead is not currently loaded`, that's why I think it's fine. Of course I take a look on https://security-tracker.debian.org/tracker/CVE-2026-31431 for the updates.
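        A quick sketch of the same check that doesn't depend on rmmod's error message — /proc/modules lists whatever is currently loaded:

        ```shell
        # Report whether the vulnerable module is currently loaded on this host.
        if grep -q '^algif_aead' /proc/modules; then
            echo "algif_aead is loaded"
        else
            echo "algif_aead is not loaded"
        fi
        ```
        
        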

        • Denvercoder9 1 hour ago
          > I don't know what exactly can load this module

          Well, for one thing, opening an AF_ALG socket, as the exploit does.

    • TacticalCoder 1 hour ago
      > I have checked all the servers (bookworm, bullseye) that I manage, and none of them have the algif_aead module loaded.

      But only Trixie (and testing/Sid) are patched (as I type this).

      On Bookworm (and Bullseye), you want to add the module to list of blocked modules. It's a one-line change.