23 comments

  • aselimov3 2 hours ago
    What are the actual guarantees that go/Rust make that Python/npm don’t? It seems like it might just be that Python/npm are juicier targets? I’m starting to try and avoid all third party packages
    • brunoborges 2 hours ago
      It is 100% up to the package manager's steward to control how ownership of packages and namespaces are granted.

      Maven Central has existed for decades, and the number of incidents of people stealing namespaces is minimal.

      One can't simply publish a package under the groupId "com.ycombinator" without some way of verifying that they own the domain ycombinator.com. Then, once a package is published, it is 100% immutable, even if it has malicious code in it; that version just gets flagged everywhere as vulnerable.

      It baffles me that NPM for so long couldn't replicate the same guardrails as Maven Central.

      • cluckindan 1 hour ago
        How does that protect against credential theft? MFA required to sign published releases?
        • brunoborges 1 hour ago
          That is another important layer. Maven Central is not immune to credential theft. If a publisher token is stolen, an attacker may still be able to publish a malicious new version until the token is revoked or the account is suspended after reporting the problem to Sonatype.

          But in the Maven/Gradle ecosystem, most projects pin exact dependency versions. Support for version ranges and dynamic versions exist, but they are generally avoided because they hurt reproducible builds. That means a malicious new release does not automatically flow into most consumers’ builds just because it was published.

          I'd go as far as to say that NPM should:

          1. Enforce scope (namespace) requirement, and require external verification (reverse DNS for example).

          2. Disable version range support out of the box. Users must explicitly --enable it from the command line every time.

          3. Remove support for install scripts completely. If someone wants to publish a ready-to-run software, there are plenty of other mechanisms.
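          For what it's worth, a consumer can already approximate points 2 and 3 on their own today; this is a sketch assuming a current npm, which honors both settings in a per-project .npmrc:

```shell
# Per-project .npmrc: refuse lifecycle (pre/postinstall) scripts entirely,
# and record exact versions instead of ^/~ ranges when adding dependencies.
cat > .npmrc <<'EOF'
ignore-scripts=true
save-exact=true
EOF
```

          With ignore-scripts set, packages that genuinely need a native build step have to be built explicitly, which is exactly the kind of deliberate opt-in being proposed here.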

          • com2kid 56 minutes ago
            > Enforce scope (namespace) requirement, and require external verification (reverse DNS for example).

            Who the heck says everyone who publishes a library has a domain? That seems absurd.

            • brunoborges 41 minutes ago
              Sonatype allows "io.github.<username>" as a valid groupId and has a process to verify ownership. I am sure other providers like GitLab can work on this.
            • radlad 48 minutes ago
              And domains can change hands legitimately.
    • jollyllama 2 hours ago
      > It seems like it might just be that Python/npm are juicier targets?

      Attackers go where the victims are. Frontend is a monoculture with the vast majority using NPM; backend, less so. This isn't an excuse for NPM, but another strike against it.

      You could also argue that the attacks make a deeper point about frontend vs backend devs, but I won't go there.

      • bichiliad 49 minutes ago
        Why would you even imply something like that?
    • lostglass 2 hours ago
      To be honest Rust has the exact same supply chain attack pattern - it's just newer and more maintained at the moment. Give it a decade.
      • marcosdumay 8 minutes ago
        Programs in Rust (or almost every other language) normally have fewer dependencies by 2 or 3 orders of magnitude.

        And that number tends to reduce even more when the ecosystem matures.

      • slopinthebag 9 minutes ago
        Supply chain attacks are available to every language and framework that uses dependencies or modules you don’t control.
      • nothinkjustai 2 hours ago
        Rust doesn’t have post install scripts
        • est31 1 hour ago
          There is build.rs, proc macros are unsandboxed, and lastly you install the binary so that you can run it. Even if the build and install were fully sandboxed, the binary could still do malicious stuff if run.
          • drdaeman 52 minutes ago
            Even without post-install script, a malicious payload could be hiding in some function and just wait until the developer invokes `cargo run`. Not that many people audit the crates they pull into their projects.
          • nothinkjustai 21 minutes ago
            Yeah no shit, if you download malicious code from the internet and run it on your computer you will get pwned. No matter if it’s from a package manager a zip file or a submodule.

            However the current npm vulns used a post install script.

        • fabrice_d 2 hours ago
          It has build.rs that will run as soon as you compile the dependency. That's not the same thing but pretty close to a post install script: it's very likely to run.
        • tasn 2 hours ago
          It has build.rs, which has essentially the same problems.
    • nirvdrum 1 hour ago
      Part of the point the article makes is that most other popular languages have a comprehensive standard library. JS has an astonishingly small one. Rather than have one vetted set of libraries that ships with the language, applications either need to roll it themselves or pull from a 3rd-party package repository. We've drilled a fear of NIH into people, so they tend to reach for packages. That's not necessarily a bad thing, but it often means they're pulling in more code than they need. The JS ecosystem has also favored smaller modules, so you need many of them. And everyone builds on top of that, leading to massive growth in dependency graphs. It's a huge surface area for things to go wrong, intentionally or not.

      With many other languages, you have a lot of functionality out of the box. Certainly, there have been bugs and security issues, but they're a drop in the bucket compared to what you see in the JS ecosystem. With other languages, you have a much smaller external dependency graph and the core functionality is coming from a trusted 3rd party.

      • cluckindan 1 hour ago
        What important functionality do you feel is missing from the commonly used JS environments (node and browser) that is causing people to install it as a third party dependency?

        The issue isn’t that the functionality doesn’t exist, it’s always backwards compatibility with versions where it did not yet exist.

      • apothegm 1 hour ago
        Why Python, tho, in that case? Its stdlib is quite robust. Surprisingly so in some areas.
        • saghm 1 hour ago
          I'm not convinced that Python should be the standard for package management either. Earlier this week I was trying to publish a Python package for the first time wrapping a Rust library I wrote (for use only on Linux and Python 3.12+), and it literally took me hours to get from "I have a wheel that I can import and it works on my system" to "I have published that wheel and can install the package from PyPI on the set of systems that I'm trying to support and it actually works". Everything I've heard about this indicates that the situation for Python packaging is literally better than it ever has been before with the current tooling, so I can't even imagine how bad it was for the decades before. In comparison, having literally never touched npm before, I was able to publish a wrapper around the same library and validate that it was working in maybe 10 minutes (most of which I spent not realizing that a certain tool was failing with a vague "file not found" error because I hadn't installed npm yet).

          I'm not saying that npm is doing everything right, but I suspect that beyond the obvious low-hanging fruit that we hear about pretty consistently with npm there's probably a long tail of less obvious stuff that can be exploited that will not be specific to npm. The fundamental problems with supply-chain vulnerabilities aren't going to go away if npm magically became pip or go modules overnight.

    • panzi 2 hours ago
      Last I checked, npm had 2FA for publishing but cargo didn't. I don't think cargo is any better than npm, just not as attractive a target.
    • cookiengineer 2 hours ago
      I suppose go's go:generate workflow can also be abused to land a worm like the ones spreading via npm: you can build programs that scrape the whole hard drive for git projects and patch the go.mod dependencies there, and you could write this in Go itself as a toolchain script, for example.

      NPM's Achilles' heel is the pre/postinstall step, which can run arbitrary commands and shell scripts without the user having any way to intervene.

      Dependencies must be run in isolated chroot sandboxes or, better, inside containers. That is the only way to mitigate this problem, as the filesystem of the operating system must be separated from the filesystem of the development workflow.
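      As a sketch of that separation (hypothetical wrapper script; assumes Docker and the node:20 image, and that registry access from inside the container is acceptable):

```shell
# Hypothetical wrapper: run npm inside a throwaway container so install
# scripts only ever see the mounted project directory, not the host
# filesystem, SSH keys, or credential stores.
cat > npm-sandboxed <<'EOF'
#!/bin/sh
exec docker run --rm -v "$PWD":/app -w /app node:20 npm "$@"
EOF
chmod +x npm-sandboxed
# usage: ./npm-sandboxed ci
```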

      On top of that, most host-based firewalls are per-binary instead of per-cmdline. That leads to warnings and rules that allowlist e.g. "python" or "nodejs" for network access as a whole, instead of, say, "nodejs myworm.js". So firewalls in general are pretty useless against this type of malware.

      • yegle 2 hours ago
        `go:generate` is for the package provider, the command never runs when someone `go install` or `go get` the package.
        • cookiengineer 2 hours ago
          Note that the NPM worms are spreading because the package providers are developing on their libraries without them noticing a malicious dependency. It is not users/consumers spreading the worm, it is developers spreading it.

          The mismatch is that you're thinking in policies, not assessments, here. Nothing in my normal go workflow will ask me whether I want to run "curl download whatever from the internet" when I run go build.

          Though I agree with the difference in workflow, there is not a single mechanism in go catching this. go.mod files can be just patched by the worm, and/or hidden behind a /v123 folder or whatever to play shenanigans on API differences.

      • xena 2 hours ago
        go:generate is done at dev time, not at build time.
        • cookiengineer 2 hours ago
          Actually, bindings are usually generated like that at build time (though with a build cache that corrupts all the time in ways nobody understands).

          Examples that come to mind: webview/webview, webkit, cilium/ebpf and most other CGo projects that I have seen.

    • raggi 1 hour ago
      none. they just have smaller target populations.
    • jiggawatts 2 hours ago
      Generally, other package managers aren't great either. Notably, crates.io / cargo has some of the same issues as NPM and the verbiage of their excuses for not fixing these problems is oddly similar.

      Something fascinating about the design and architecture of programming languages and their surrounding ecosystems is the enormous leverage that they provide to the "core team":

      For every 1 core language developer[1]...

      ... there may be 1,000 popular package developers...

      ... for which there may be 1,000,000 developers writing software...

      ... for over 1,000,000,000 users.

      This means that for every corner that is cut at the top of that pyramid, the harms are massively magnified at the lower tiers. A security vulnerability in a "top one thousand" package like log4j can cause billions of dollars in economic damage, man-centuries of remediation effort, etc.

      However, bizarrely, the funding at the top two levels is essentially a pittance! Most such projects are charities, begging for spare change with hat in hand on a street corner. Some of the most-used libraries are volunteer efforts, despite powering global e-commerce! cough-OpenSSL-cough.

      The result is that the people most empowered to fix the issues are the least funded to do so.

      This is why NPM, Crates.io, etc... flatly refuse to do even the most basic security checks like adding namespaces and verifying the identity of major publishers like Google, Microsoft, and the like.

      That's a non-zero amount of effort, and no matter how trivial to implement technically or how cheap to police, it would likely blow their tiny budget of unreliable donations.

      The exceptions to this rule are package managers with robust financial backing, such as NuGet, which gets reliable funding from Microsoft and supports their internal (for-profit!) workflows almost as much as it does external "free" users.

      "Free and open" is wonderful and all, but you get what you pay for.

      [1] Most of us can name them off the top of our heads: Guido van Rossum, Larry Wall, Kernighan & Ritchie, etc.

  • eranation 1 hour ago
    I know people have opinions about cooldowns, but they would have saved you from axios, tanstack, and many other recent npm supply chain attacks. If you have Artifactory / Nexus, you probably already have cooldowns, but it's easy to set up if you don't.

    Why cooldowns? Most npm (or PyPI) compromises were taken down within hours. Cooldowns simply mean: ignore any package version with a release date younger than N days (1 day can work, 3 days is OK, 7 days is a bit of an overkill but works too)

    How to set them up?

    - use latest pnpm, they added 1 day cooldown by default https://pnpm.io/supply-chain-security

    - or if you want a one-click fix, use https://depsguard.com (a CLI that adds cooldowns + other recommended settings to npm, pnpm, yarn, bun, uv, and dependabot; disclosure: I'm the maintainer)

    - or use https://cooldowns.dev which is more focused on, well, cooldowns, with also a script to help set it up locally

    All are open source / free.

    If you know how to edit your ~/.npmrc etc, you don't really need any of them, but if you have a loved one who just needs a one click fix, these can likely save them from the next attack.
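    As a concrete example, with a current pnpm the cooldown is a one-line setting (named minimumReleaseAge and measured in minutes, per the pnpm supply-chain-security docs linked above):

```shell
# pnpm-workspace.yaml: ignore any version published less than 24 hours ago.
cat > pnpm-workspace.yaml <<'EOF'
minimumReleaseAge: 1440
EOF
```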

    Caveat - if you need to patch a new critical CVE, you need to bypass the cooldown, but each of them has a way to do so. In the past few weeks, while I don't have hard numbers, it seems more risk has come from software supply chain attacks (malicious versions pushed) than from new zero-day CVEs (even in the age of Mythos-driven vulnerability discovery)

    • wesselbindt 18 minutes ago
      Seems like you dropped something:

      > Disclaimer: I maintain depsguard

      • eranation 4 minutes ago
        Yikes. You are correct. Honest truth, I got a few downvotes, thought this was the cause, but you’re right. Didn’t think that it matters much, I’ll add it back. Had no idea anyone noticed. Fair enough, thanks for keeping me honest.

        Edit: added it back, inline.

    • tkel 1 hour ago
      yes, props to pnpm for adding 1 day cooldown by default in v11.
  • yangm97 22 minutes ago
    I’m using nix for managing npm dependencies in a project and it seems like I accidentally got some protection from these attacks because of the nix sandbox. Looks like I got more than I bargained for.
  • joeblubaugh 1 hour ago
    There has been a lot of pain at my various jobs installing a safe global npm config on every developer machine, asking people not to disable it, checking it with mdm tools. A safer out-of-the-box configuration is long overdue.
    • tkel 1 hour ago
      Just don't use npm. Use a package manager which doesn't execute postinstall by default. The switch is incredibly simple.
      • cluckindan 1 hour ago
        Which package manager is that, and what caveats does it offer?
        • timfsu 1 hour ago
          Pnpm - installs are faster to boot. We haven’t missed anything
        • ricardo_lien 1 hour ago
          pnpm
  • 827a 2 hours ago
    There is no legitimate reason why postinstall scripts need to exist. The npm team needs to grow up and declare "starting with npm version whatever, npm will only run postinstall scripts for versions of packages published before ${today}".
    • tkel 1 hour ago
      I audited several postinstall scripts recently in popular packages. They are mostly about native binaries: downloading them, detecting whether the platform is compatible, linking to them directly instead of having them bootstrapped by node, working around issues in older versions of npm, etc., since dev toolchains (e.g. esbuild) are now built in compiled languages and distributed as binaries via the npm registry. If you are on a recent version of node/npm and a common/recent OS/platform, you should be able to disable all postinstall scripts without legitimate issue.
      • tkel 1 hour ago
        [dead]
    • raggi 2 hours ago
      install scripts are a distraction, just like package signatures are a distraction. adding/removing either feature has no significant impact on the wormability of this package ecosystem. installed npm code is run, with nearly zero exceptions.
      • nine_k 1 hour ago
        The installed code may be run in different settings, under a different user, with different privileges. Say, it may not run in CI/CD at all, or run only with the test user's privileges.

        Postinstall scripts run at install time, with installer's privileges.

      • 827a 44 minutes ago
        > There's a huge difference, because postinstall scripts are almost guaranteed to run in your CI pipeline. Compromised code probably won't (maybe it will if your test cases test a compromised package). Different attack profile. Worse in some ways (your CI likely has NPM push tokens, which is how this single-package worm became a multi-package self-replicating worm) (your CI pipeline also likely has some level of privileged access to your cloud environment; deployed services are more likely to be highly scoped). But, better in some ways.
      • piperswe 1 hour ago
        A lot of it ends up bundled to run in a browser though, and doesn't end up running in Node.js
      • throwaway27448 1 hour ago
        Surely every layer of defense in depth is a distraction except the one that prevents the problem.
    • amluto 1 hour ago
      There is also little legitimacy to the fact that Rust packages can run unsandboxed code when they build themselves.
      • adamnemecek 1 hour ago
        I feel like it's harder to hide malicious stuff in Rust build scripts.
    • Rohansi 2 hours ago
      This doesn't really fix the issue though because package code is also executed at build time and during testing. Just maybe restricts the scope a little bit.
      • 827a 59 minutes ago
        There's a huge difference, because postinstall scripts are almost guaranteed to run in your CI pipeline. Compromised code probably won't (maybe it will if your test cases test a compromised package). Different attack profile. Worse in some ways (your CI likely has NPM push tokens, which is how this single-package worm became a multi-package self-replicating worm) (your CI pipeline also likely has some level of privileged access to your cloud environment; deployed services are more likely to be highly scoped). But, better in some ways.

        It's childish to believe that because you can't fix everything you shouldn't fix anything. Defense in depth.

      • tkel 1 hour ago
        If you look at the last N npm worms, they all used postinstall scripts.
    • guidedlight 1 hour ago
      Security issues aside, they are a nightmare in enterprise environments where internet and OS access is heavily restricted.
    • nine_k 1 hour ago
      ...and only if you invoke it with --dangerously-run-postinstall-scripts; otherwise it will report an error if a postinstall script is found.

      This is definitely going to affect any packages that need to link to native code and/or compile shims, but these are very few.

    • akoboldfrying 1 hour ago
      With respect, post-install scripts are a total red herring. You're alarmed by them because they are code controlled by someone else that runs on your box, and they could do something bad -- yes, they are, and yes they could.

      But so is the regular code in those packages! It won't run at install time, but something in there will run -- otherwise it wouldn't have been included in the dependencies.

      Thinking that eliminating post-install scripts will have more than a momentary impact on exploitation rates is a sign of not thinking the issue through. Unfortunately the issue is much more nuanced than TFA implies -- it's not at all a case of "Let's just stop putting the wings-fall-off button next to the light switch", it's that the thing we want to prevent (other people's bad code running on our box) cannot be distinguished from the thing we want (other people's good code running on our box) without a whole lot of painstaking manual effort, and avoiding painstaking manual effort is the only reason we even consider running other people's code in the first place.

      • 827a 12 minutes ago
        > There's a huge difference, because postinstall scripts are almost guaranteed to run in your CI pipeline. Compromised code probably won't (maybe it will if your test cases test a compromised package). Different attack profile. Worse in some ways (your CI likely has NPM push tokens, which is how this single-package worm became a multi-package self-replicating worm) (your CI pipeline also likely has some level of privileged access to your cloud environment; deployed services are more likely to be highly scoped). But, better in some ways.
      • apf6 40 minutes ago
        The time difference does matter though. There were some recent worm attacks in NPM that spread very quickly because they used post-install. I don’t remember how long it took NPM to block the packages but it was probably around 30 minutes or so? If it wasn’t for post-install then that same attack would have a much slower spread and thus a smaller blast radius.
  • germandiago 1 hour ago
    I use C++ and Conan with my own recipes and pre-built artifacts.

    This mitigates things to a great extent.

    I do not know who thought that having your dependencies depend on the internet with a zillion users doing stuff to each package was a good idea for enterprise environments...

    It is crazy how much things can get endangered this way.

  • slopinthebag 6 minutes ago
    I think people are overlooking the fact that the javascript ecosystem is run by perpetual beginners who are probably using 5 different SAAS credential managers and still manage to check their creds into a public git repo. No wonder there are so many breaches. Rust developers otoh are typically experts and don't get pwned so easily.
  • brooksc 1 hour ago
    Thoughts and Prayers to those affected
  • dh2022 54 minutes ago
    Kudos to the author : this article read like something out of The Onion.
  • spaqin 1 hour ago
    It's a cultural issue, always feeling the urge to update to the newest possible package for things that are already working fine, without even reading the changelog to see if it's applicable. Cooldowns are only a way to force a bit of patience onto the maintainers... and they work.
    • anonzzzies 45 minutes ago
      That, and package owners updating stuff that needs no updating just so it doesn't look stale/unmaintained. I can use Lisp packages unchanged for 15 years just fine, but a JS one is unmaintained! Oh no! Even though it was finished 15 years ago. So they add nothing (sometimes a breaking change), just to bump a version on npm and GitHub and look maintained. And then everything will update.
  • p-e-w 2 hours ago
    With the recent high-profile attacks on PyPI packages, it’s no longer true that npm is the “only package manager where this regularly happens”.

    In fact, pip is much more dangerous than npm because it lacks a lockfile. uv fixes that, but adoption is proceeding at a snail’s pace.

    • godzillabrennus 2 hours ago
      UV adoption is happening, though. NPM is still the only name in town.
      • manquer 2 hours ago
        Huh ? uv is a package manager not a registry.

        In the JS world there is plenty of competition for package managers: pnpm / yarn / bun are all viable alternatives to npm the package manager.

        Public registries for languages tend to coalesce around one service. Nobody wants to publish their library to 4 different registries.

    • fragmede 2 hours ago
      I don't know about snails, but everything I'm in contact with has moved over to uv, and I can't imagine I'm the only one.
    • lateral5 8 minutes ago
      [flagged]
  • computersuck 13 minutes ago
    Do not fucking use npm. Stay the fuck away from it. Want to write JS? AI can now write vanilla JS for you with no libraries. Own your code.
  • skeledrew 1 hour ago
    No surprise here. That's what you get when you have a language/ecosystem where core devs refuse to fix fundamental flaws, cuz for them breaking backwards compatibility is the worst crime that can ever be committed. And so all that happens in JS-land will eternally be layering lipstick on the pig in the cesspool. Too afraid of going through something similar to the Python 2 -> 3 fiasco, I guess because too many web devs and site admins would be incensed at being forced to fix their broken universe; as if it isn't already broken in its current condition.
  • exabrial 2 hours ago
    I really don't understand why the npm project cannot embrace PGP as a stopgap 'good enough' solution.
    • loloquwowndueo 2 hours ago
      The NIH mentality in the ecosystem would result in a JavaScript pgp library which itself would be an npm package and subject to supply chain attacks. lol.
      • panzi 2 hours ago
        A good part of it is already implemented in web crypto, which is supported by browsers and node. There is a chance that npm could implement something there without extra dependencies. Maybe I'm too optimistic?
    • Gigachad 2 hours ago
      Would that help? Most of these recent attacks, the attackers have gained access to the system that builds the packages. So it would have just signed the malicious build the same.
      • raggi 1 hour ago
        nope, doesn't help. signatures and removal of script points have zero net effect on the value of the target that the ecosystem has, or how easy/hard it is to write a worm. the package code gets run, this is statistically true, and the exploited developers/environments will sign packages, this is also statistically true.
    • saghm 1 hour ago
      Probably the same reason that pretty much no other package manager (or even major email provider, when email is ostensibly the most famous use-case for it) has adopted it: the UX is atrocious.
  • 7e 1 hour ago
    The answer is LLM inspection. Which, sadly, raises the cost of software, especially once evil LLMs start hiding the backdoors better. Long term the answer should be CHERI, in my opinion.
  • eulgro 1 hour ago
    These satire articles on cybersecurity are really entertaining.

    The other one a few days ago was also good: https://nesbitt.io/2026/02/03/incident-report-cve-2024-yikes...

  • joshka 36 minutes ago
    ...so far...
  • qrush 2 hours ago
    [flagged]
    • rileymat2 2 hours ago
      I read it as a comparison of the attitude of helplessness around it, not the acts themselves. So it was a bit meta, but unremarkably inoffensive.
    • mikepurvis 2 hours ago
      I don't think it's comparing them directly or arguing for equivalent seriousness. It is identifying a similarity of mindset where those who have their hands on the levers of power that could materially improve the situation act like there's nothing they can do.
    • mrandish 2 hours ago
      But it's not comparing to school shootings, it's satirizing supposedly responsible parties who continue to deny responsibility despite repeated catastrophic failures which are their responsibility.
    • p-e-w 2 hours ago
      You’re right. Major supply chain attacks affect far more people than school shootings do, and can potentially cost more lives through downstream effects.

      It’s 2026. Software is critical infrastructure for global civilization now. Lives and livelihoods depend on it working reliably. The “it’s just bits on a computer” quip has been outdated for 20 years now.

  • numbsafari 1 hour ago
    [flagged]
  • yegle 2 hours ago
    Vendoring dependencies using git submodules should be a robust mitigation for this problem.
    • raggi 1 hour ago
      subtree is better for this case, you want to encourage actual reading before running. reading won't catch everything but it catches a lot, and the burden isn't as high as people always complain about before they try it.
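        a minimal sketch of the subtree flow (assumes git-subtree is installed, and uses a local stand-in repo in place of a real dependency URL):

```shell
# Create a stand-in "upstream" repo to vendor from.
git init -q dep
echo 'module.exports = 42;' > dep/lib.js
git -C dep add lib.js
git -C dep -c user.email=a@b -c user.name=a commit -qm 'dep v1'
branch=$(git -C dep branch --show-current)

# Vendor it into the app: the dependency's full source lands in
# vendor/dep, sitting right there in the tree to be read.
git init -q app
git -C app -c user.email=a@b -c user.name=a commit -q --allow-empty -m init
cd app
git -c user.email=a@b -c user.name=a subtree add --prefix vendor/dep ../dep "$branch" --squash
```

        updating later is `git subtree pull --prefix vendor/dep <repo> <ref> --squash`, which surfaces the incoming diff in your own history.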
    • saghm 1 hour ago
      This feels like the modern analog of the king, the mice, and the cheese. What cats do I need to bring in to eat my git submodules?