I can't quite make out if this is new or not. The attack vector here seems congruent with a similar exploit from a couple of months ago [1].
But it still might be an open threat. On the email thread Jens seems to think that this is already patched and in stable; he also points out that for this exploit to work (as written in the article) you already need escalated privileges [2]. Catchy title, though.
[1] https://snailsploit.com/security-research/general/io-uring-z...
[2] https://seclists.org/oss-sec/2026/q2/448
What is happening? I see multiple outages and CVEs being reported on HN's front page. I've never seen this many security/incident-related posts on HN's front page.
Some combination of reporting bias given concerns about LLM security capabilities and actual new vulnerabilities found with LLM assistance. Even if exploits and outages are unrelated to LLMs, I'm certainly thinking about whether claude could build these things (or if actors already have).
Slowly at first, and then suddenly. AI assisted anything follows this trend. As capabilities improve, new avenues become "good enough" to automate. Today is security.
I believe a good portion of the CVEs hitting the front page are there more because they are AI-related (found partially or in whole by AI) and make for quick upvotes.
I would caution against thinking it's difficult for an LLM. I've used them in raw data file analysis and they are frequently shockingly good at pulling structures and meaning out of seemingly random data. Disassembled binaries are already structured, so pulling code flow out of them is easier. Mix that with existing disassembly and inspection tooling and an LLM has what is needed to fast-track this kind of vulnerability research. Point being, an LLM with the proper tools can potentially follow code flow from disassembled binaries far more easily than a human can.
I forgot who it was, but someone on YouTube said LLMs already work hooked up to Ghidra. If true, it's only a matter of time until they find similar things in e.g. Windows. I'll wait half a year to a year (think of embargoes), and if there still isn't such work for Windows, I'll conclude that LLMs have a problem disassembling binaries.
Anyone care to share which models and which prompts actually lead to finding these kinds of vulnerabilities? Or the narrowing-down workflow that can get an LLM to discover them? Surely just telling Claude "Find all vulnerabilities in this project LOL" isn't enough? I hope?
Everyone was talking about how Mythos was overblown marketing, and while it may be, they missed the forest for the trees. Capabilities have been escalating for a year now and we're at the point of widespread impact. I don't suspect we'll see a slowdown for a long time.
I agree. It is not like Mythos or other LLMs are insanely smart/superhuman. Many of these vulnerabilities could be discovered fairly easily by trained human experts as well. The problem is more that it requires an insane amount of attention and time of highly-paid experts to shake out these issues vs. an LLM that never gets tired and can analyze a large amount of code at low cost.
Linus' law was wrong because there were never enough (qualified) eyeballs to check the code. LLMs provide an ample supply of eyeballs (though it's not a benefit to open source, since proprietary developers can use the same LLMs).
Same applies to them being good enough to program, but many are so focused on source code generation that they don't get the whole picture.
Thanks to agents and tool calling, there are now business cases that can be fully described by AI tooling, the next step in microservices, serverless, and whatnot.
Naturally with a much smaller team than what was required previously.
AI assistance was explicitly disclosed on yesterday's. Today's lists Claude as one of the two contributors on the GitHub Pages site, at least, so it's very likely as well.
Agents are capable of finding this kind of stuff now and people are having a field day using them to find high-profile CVEs for fun or profit.
Yes, I think people forget that the cyber-war between West and East is very active, with a significant number of attacks being committed by nation states or state-sponsored groups.
No, you can grant yourself this inside an unprivileged user namespace. `unshare -Ur capsh --print` lists the capabilities inside a user namespace and demonstrates that it has both CAP_SYS_ADMIN and CAP_NET_ADMIN.
Almost all distros allow unprivileged user namespaces, and in my opinion this is the right decision, because they're important for browser sandboxing which I think is more important than LPEs.
You're probably right, but that seems like the less important part of this. At that point you've already got an out-of-bounds write. Another comment speculated that you could use PageJack as an alternative exploit path once you have that primitive: https://news.ycombinator.com/item?id=48069623
Some codepaths do ns_capable() (must have capability in owning namespace, reachable via unprivileged user namespaces), some do capable() (must have capability in host user namespace, not reachable via user namespaces at all).
ZCRX can only be enabled by passing capable(CAP_NET_ADMIN), so you need to be privileged on the host.
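To make that distinction concrete, here's a toy model in Rust (purely illustrative; the real checks are the kernel's C functions capable() and ns_capable(), and this glosses over namespace ancestry):

```rust
// Toy model of the two kernel privilege checks; all names illustrative.
#[derive(Clone, Copy, PartialEq, Eq)]
struct UserNs(u32); // 0 would be the initial (host) user namespace

struct Task {
    ns: UserNs,            // namespace the task lives in
    caps_in_own_ns: bool,  // holds CAP_NET_ADMIN inside its own namespace
    caps_in_init_ns: bool, // holds CAP_NET_ADMIN in the host namespace
}

// capable(): the capability must be held in the *initial* user namespace.
// Creating an unprivileged user namespace never satisfies this.
fn capable(t: &Task) -> bool {
    t.caps_in_init_ns
}

// ns_capable(): the capability must be held in the namespace that owns the
// object. An unprivileged user can satisfy this for objects owned by a user
// namespace they created themselves.
fn ns_capable(t: &Task, owning_ns: UserNs) -> bool {
    t.ns == owning_ns && t.caps_in_own_ns
}

fn main() {
    // An unprivileged user after `unshare -Ur`: full caps, but only in its own ns.
    let task = Task { ns: UserNs(1), caps_in_own_ns: true, caps_in_init_ns: false };
    assert!(ns_capable(&task, UserNs(1))); // ns_capable()-gated paths: reachable
    assert!(!capable(&task));              // capable()-gated paths (like ZCRX): not
    println!("ok");
}
```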
Namespaces _may_ result in limits on what you can do with a capability, but a capability is global in scope.
If a kernel feature is gated on cap_sys_admin only, it doesn't matter at all what namespace it is in. Namespace support or additional constraints are not implicit and have to be added at each site that needs them.
People misunderstanding this is partially why we have this latest crop of vulnerabilities.
It is a minimal improvement, due to the introduction of user namespaces and the fallout from choices made for the local convenience of the Docker team, and thus OCI.
It is very important that you realize that any capability is a slice of superuser privileges, and there are no implicit protections, only explicit additional constraints that restrict it in reference to root.
Look at the bounding set for a normal user on a fresh install of rhel/debian based systems:
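(A minimal sketch, if you want to see it on your own box: just echo the kernel's view from /proc. On stock distro kernels an unprivileged process typically reports a full CapBnd mask.)

```rust
// Print this process's capability sets exactly as the kernel reports them.
use std::fs;

fn main() -> std::io::Result<()> {
    let status = fs::read_to_string("/proc/self/status")?;
    for line in status.lines() {
        // CapBnd is the bounding set; CapEff is what is currently effective.
        if line.starts_with("CapBnd:") || line.starts_with("CapEff:") {
            println!("{line}");
        }
    }
    Ok(())
}
```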
Note how trivial it is to gain all of those capabilities: the capabilities(7)[0] man page will help you with all of those.
But capabilities are just a thread-local segmentation, one which grants superuser/root rights in a vertically segmented fashion.
True, if a mechanism chooses to do additional tests based on credentials(7)[1], you can run with those elevated privileges in a lower bound, but that requires explicit coding.
Add in that LSMs are suffering from both limited resources and upstream teams that won't provide guidance or are challenging to work with, and there are literally a hundred commands to either abuse or just LD_PRELOAD to get an unrestricted userns, allowing you to get around whatever basic controls on clone()/unshare() may be implemented.
$ grep -ir "userns," /etc/apparmor.d/ | wc -l
100
With apparmor, every single browser (firefox, chrome, msedge, etc...) as well as busybox, slack, steam, visual studio, ... all have unrestricted user namespaces and the ability to gain the FULL set of capabilities in the bounding set.
If you run `busybox` on a debian system, note how it has nsenter and unshare, so you can't mask those, and yet busybox itself is unconstrained with elevated privileges.
The TL;DR point being: don't assume that any capable() check is in itself a gate, as there are so many ways even for the user nobody to gain these capabilities.
[0] https://man7.org/linux/man-pages/man7/capabilities.7.html
[1] https://man7.org/linux/man-pages/man7/credentials.7.html
1. The privilege check in question here is capable(CAP_NET_ADMIN), so it doesn't work in user namespaces.
2. Most sandboxes (including Docker and Podman) disable creating unprivileged user namespaces inside them via seccomp. In this mode, you end up with a more secure setup than requiring a privileged process to spawn containers (for one, it massively reduces the risk of confused deputy attacks against container runtimes). You can also restrict it with ucounts (as rough of a system as that is).
3. The kernel provides this facility and the feature was added back in early 2013 (before Docker existed and long before they added user namespace support, let alone rootless containers), so I don't understand why you think this is somehow the fault of OCI? We're just making something useful out of existing kernel infrastructure. Folks have asked the kernel to provide a knob to disable unprivileged user namespaces but the maintainer has refused to do so for years (the best you get is ucounts and seccomp). I would also prefer to have such a knob (or even adding a separate ucount with configurable per-user limits) but it's not up to me.
(Disclaimer: I implemented rootless containers for runc back in the day and work on OCI, so I do have some bias here.)
1) The various projects refused even simple requests, like allowing the admin to disable the --privileged flag, back in the rootful days.
2) The choice to break out CRI with zero authorization or mutation at the CRI level, while understandable for the containerd team's needs, exposed every other runtime to an unprotected alternative communication path.
3) The OCI group's refusal to provide guidance to LSM maintainers on minimal configurations, while also handing the responsibility for seccomp profiles to end users, means only actively attacked vectors are protected and it becomes impossible for normal users to operate safely.
4) under the UNIX model it is the caller to clone/fork/unshare that must drop privileges.
5) This model was set in concrete by the OCI standards and now suffers from the frozen caveman pattern.
The capable()[0] kernel check operates as one would expect for granting superior capabilities, and while the work to expand the isolation is something I am sure you are familiar with, you probably also realize that the number of entries in a default user's bounding set also expanded just to support user namespaces.
But to be clear, while the choices that docker/OCI made are understandable from a local greedy-choice perspective, they complicate the entire user space.
K8s mutating admission controllers are a symptom of those choices.
Had the CRI contained a bounding set, enforced at a system level, especially with guidance and tools for users to start from a minimal set they could easily expand, we would be in a better spot.
But as other projects cannot provide meaningful protections that cannot simply be bypassed by calling privileged CRIs, it is also a barrier to convincing them to do the same.
Really there is a larger problem that OCI could be the leader on, but they are the ‘killer app’ and refuse to do so.
The bounding set for user capabilities is driven by containers, and while namespaces are not and never have been a security feature, this blocks their ability to have a strong security posture.
To be clear, expecting every end user to write minimal seccomp profiles is unrealistic, especially when docker prevents devs from accessing the local machine to discover what is happening. I think podman is the only runtime that allows that by default.
Basically, while simplifying moby/containerd/CRI is an understandable choice, the refusal to address the costs of that local optimization has fallout.
[0] https://elixir.bootlin.com/linux/v7.0.5/source/kernel/capabi...
io_uring is a security nightmare: constant privescs and a powerful primitive for syscall smuggling. Worth considering disabling it outright (already the case for most containers afaik).
I was reading similar comments about AF_ALG, which led to the copy-fail exploit. Could we see a trend of moving away from less-used tools/modules that expand the vulnerability footprint?
We at work are currently going through the kernel modules available on Debian by default and deactivating things, yes.
And sorry, but I am ... frustrated by this. Why do my Debian 11 servers (currently upgrading, yes) have support for phone infrastructure from the 90s (ATM), for really obscure file systems like the Andrew File System (AFS), or for running IP across amateur radio (AX.25), all by default? We recently joked that we should start a pot you add a euro to whenever you find ancient discontinued tech you've never heard of that our systems support, so we can have a nice dinner after this.
I do understand that going full Gentoo or Arch as a generally available distro is not feasible. I am also personally intimidated by compiling my own kernel with just what we need. But the amount of strange ancient things supported by default is also quite ridiculous.
You're kind of making my point. Perhaps we will see a trend where support for many things is not available by default and needs to be installed as needed. Linux doesn't need to come with support for this and that (like AX.25) when it can be installed in seconds if truly needed. Doesn't OpenBSD already take this approach?
Desktop and server vulnerabilities are one thing. At least many are actively maintained and will get patched. I have a concern about all the common and cheap internet firewalls and routers that are around, running old software and kernels. Many or most will not get patched. I have some Ubiquiti boxes that are long out of support and run old kernels for instance. The hope is only that there's nothing they expose that gets hit.
I first read this from the author's posting to oss-security. Turns out that the author did agree to revise the blog post for the "admin cap for root shell" part [^0]. [^1] would probably tell more.
[^0]: https://www.openwall.com/lists/oss-security/2026/05/08/10
[^1]: https://www.openwall.com/lists/oss-security/2026/05/08/14
Interesting, I haven't tested this myself but intuitively I think that a 4 byte OOB write is plenty for a data-only attack like [PageJack](https://i.blackhat.com/BH-US-24/Presentations/US24-Qian-Page...), so I don't think hardening against the KASLR leaks discussed in OP would necessarily save you from this attack.
It's been almost half as long since the creation of the operating system under discussion as it has been since the creation of the language under discussion, and there haven't really been any new mainstream operating systems created since then. I don't think it's nearly as obvious as you're implying that if there were a new operating system created today, C would be a good choice for it. If we're talking about non-mainstream OSes, then I'd argue there's already more than enough evidence that languages safer than C are more than capable of it [1].
[1] https://hubris.oxide.computer/
Obviously the way to prevent this is by bounds checking, which is literally in the `770594e` patch. It's just a bug and they happen routinely in all languages. Since this is doing pointer arithmetic, it could just as easily happen in unsafe Rust, for example.
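To illustrate (a sketch with made-up names, not the kernel's code): raw pointer arithmetic compiles to the same unchecked write in Rust, safe indexing would have turned it into a panic, and the fix is the same explicit check either way.

```rust
fn main() {
    let mut area = vec![0u32; 8];
    let niov_idx = 12usize; // attacker-influenced, out of range

    // Unsafe pointer arithmetic reproduces the C bug: no bounds check, UB.
    // unsafe { *area.as_mut_ptr().add(niov_idx) = 0xdead_beef; } // OOB write

    // Safe Rust with the same logic would panic instead of corrupting memory:
    // area[niov_idx] = 0xdead_beef; // panic: index out of bounds

    // The actual fix, in either language, is the explicit check:
    if niov_idx < area.len() {
        area[niov_idx] = 0xdead_beef;
    } else {
        eprintln!("rejected out-of-range index {niov_idx}");
    }
}
```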
If static analysis could actually find these issues with a reasonable false positive rate, the companies behind them would be running them on Linux to get the publicity of having found the issues like all the AI companies are doing now. Imo the good static analysis heuristics are already built into compilers or in open source linters.
The cheap, low-hanging "fruit" lint rules have already been added to today's C/C++ compilers. But these rules can be fragile, depending on the level at which the static analysis occurs: source-level textual pattern matching versus use of an AST/parse tree.
Possible problems within a function should be discoverable.
This particular bug would be hard to discover for a typical linter unless they knew/remembered that there are two execution paths for cleanup of a given element.
It's possible I had seen that blog post and not remembered! I was intending to reference the Onion though (and even googled to make sure I had the wording right), but seeing someone else make the same joke and forgetting is certainly something I would do
"static analysis" is usually deterministic rules you can e.g. put in CI. AI is also somewhat dynamic in that it can execute commands to try stuff out. The best AI vuln finding harnesses work that way, by essentially putting the AI inside of a fuzzer-like environment and telling it to produce a crash.
Technically, the kernel team is sufficiently competent to design and build bespoke tools for themselves. It's probably a question of risk assessment and priorities.
sure, but with unsafe Rust you have a very clear marking for the section of code that requires additional care and attention. it is also customary to include a "SAFETY" comment outlining why using unsafe is OK here
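The customary shape looks something like this (hypothetical function, not from any real codebase):

```rust
/// Returns the element at `idx` without a bounds check.
///
/// # Safety
/// The caller must guarantee `idx < slice.len()`.
unsafe fn get_fast(slice: &[u32], idx: usize) -> u32 {
    // SAFETY: contract documented above; callers validate idx first.
    unsafe { *slice.get_unchecked(idx) }
}

fn main() {
    let data = [10, 20, 30];
    let i = 1;
    assert!(i < data.len());
    // SAFETY: i was just checked against data.len().
    let v = unsafe { get_fast(&data, i) };
    println!("{v}");
}
```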
You actually kind of don't, I use like a zillion crates which have unsafe Rust in them and it's not like I'm sitting here reading every single line of their code. I like Rust for various reasons, but its memory safety is (imo) overstated, especially when doing low-level stuff.
Almost all Rust (95%) is safe Rust. You can opt out of array bounds checks with unsafe { array.get_unchecked(idx) } instead of just typing array[idx]. But I can't remember the last time I saw anyone actually do that in the wild. It's not common practice, even in most low-level code.
Rust is bounds checked by default. C is not. Defaults matter because, without a convincing reason, most people program in the default way.
But one would have to explicitly choose to use unsafe Rust for this instead of ordinary safe Rust. And safe Rust has no particular difficulty writing to slots in an array or slice or vector specified by their index.
Based on the raw number of assorted crates, which has no bearing on kernel code. The more relevant question is, can a performant, cross-architecture, kernel ring-buffer be written in safe Rust?
Hubris, an embedded RTOS-like used in production by Oxide, has ~4% unsafe code in the kernel last I checked. There’s a ring buffer implementation that has one unsafe, for unchecked indexing: https://github.com/oxidecomputer/hubris/blob/master/lib/ring... (this of course does not mean that it is the one ring buffer to rule them all, but it’s to demonstrate that yes, it is at least possible to have one with minimum unsafe.)
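And as a small demonstration that the data structure itself doesn't force unsafe, here's a toy fixed-capacity ring buffer in 100% safe Rust (illustrative only, deliberately not the Hubris design; it trades unchecked indexing for bounds-checked accesses):

```rust
// Toy ring buffer, fully safe Rust: indices are wrapped with %, and every
// access goes through the standard bounds-checked indexing.
struct RingBuf<T, const N: usize> {
    buf: [Option<T>; N],
    head: usize, // next slot to pop
    len: usize,
}

impl<T, const N: usize> RingBuf<T, N> {
    fn new() -> Self {
        Self { buf: std::array::from_fn(|_| None), head: 0, len: 0 }
    }

    fn push(&mut self, v: T) -> Result<(), T> {
        if self.len == N {
            return Err(v); // full: let the caller decide what to drop
        }
        let tail = (self.head + self.len) % N;
        self.buf[tail] = Some(v); // safe, bounds-checked indexing
        self.len += 1;
        Ok(())
    }

    fn pop(&mut self) -> Option<T> {
        if self.len == 0 {
            return None;
        }
        let v = self.buf[self.head].take();
        self.head = (self.head + 1) % N;
        self.len -= 1;
        v
    }
}

fn main() {
    let mut rb: RingBuf<u32, 4> = RingBuf::new();
    assert!(rb.push(1).is_ok());
    assert!(rb.push(2).is_ok());
    assert_eq!(rb.pop(), Some(1));
    assert_eq!(rb.pop(), Some(2));
    assert_eq!(rb.pop(), None);
}
```

Whether those checks are acceptable in a hot path is the performance argument; correctness-wise nothing here needs unsafe.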
It’s always a way lower number than folks assume. Even in spaces that have higher than average usage.
I've always had the impression that people who haven't actually tried to write low-level code in Rust to try to find out where the actual boundary of where they would need unsafe is tend not to realize how far you can push something and build safe abstractions on top of it. Almost every time I've had to wrap an unsafe API, I've been able to find a way to eliminate at least one of the invariants that are documented as needed for safety from propagating upwards, and there have been plenty of times that the specific circumstances of my use-case allowed me to eliminate it entirely.
The entirety of safe Rust is built upon unsafe Rust that's abstracted like this. The fact that you sometimes need unsafe isn't a mark against Rust, but literally the entire premise of the language and the exact problem it's designed to solve.
I doubt it, but you can probably get pretty close.
This is something a lot of people misunderstand about unsafe rust. The safe / unsafe distinction isn't at the crate level. You don't say "this entire module opts out of safety checks". Unsafe is a granular thing. The unsafe keyword doesn't turn off the borrow checker. It just lets you dereference pointers (and do a few other tricks).
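A tiny demonstration of that granularity (toy example): the unsafe block grants exactly one extra power here, the raw-pointer dereference, and the borrow checker keeps running regardless.

```rust
fn main() {
    let mut x = 5i32;
    let p: *mut i32 = &mut x;

    unsafe {
        // What `unsafe` actually grants: dereferencing a raw pointer.
        *p += 1;
    }

    // Still rejected, even inside an unsafe block -- borrowck is always on:
    // let a = &mut x;
    // let b = &mut x; // error[E0499]: cannot borrow `x` as mutable twice

    println!("{x}"); // prints 6
}
```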
Systems code written in rust often has a few unsafe functions which interact with the actual hardware. But all the high level logic - which is usually most of the code by volume - can be written using safe, higher level abstractions.
"Can all of io_uring be written in safe rust?" - probably not, no. But could you write the vast majority of io_uring in safe rust? Almost certainly. This bug is a great example. In this case, the problematic function was this one:
At a glance, this function absolutely could have been written in safe rust. And even if it was unsafe, array lookups in rust are still bounds checked.
"unsafe Rust" is not a binary; you don't opt into it for every single line of code. Given that the entire premise behind the idea that using C instead of Rust is fine is that people should be able to pay close attention and not make mistakes like this, having the number of places you need to look be a tiny fraction of the overall code that's explicitly marked as unsafe is a massive difference from C where literally every line of the code could be hiding stuff like this.
Really? Why? I've not used Rust outside of some fairly small efforts, but I've never found a reason to reach for unsafe. So why is "nearly everyone" else using it?
Let's say you want to call Win32 (or Mac) OS functions: all of a sudden you're doing all kinds of wonky pointer stuff, because that's how these operating systems have been architected. Doing unsafe stuff is pretty inevitable if you want to do anything non-hello-world-ish.
And even if you do end up writing an unsafe block, that should be a massive flag that the code in said block should deserve extra comments on why it is safe, and extra unit tests on verifying that it does not blow up.
How do you know the unsafe operation is safe? What are the preconditions the code block has? Write it down, review it, test it.
Exactly; I feel like a lot of people seem to misunderstand what Rust is trying to solve. It's fundamentally not trying to make unsafe code impossible; it's making the number of places you need to audit a tiny fraction of your codebase, compared to needing to audit the entirety of a C or C++ codebase. When I'm doing code reviews, you'd better believe I'm going to spend some extra time on any unsafe block I see, to figure out if it's necessary and, if so, if it's actually safe (with the default assumption for both of those being that they're not until I can convince myself otherwise).
The thing is, you can actually write quite good C code (see the OpenBSD project). The power of C is that it's pragmatic. It lets you write code with you taking full responsibility for it. To err is human, but we developed a set of practices to handle this (making sure the gun is unloaded and the safety is on before storing it, to avoid putting holes in feet).
I like type checking and other compile-time checks, but sometimes they feel very ceremonial. And all of them are inference-based, so they still rely on the axioms being right and on the chain of rules not being broken somewhere. And in the end they are annotations, not the runtime algorithm.
It may, but it still requires careful annotations. So you should hope that you have not made an error there and described the wrong structure for the code.
It seems like you have this backwards. Messing up lifetimes in safe Rust can't cause unsafety; the compiler checks if the lifetimes are valid, and if they're not, you get a compiler error. You don't need to "hope" you did it right because the entire point is that you can't compile if you didn't.
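A tiny example of what that looks like in practice (the mistaken version is left commented out so the snippet compiles):

```rust
fn main() {
    // The lifetime mistake; uncomment to watch the compiler reject it:
    // let r;
    // {
    //     let x = 42;
    //     r = &x; // error[E0597]: `x` does not live long enough
    // }
    // println!("{r}");

    // The accepted version: the reference never outlives its data.
    let x = 42;
    let r = &x;
    println!("{r}");
}
```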
On the other hand, when you're relying on your ability to "actually write quite good C code"...you'd better hope that you have not made an error there. In practice, some of the most widely used C libraries in the world still seem to have bugs like this, so I don't really understand why you'd think that's a winning strategy.
And even in those programs, only a fraction of the code in them is actually directly making calls to those APIs! Having everything else in safe code still makes it easier to audit than if the entire codebase is in C or C++.
So what? Just because you used the keyword `unsafe` to call an unsafe API does not mean that you are going to use unsafe pointer access to write to a vector.
How many systems have the relevant NICs, and followed the non-automatic setup steps in https://docs.kernel.org/networking/iou-zcrx.html, and are not running within a VM/container disabling io_uring?
This seems on the low impact end of the numerous historical io_uring issues.
So this is another CVE? Or am I misreading this one? "Copy-fail", "DirtyFrag", now "IUrinegOnYou :)"?
Joke aside, we'll see more CVEs in the coming months, and in a sense that's good: it leaves less maneuvering room for bad actors (especially those selling them to the highest bidder).
If this many are public right now, what does that say about the dark matter of private ones? What's the typical public-private rate for this sort of thing/can someone help me calibrate my base rate expectations?
High-privilege access required (CAP_NET_ADMIN); containers/sandboxing wins once again.
Can we make sandboxing the new default now? Flatpak does a good job, but we're still pretty far away for apt/yum/pacman installed packages. AppArmor was a decent step forward, but clearly not enough.
Government agencies probably already have half of these exploits in their private toolbox for years now. Finding and patching them is good, but there probably needs to be some systematic change to prevent them rather than just patching bugs when they get found.
I've seen microkernels mentioned a few times between these LPE posts and I'm curious about why. Would they be fundamentally more secure against forgetting to add bounds checking, or assuming user-provided input buffers should be writable without checking?
Yes, because if a userspace program forgets to do bounds checking or reads the wrong thing, the kernel kills the process. But if the buggy code is the kernel itself, then there's no protection. Microkernels aim to have as little code as required in kernel space.
As other people said in this thread: so many devices won't be patched. And that can easily lead to users and manufacturers moving away from Linux. Linux is in a glass house.
Linux is "falling apart" because it's the highest-profile open source project people can point LLM agents at to find CVEs. It'll come out the other end of this hardened by all of the attention it's getting, but the next few months/years will be... bumpy.
I do think SELinux is a good example of how robust software with poor UX/DX gets undermined by that poor UX/DX. Although I do wonder if AI can help with it?
Pray to God no one ever lets an AI agent run loose on the various leaked Windows source code dumps.
Given Windows' absurd amount of backwards compatibility, chances are pretty high that there are a lot of sleeping dragons buried inside even the modern Windows 10/11 kernel and userland that date back to code and issues from the 90s: code where half the people who worked on it have probably not just departed Microsoft but departed the living in the meantime.
While true, since MinWin and OneCore most of that code has been moved around.
Also, contrary to Linux, Windows 11 (optional on W10) uses sandboxing for the kernel and drivers.
Since Windows XP SP2, Windows has kept getting mitigations, and Microsoft has security teams whose day job is to attack Windows.
They have also been promoting Copilot for C and C++ code review for some time now.
While it won't stop all attacks, it is better than the whole "UNIX is safer than Windows" attitude from the 90's; it turns out it is a matter of how much money is put into it.
If you want real safety above anything else, look into Qubes OS with its sandboxing over everything, or mainframe systems like Unisys ClearPath MCP, with NEWP as the systems language and managed environments.
Regarding the question above about which prompts and workflow actually work:
1. Pick a file to seed as a starting place.
2. Ask the LLM (in an agent harness) to find a vulnerability by starting there.
3. If it claims to have found something, ask another one to create an exploit/verify it/prove it or whatever.
4. If both conclude there is a vuln, then with the latest models you almost certainly found something real.
Just run it against every file in a repo, or select a subset, or have an LLM select files with a simple "what X files look likely to have vulns?".
So basically yes, it is that simple. It's just a matter of having the money to pay for the tokens.
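A minimal sketch of that loop (all glue here is hypothetical: ask_agent is a stand-in for whatever model API or agent harness you actually use, and the seed file name is just an example):

```rust
use std::fs;

// Hypothetical stand-in; wire this to your actual agent harness or API.
fn ask_agent(prompt: &str) -> String {
    todo!("integrate an agent harness here: {prompt}")
}

fn main() {
    // 1. Pick a file to seed as a starting place.
    let seed = "io_uring/zcrx.c"; // example path
    let source = fs::read_to_string(seed).expect("seed file should exist");

    // 2. First pass: hunt for a vulnerability starting from the seed.
    let finding = ask_agent(&format!(
        "Audit this file for memory-safety bugs. Report the single most \
         promising candidate and the vulnerable path:\n{source}"
    ));

    // 3. Second pass: an independent agent tries to verify or refute it.
    let verdict = ask_agent(&format!(
        "Independently verify this claimed vulnerability. Produce a \
         proof-of-concept or refute it:\n{finding}"
    ));

    // 4. Only double-confirmed findings are worth a human's time.
    if verdict.to_lowercase().contains("confirmed") {
        println!("likely real finding:\n{finding}");
    }
}
```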
C code is broken - period
Am I reading this wrong or is this just a way of executing an arbitrary binary with uid=0 if you have both CAP_NET_ADMIN and CAP_SYS_ADMIN?
If you can write modprobe_path, is it really news that you can find a way to execute code?
static markdown version: https://raw.githubusercontent.com/ze3tar/ze3tar.github.io/9d...
That said, putting stuff in a docker container is kinda a light lift that cuts a bunch of attack surface.
Copy Fail [1]
Copy Fail 2: Electric Boogaloo [2]
Dirty Frag [3]
And now this...
[1]: https://copy.fail
[2]: https://github.com/0xdeadbeefnetwork/Copy_Fail2-Electric_Boo...
[3]: https://github.com/V4bel/dirtyfrag
This one is a level less severe.
The title looks like clickbait to me.
https://clang.llvm.org/docs/BoundsSafety.html
On macOS you can try it with the swiftlang LLVM fork [1]. Microsoft also seems to be using it (see above link regarding lib0xc).
[1] https://github.com/swiftlang/llvm-project
Also, nice Onion reference by OP.
see https://scan.coverity.com/projects/linux for the linux-specific scan results - you need to create an account to view the reported defects.
This past couple of weeks isn't a good look for them, given the disclosures of defects found in Linux and Firefox.
There are other free ones, I don't know if they're run as a matter of course.
Also unsafe rust doesn't remove bounds checks. arr[idx] is bounds checked in every context.
You can opt out of array bounds checking by writing unsafe { arr.get_unchecked(idx) }. But that's incredibly rare in practice.
[1] https://cs.stanford.edu/~aozdemir/blog/unsafe-rust-syntax/
So the vast majority of Rust projects involve writing at least one unsafe block? Is that really your claim?
Yes, which is precisely why I write in Rust, because the compiler errs less than I do.
Interesting and important all the same.
Is it considered good practice to publish a vulnerability not yet patched in any stable branch?
Linux is falling apart faster than it can assign these CVEs.