Configuration is backed by xml files in /etc/firewalld and /usr/lib/firewalld instead of the brittle pile of sticks that is the ufw rules files. Use the nftables backend unless you have your own reasons for needing legacy iptables.
Specifically for docker it is a very common gotcha that the container runtime can and will bypass firewall rules and open ports anyway. Depending on your configuration, those firewall rules in OP may not actually do anything to prevent docker from opening incoming ports.
Newer versions of firewalld give an easy way to configure this via StrictForwardPorts=yes in /etc/firewalld/firewalld.conf.
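For reference, the whole change is one line in the config plus a daemon restart (assuming your firewalld build is new enough to have the option):

    # /etc/firewalld/firewalld.conf
    StrictForwardPorts=yes

    # apply it
    sudo systemctl restart firewalld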
I strongly disagree, firewall-cmd is way too complicated. I mean it's probably fine if your main job is being a firewall administrator, but for us who just need to punch a hole in a firewall as a tiny necessary prerequisite for what we actually want to do, it's just too much.
On ufw systems, I know what to do: if a port I need open is blocked, I run 'ufw allow'. It's dead simple, even I can remember it. And if I don't, I run 'ufw --help' and it tells me to run 'ufw allow'.
Firewall-cmd though? Any time I need to punch a hole in the firewall on a Fedora system, I spend way too long reading the extremely over-complicated output of 'firewall-cmd --help' and the monstrous man page, before I eventually give up and run 'sudo systemctl disable --now firewalld'. This has happened multiple times.
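For the record, the incantation I eventually have to dig up each time (before giving up) is something like:

    sudo firewall-cmd --permanent --add-port=8080/tcp
    sudo firewall-cmd --reload

versus the ufw equivalent, 'sudo ufw allow 8080/tcp', which I can actually remember.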
If firewalld/firewall-cmd works for you, great. But I think it's an absolutely terrible solution for anyone whose needs are, "default deny all incoming traffic, open these specific ports, on this computer". And it's wild that it's the default firewall on Fedora Workstation.
If you can, do not expose ports like this: 8080:8080. Instead do this: "192.168.0.1:8080:8080", so it's bound to a private IP. Then use any old method to expose only what you want to the world.
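For example (the address and image name are just placeholders for whatever you actually use):

    # docker run
    docker run -d -p 192.168.0.1:8080:8080 myimage

    # docker compose
    ports:
      - "192.168.0.1:8080:8080"   # bound to one private IP
      # - "8080:8080"             # NOT this: binds 0.0.0.0 on every interface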
In my own use I have 10.0.10.11 on the vm that I host docker stuff. It doesn't even have its own public IP meaning I could actually expose to 0.0.0.0 if I wanted to but things might change in the future so it's a precaution. That IP is only accessible via wireguard and by the other machines that share the same subnet so reverse proxying with caddy on a public IP is super easy.
It's really a trap. I'm surprised they never changed the default to 127.0.0.1 instead of 0.0.0.0. So you would need to explicitly specify it, if you want to bind to all interfaces.
The reason is convenience. There would be a lot more friction if they didn't do it like this for everything other than local development.
Docker also has more traps, and not quite as obvious as this one. For example, it can change the private IP block it's using without telling you. I got hit by this once due to a clash with a private block I was using for some other purpose. There's a way to fix it in the config, but it won't affect already-created containers.
By the way. While we're here. A public service announcement. You probably do NOT need the userland-proxy and can disable it.
Not that I'm aware of. Sorry.
Here's one of my daemon.json files, though. It tames the log file size, sets the log format, and pins the IP block so it won't change like I mentioned above.
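The relevant keys look something like this (values here are illustrative, not the literal file; pick a pool that can't clash with your networks):

    {
      "log-driver": "local",
      "log-opts": { "max-size": "10m", "max-file": "3" },
      "default-address-pools": [
        { "base": "172.31.0.0/16", "size": 24 }
      ],
      "userland-proxy": false
    }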
I had the same experience (postgres/postgres on default port). It took me a few days to find out, because the affected database was periodically re-built from another source. I just noticed that for some periods the queries failed until the next rebuild.
One thing I always forget about is that you have a whole network of 127.0.0.0/8, not just one IP.
So you can create multiple addresses with multiple separate "domains" mapped statically in /etc/hosts, and allow multiple apps to listen on "the same" port without conflicts.
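For example (hostnames are made up; on Linux the whole 127/8 range answers out of the box, while some other OSes need an explicit loopback alias):

    # /etc/hosts
    127.0.0.2   app-one.local
    127.0.0.3   app-two.local

and then each app binds port 80 on its own loopback address without stepping on the other.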
I never thought of using localhost like that; I'm surprised that works, actually. Typically, if you want a private /8 you would use 10.0.0.0/8, but the standard 192.168.0.0/16 gives you a lot of address space too (2^16 = 65,536 addresses).
..actually this is very weird. Are you saying you can bind to 127.0.0.2:80 without adding a virtual IP to the NIC? So the concept of "localhost" is really an entire class A network? That sounds like a network stack bug to me heh.
edit: yeah my route table on osx confirms it. very strange (at least to me)
I keep reading comments by podman fans asking to drop Docker, and yet every time I have tried to use podman it failed on me miserably. IMHO it would be better if podman was not designed and sold as a drop-in replacement for docker but as its own thing.
That sucks; I never had any problem running a Dockerfile in podman. I don't know what I do differently, but as a principle I would filter out any container that messes with stuff like docker-in-docker. Podman doesn't need those kinds of shenanigans.
Also, the Docker Compose tool is a well-known exception to the compatibility story. (There is an unofficial podman compose tool, but it is not feature complete, and quadlets are better anyway.)
I agree with approaching podman as its own thing though. Yes, you can build a Dockerfile, but buildah lets you build an efficient OCI image from scratch without needing root. For those interested, this document¹ explains how buildah compares to podman and docker.
There's a real dearth of blog posts explaining how to use quadlets for the local dev experience, and actually most guides I've seen seem to recommend using podman/Docker compose. Do you use quadlets for local dev and testing?
I just use Podman's Kubernetes yaml as a compose substitute when running everything locally. This way it is fairly similar to production. Docker compose seems very niche to me now.
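Roughly this workflow, where 'mypod' is just a placeholder name:

    # dump an existing pod/containers to Kubernetes YAML
    podman kube generate mypod > mypod.yaml

    # bring it up (and later tear it down) locally from that YAML
    podman kube play mypod.yaml
    podman kube down mypod.yaml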
podman is not a drop-in replacement for Docker. You can replace it with podman but expect to encounter minor inconsistencies and some major differences, especially if you use Docker Compose or you want to use podman in rootless mode. It's far from just being a matter of `alias docker=podman`.
The only non-deprecated way of having your Compose services restart automatically is with Quadlet files which are systemd unit files with extra options specific to containers. You need to manually translate your docker-compose.yml into one or more Quadlet files. Documentation for those leaves a lot to be desired too, it's just one huge itemized man page.
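To give a flavour, a minimal Quadlet sketch (image, names and paths are placeholders), dropped into ~/.config/containers/systemd/ for a rootless setup:

    # myapp.container
    [Unit]
    Description=My app container

    [Container]
    Image=docker.io/library/nginx:latest
    PublishPort=127.0.0.1:8080:80
    Volume=/srv/myapp/data:/usr/share/nginx/html:Z

    [Service]
    Restart=always

    [Install]
    WantedBy=default.target

After a 'systemctl --user daemon-reload' it behaves like any other unit: 'systemctl --user start myapp.service'.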
Rootless exists in Docker, yes, but as OP said, it's not first-class. The setup process is clunky, things break more often. In podman it just works, and podman is leading with features like quadlets, which make docker services just services like any other.
That page does not address rootless Docker, which can be installed (not just run) without root, so it would not have the ability to clobber firewall rules.
it doesn't matter what netfilter frontend you use if you allow outbound connections from any binary.
In order to stop these attacks, restrict outbound connections from unknown / not allowed binaries.
This kind of malware in particular requires outbound connections to the mining pools. Others download scripts or binaries from remote servers, or try to communicate with their C2 servers.
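nftables can't match on the binary itself (per-binary control needs something like SELinux or OpenSnitch), but a per-user egress policy gets you most of the way if your services run under dedicated accounts. A sketch (user names and ports are placeholders):

    table inet egress {
      chain output {
        type filter hook output priority 0; policy drop;

        oifname "lo" accept                               # local traffic
        ct state established,related accept               # replies to inbound connections
        meta skuid "caddy" tcp dport { 80, 443 } accept   # e.g. the reverse proxy
        meta skuid "systemd-resolve" udp dport 53 accept  # DNS
        log prefix "egress-drop: " drop                   # everything else: log and drop
      }
    }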
On the other hand, removing exec permissions from /tmp, /var/tmp and /dev/shm is also useful.
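In practice that's a few fstab lines, roughly like this (sizes are illustrative; test first, since some software expects an executable /tmp):

    tmpfs  /tmp      tmpfs  defaults,noexec,nosuid,nodev,size=1G  0 0
    tmpfs  /dev/shm  tmpfs  defaults,noexec,nosuid,nodev          0 0
    /tmp   /var/tmp  none   bind,noexec,nosuid,nodev              0 0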
> On the other hand, removing exec permissions from /tmp, /var/tmp and /dev/shm is also useful.
Sadly that's more of a duct tape or plaster fix, because any serious malware will launch its scripts with the proper '/bin/bash /path/to/dropped/payload' invocation. A non-exec mount works reasonably well only against actual binaries dropped into those paths, because it's much less common to launch them with the lesser-known '/bin/ld.so /path/to/my/binary' stanza.
I also at one point suggested that the Debian installer should support configuring a noexec mount for /tmp, but it got rejected. Too many packaging scripts depend on being able to run their various steps from /tmp (or more correctly, $TMPDIR).
I agree. That's why I said that it's also useful. It won't work in all scenarios, but in most of the cryptomining attacks, files dropped to /tmp are binaries.
Personally I find just using nftables.conf straightforward enough that I don't really understand the need for anything additional. With iptables, it was painful, but iptables has been deprecated for a while now.
Same here, I'm surprised most Linux users I know like to install firewalld, UFW, or some other overlaying firewall rather than just editing the nftables config directly. It's not very difficult, although I've never really dug deep into the weeds of iptables. I suspect many people who used iptables long ago assume nftables is similar and avoid interacting with it directly out of habit.
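For anyone curious, the kind of minimal nftables.conf I mean; a default-deny-inbound sketch with example ports:

    #!/usr/sbin/nft -f
    flush ruleset

    table inet filter {
      chain input {
        type filter hook input priority 0; policy drop;
        ct state established,related accept
        ct state invalid drop
        iifname "lo" accept
        ip protocol icmp accept
        meta l4proto ipv6-icmp accept
        tcp dport { 22, 80, 443 } accept
      }
      chain forward {
        type filter hook forward priority 0; policy drop;
      }
    }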
Do both. Using the provider's firewall service adds another level of defence. But hiccups may occur and firewall rules may briefly disappear (sync issues, upgrades, VM mobility issues), and your services may then become exposed. Happened to me in the past; I was "lucky" enough that no damage was done.
The problem with firewalld is that it has the worst UX of any program I know. Completely unintuitive options, the program itself doesn’t provide any useful help or hint if you get anything wrong and the documentation is so awful you have to consult the Red Hat manuals that have thankfully been written for those companies that pay thousands per month in support.
It’s not like iptables was any better, but it was more intuitive as it spoke about IPs and ports, not high-level arbitrary constructs such as zones and services defined in some XML file. And since firewalld uses iptables/nftables underneath, I wonder why I need a worse, leaky abstraction on top of what I already know.
Coming from FreeBSD and pf, all Linux firewalls I’ve tried feel clunky _at best_ UX-wise.
I’d love a Linux firewall configured with a sane config file and I think BSD really nailed it. It’s easy to configure and still human readable, even for more advanced firewall gateway setups with many interfaces/zones.
I have no doubt that Linux can do all the same stuff feature-wise, but oh god, the UX :/
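For anyone who hasn't seen it, the kind of pf.conf that spoils you (interface name and ports are placeholders):

    ext_if = "em0"
    set skip on lo
    block in all
    pass out all keep state
    pass in on $ext_if proto tcp from any to any port { 22 80 443 } keep state
    pass in on $ext_if inet proto icmp icmp-type echoreq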
I have been using for many decades both Linux and FreeBSD, on many kinds of computers.
When comparing Linux with FreeBSD, I probably do not find anything more annoying on Linux than its networking configuration tools.
While I am using Linux on my laptops and desktops and on some servers with computational purposes, on the servers that host networking services I much prefer FreeBSD, for the ease of administration.
Yeah, I'm already using nftables and I agree that it's better than eg. iptables (or the numerous frontends for iptables) and probably the best bet we have at this point - but honestly, it's still far from the UX I get from pf - unfortunately :/
Hate it as well. Why should I bother learning about zones and abstracting away ports, addresses, interfaces etc., only to find out pretty soon that my bare-metal server actually always needs fine-grained rules, at least from firewalld's point of view.
Do either of them work for container-to-container traffic?
Imagine container A which exposes tightly-coupled services X and Y. Container B should be able to access only X; container C should be able to access only Y.
For some reason there just isn't a convenient way to do this with Docker or Podman. Last time I looked into it, it required manually juggling the IP addresses assigned to the containers and having the services explicitly bind to them - which is just needlessly complicated. Can firewalls solve this?
I can't answer your question about Docker or Podman, but in Kubernetes there is the NetworkPolicy API, which is designed for exactly this use-case. I'm sure it uses Linux native tooling (iptables, nftables, etc.) under the hood, so it's at least within the realm of the feasible that those tools can be used for this purpose.
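A sketch for the A/B/X example above (labels and the port are placeholders, and it only works with a CNI plugin that actually enforces NetworkPolicy):

    apiVersion: networking.k8s.io/v1
    kind: NetworkPolicy
    metadata:
      name: only-b-may-reach-x
    spec:
      podSelector:
        matchLabels:
          app: a            # the pod exposing service X
      policyTypes:
        - Ingress
      ingress:
        - from:
            - podSelector:
                matchLabels:
                  app: b    # only B may connect
          ports:
            - protocol: TCP
              port: 8080    # the port X listens on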
I’ll just mention Foomuuri here. It's a bit of a spiritual successor to Shorewall and has firewalld emulation to work with tools that expect firewalld.
Thanks! Would be cool to have it packaged for Alpine, since firewalld requires D-Bus. There is awall, but that's still on iptables and IMO a bit clunky to set up.
> Specifically for docker it is a very common gotcha that the container runtime can and will bypass firewall rules and open ports anyway. Depending on your configuration, those firewall rules in OP may not actually do anything to prevent docker from opening incoming ports.
This sounds like great news. I followed some of the open issues about this on GitHub and it never really got a satisfactory fix. I found some previous threads on this "StrictForwardPorts": https://news.ycombinator.com/item?id=42603136.
UFW and firewall-cmd both just use iptables in that context though. The real upgrade is switching to nftables. I know I'm going to need to learn eBPF as the next step too, but for now nftables is readable and easy to grok, especially after you rip out the iptables stuff - though technically nftables is still using netfilter underneath.
And ufw supports nftables btw. I think the real lesson is write your own firewalls and make them non-permissive - then just template that shit with CaC.
So this is part of the "React2Shell" CVE-2025-55182 issue? I find it interesting that this seems to get so little publicity. Almost like the issue is normal or expected. And it looks like the affected versions go back a little over a year. So if you've deployed anything with Next.js over the last 12 months, your web app is now probably part of a million-node botnet. And everyone's advice is just "use docker" or "install a firewall".
I'm not even sure what to say, or think, or even how to feel about the frontend ecosystem at this point. I've been debating on leaving the whole "web app" ecosystem as my main employment ventures and applying to some places requiring C++. C++ seems much easier to understand than what ever the latest frontend fad is. /rant
Frontend churn has chilled out so much over the last few years. The default webapp stack today has been the same for 5 years now, next.js (9yo) react (12yo) tailwind (8yo) postgres (36yo). I'm not endorsing this stack, it just seems to be the norm now.
Compare that to the late 00's and early 10's, when we went through prototype -> mootools -> jquery -> backbone -> angularjs -> ember -> react, all in about 6 years. That's a new recommended framework every year. If you want to complain about fads and churn, hop on over to AI development; they have plenty.
You can write web apps without touching the hottest JS framework of the week. I've never touched these frameworks that try to blur the line between frontend and backend.
Pick a solid technology (.NET, Java, Go, etc...) for the backend and use whatever you want for your frontend. Voila, fewer CVEs and less churn!
I'm hearing about it like crazy because I deployed around 100 Next frontends in that time period. I didn't use server components though so I'm not affected.
> If your app’s React code does not use a server, your app is not affected by this vulnerability. If your app does not use a framework, bundler, or bundler plugin that supports React Server Components, your app is not affected by this vulnerability.
Just a note - you can very much limit CPU usage on docker containers by setting --cpus="0.5" (or cpus: 0.5 in docker compose) if you expect it to be a very lightweight container. This isolation can help prevent one rowdy container from hitting the rest of the system, regardless of whether it's crypto-mining malware, a DDoS attempt or a misbehaving service/software.
Many fail if you do it without any additional configuration. In Kubernetes you can mostly get around it by mounting `emptyDir` volumes to the specific directories that need to be writable, `/tmp` being a common culprit. If they need to be writable and have content that exists in the base image, you'd usually mount an emptyDir to `/tmp` and copy the content into it in an `initContainer`, then mount the same `emptyDir` volume to the original location in the runtime container.
Unfortunately, there is no way to specify those `emptyDir` volumes as `noexec` [1].
I think the docker equivalent is `--tmpfs` for the `emptyDir` volumes.
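A sketch of that emptyDir/initContainer pattern (image and paths are placeholders):

    apiVersion: v1
    kind: Pod
    metadata:
      name: readonly-example
    spec:
      initContainers:
        - name: seed-tmp
          image: example/app:1.0
          # copy the image's baked-in /tmp content into the shared volume
          command: ["sh", "-c", "cp -a /tmp/. /writable-tmp/"]
          volumeMounts:
            - name: tmp
              mountPath: /writable-tmp
      containers:
        - name: app
          image: example/app:1.0
          securityContext:
            readOnlyRootFilesystem: true
          volumeMounts:
            - name: tmp
              mountPath: /tmp   # same volume, now at the original location
      volumes:
        - name: tmp
          emptyDir: {}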
Readonly and rootless are my two requirements for Docker containers. Most images can't run readonly because they try to create a user in some startup script. Since I want my UIDs unique to isolate mounted directories, this is meaningless. I end up having to wrap or copy Dockerfiles to make them behave reasonably.
Having such a nice layered buildsystem with mountpoints, I'm amazed Docker made readonly an afterthought.
Yeah agreed. I use docker-compose. But it doesn't help if the Docker images try to update /etc/passwd, or force a hardcoded UID, or run some install.sh at runtime instead of buildtime.
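The sort of invocation I end up aiming for looks roughly like this (image name and uid are placeholders):

    docker run --rm \
      --read-only \
      --tmpfs /tmp \
      --user 10001:10001 \
      --cap-drop ALL \
      --security-opt no-new-privileges \
      -v "$PWD/data":/data \
      example/app:latest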
While this is a good idea I wonder if doing this could allow the intrusion to go undetected for longer - how many people/monitoring systems would notice a small increase in CPU usage compared to all CPUs being maxed out.
This is true, but it's also easy to set at one point and then later introduce a bursty endpoint that ends up throttled unnecessarily. Always a good idea to be familiar with your app's performance profile but it can be easy to let that get away from you.
The other thing to note is that docker is, for the most part, stateless. So if you're running something that has to deal with questionable user input (images and video, or more importantly PDFs), the move is to stick it on its own VM, cycle the docker container every hour and the VM every 12, and then still be worried about it getting hacked and leaking secrets.
If I can get in once, I can do it again an hour later. I'd be inclined to believe that dumb recycling is not very effective against a persistent attacker.
Most of this is mitigated by running docker in an LXC container (like Proxmox does), which grants a lot more isolation than docker on its own - closer in nature to running separate VMs.
Illumos had a really nice stack for running containers inside jails and zones... I wonder if any of that ever made it into the linux world. If you broke out of the container you'd just be inside a jail which is even more hardened.
No firewall! Wow that's brave. Hetzner will let you configure one that runs outside of the box so you might want to add that too, as part of your defense in depth - that will cover you if you make a mistake with ufw. Personally I keep SSH firewalled only to my home address in this way; if I'm out and about and need access, I can just log into Hetzner's website and change it temporarily.
Firewalls in the majority of cases don't get you much. Yes it's a last line of defense if you do something really stupid and don't even know where or what you configure your services to listen on, but if you don't the difference between running firewalls and not is minuscule.
There are way more important things, like actually knowing that you are running software with a widely known RCE that doesn't even seem to use established mechanisms to sandbox itself.
The way the author describes it, docker being the savior appears to be sheer luck.
The author mentioned they had other services exposed to the internet (Postgres, RabbitMQ) which increases their attack surface area. There may be vulnerabilities or misconfigurations in those services for example.
But if they have to be exposed then a firewall won't help, and if they don't have to be exposed to the internet then a firewall isn't needed either, just configure them not to listen on non-local interfaces.
I'm not sure what you mean, what sounds dangerous to me is not caring about what services are listening to on a server.
The firewall is there as a safeguard in case a service is temporarily misconfigured, it should certainly not be the only thing standing between your services and the internet.
If you're at a point where you are exposing services to the internet but you don't know what you're doing you need to stop. Choosing what interface to listen on is one of the first configuration options in pretty much everything, if you're putting in 0.0.0.0 because that's what you read on some random blogspam "tutorial" then you are nowhere near qualified to have a machine exposed to the internet.
But the firewall wouldn't have saved them if they're running a public web service or need to interact with external services.
I guess you can have the appserver fully firewalled and have another bastion host acting as an HTTP proxy, both for inbound as well as outbound connections. But it's not trivial to set up especially for the outbound scenario.
No you're right, I didn't mean the firewall would have saved them, but just as a general point of advice. And yes a second VPS running opnSense or similar makes a nice cheap proxy and then you can firewall off the main server completely. Although that wouldn't have saved them either - they'd still need to forward HTTP/S to the main box.
A firewall blocking outgoing connections (except those whitelisted through the proxy) would’ve likely prevented the download of the malware (as it’s usually done by using the RCE to call a curl/wget command rather than uploading the binary through the RCE) and/or its connection to the mining server.
In practice, this is basically impossible to implement. As a user behind a firewall you normally expect to be able to open connections with any remote host.
In this model, hosts don’t need any direct internet connectivity or access to public DNS. All outbound traffic is forced through the proxy, giving you full control over where each host is allowed to connect.
It’s not painless: you must maintain a whitelist of allowed URLs and HTTP methods, distribute a trusted CA certificate, and ensure all software is configured to use the proxy.
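As a sketch, with something like Squid the allowlist boils down to (domains and methods are examples):

    acl allowed_dst dstdomain .ubuntu.com .debian.org registry.npmjs.org
    acl allowed_methods method GET HEAD CONNECT
    http_access allow allowed_dst allowed_methods
    http_access deny all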
Correct. From what I understand, Shodan has had for years a search feature in their paid plans to query for "service X listening on non-standard port". The only sane assumption is that any half-decent internet-census[tm] tool has the same as standard by now.
For the record this is only available for their VPS offering and not dedis. If you rent a dedi through their server auction you still need to configure your own firewall.
I have SSH blocked altogether and use wireguard to access the server. If something goes wrong I can always go to the dashboard and reenable SSH for my IP. But ultimately your setup is just as secure. Perhaps a tiny bit less convenient.
Yup. All my servers are behind Tailscale. The only thing I expose is a load balancer that routes TCP (email) and HTTP. That balancer is running docker, fully firewalled (incl. docker bypasses). Every server is behind Hetzner’s firewall in addition to the internal firewall.
App servers run docker, with images that run a single executable (no os, no shell), strict cpu and memory limits. Most of my apps only require very limited temporary storage so usually no need to mount anything. So good luck executing anything in there.
I used to run Wordpress sites way back in the day. They would get hacked monthly, every possible way. Learned so much, including the fact that often your app is your threat. With Wordpress, every plugin is a vector. Also, the ability to easily hop into an instance and rewrite running code (looking at you, scripting languages incl. JS) is terrible. This motivated my move to Go. The code I compiled is what will run. Period.
The only way I've envisioned fail2ban being of any use at all is if you gather IPs from one server and apply the bans across your whole fleet, and I had it running like this for a while. Ultimately I decided that all it does is give you a cleaner log file, since by definition it's working on logs of attacks/attempts that did not succeed. We need to stop worrying about attempts we see in the logs and let software do its job.
> The Reddit post I’d seen earlier? That guy got completely owned because his container was running as root. The malware could: [...]
Is that the case, though? My understanding was that even if I run a docker container as root and the container is 100% compromised, there would still need to be a vulnerability in docker for it to “attack” the host, or am I missing something?
While this is true, the general security stance on this is: Docker is not a security boundary. You should not treat it like one. It will only give you _process level_ isolation. If you want something with better security guarantees, you can use a full VM (KVM/QEMU), something like gVisor[1] to limit the attack surface of a containerized process, or something like Firecracker[2] which is designed for multi-tenancy.
The core of the problem here is that process isolation doesn't save you from whole classes of attack vectors or misconfigurations that open you up to nasty surprises. Docker is great, just don't think of it as a sandbox to run untrusted code.
I hear the "Docker is not a security boundary." mantra all the time, and IIRC it was the official stance of the Docker project a long time ago, but is this really true?
Of course if you have a kernel exploit you'd be able to break out (this is what gvisor mitigates to some extent), nothing seems to really protect against rowhammer/memory timing style attacks (but they don't seem to be commonly used). Beyond this, the main misconfigurations seem to be too wide volume bindings (e.g. something that allows access to the docker control socket from inside the container, or an obviously stupid mount like mounting your root inside the container).
And not without cause. We've been pitching docker as a security improvement for well over a decade now. And it is a security improvement, just not as much as many evangelists implied.
Not 99%. Many people run a hypervisor and then a VM just for Docker.
Attacker now needs a Docker exploit and then a VM exploit before getting to the hypervisor (and, no, pwning the VM ain't the same as pwning the hypervisor).
Agreed - this is actually pretty common in the Proxmox realm of hosters. I segment container nodes using LXC, and in some specific cases I'll use a VM.
Not only does it allow me to partition the host for workloads but I also get security boundaries as well. While it may be a slight performance hit the segmentation also makes more logical sense in the way I view the workloads. Finally, it's trivial to template and script, so it's very low maintenance and allows for me to kill an LXC and just reprovision it if I need to make any significant changes. And I never need to migrate any data in this model (or very rarely).
Virtual machines are treated as a security boundary despite the fact that with enough R&D they are not. Hosting minecraft servers in virtual machines is fine, but not a great idea if they’re cohosted on a machine that has billions of dollars in crypto or military secrets.
Docker is pretty much the same but supposedly more flimsy.
Both have non-obvious configuration weaknesses that can lead to escapes.
> Virtual machines are treated as a security boundary despite the fact that with enough R&D they are not. Hosting minecraft servers in virtual machines is fine, but not a great idea if they’re cohosted on a machine that has billions of dollars in crypto or military secrets.
While I generally agree with the technical argument, I fail to see the threat model here. Is it that some external threat would have prior knowledge that an important target is in close proximity to a less hardened one? It doesn't seem viable to me for nation states to spend the expensive R&D to compromise hobbyist-adjacent services in a hope that they can discover more valuable data on the host hypervisor.
Once such expensive malware is deployed, there's a huge risk that all the R&D money is spent on potentially just reconnaissance.
I think you’re missing the point, which was that high value targets adjacent to soft targets make escapes a legitimate target, but in low value scenarios vm escapes aren’t worth the R&D
Firstly, the attacker just wants to mine Monero with CPU, they can do that inside the container.
Second, even if your Docker container is configured properly, the attacker gets to call themselves root and talk to the kernel. It's a security boundary, sure, but it's not as battle-tested as the isolation of not being root, or the isolation between VMs.
Thirdly, in the stock configuration processes inside a docker container can use loads of RAM (causing random things to get swapped to disk or OOM killed), can consume lots of CPU, and can fill your disk up. If you consider denial-of-service an attack, there you are.
Fourthly, there are a bunch of settings that disable the security boundary, and a lot of guides online will tell you to use them. Doing something in Docker that needs to access hot-plugged webcams? Hmm, it's not working unless I set --privileged - oops, there goes the security boundary. Trying to attach a debugger while developing and you set CAP_SYS_PTRACE? Bypasses the security boundary. Things like that.
If the container is running in privileged mode you can just talk through the docker socket to the daemon on the host, spawn a new container with direct access to the root filesystem, and then change anything you want as root.
Notably, if you run docker-in-docker, Docker is probably not a security boundary. Try this inside any dind container (especially devcontainers): docker run -it --rm --pid=host --privileged -v /:/mnt alpine sh
I disagree with other commenters here that Docker is not a security boundary. It's a fine one, as long as you don't disable the boundary, which is as easy as running a container with `--privileged`. I wrote about secure alternatives for devcontainers here: https://cgamesplay.com/recipes/devcontainers/#docker-in-devc...
Containers are never a security boundary. If you configure them correctly, avoid all the footguns, and pray that there's no container escape vulnerabilities that affect "correctly" configured containers then they can be a crude approximation of a security boundary that may be enough for your use case, but they aren't a suitable substitute for hardware backed virtualization.
The only serious company that I'm aware of which doesn't understand that is Microsoft, and the reason I know that is because they've been embarrassed again and again by vulnerabilities that only exist because they run multitenant systems with only containers for isolation
Virtual machines are never a security boundary. If you configure them correctly, avoid all the footguns, and pray that there's no VM escape vulnerabilities that affect "correctly" configured VMs then they can be a crude approximation of a security boundary that may be enough for your use case, but they aren't a suitable substitute for entirely separate hardware.
Yeah, in some (rare) situations physical isolation is a more appropriate level of security. Or if you want to land somewhere in between, you can use VM's with single tenant NUMA nodes.
But for a typical case, VM's are the bare minimum to say you have a _secure_ isolation boundary because the attack surface is way smaller.
In the end you need to configure it properly and pray there's no escape vulnerabilities. The same standard you applied to containers to say they're definitely never a security boundary. Seems like you're drawing some pretty arbitrary lines here.
You really need to use user namespaces to get this kind of security protection -- running as root inside a container without user namespaces is not secure. Yes, breakouts often require some other bug or misconfiguration but the margin for error is non-existent (for instance, if you add CAP_SYS_PTRACE to your containers it is trivial to break out of them and container runtimes have no way of protecting against that). Almost all container breakouts in the past decade were blocked by user namespaces.
Unfortunately, user namespaces are still not the default configuration with Docker (even though the core issues that made using them painful have long since been resolved).
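For anyone who wants to opt in, it's a single key in /etc/docker/daemon.json followed by a daemon restart (note that enabling it switches to a separate storage path under /var/lib/docker, so existing images and volumes effectively need to be re-pulled/re-created):

    {
      "userns-remap": "default"
    }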
Container escapes exist. Now the question is whether the attacker has exploited it or not, and what the risk is.
Are you holding millions of dollars in crypto/sensitive data? Better assume the machine and data is compromised and plan accordingly.
Is this your toy server for some low-value things where nothing bad can happen besides a bit of embarrassment even if you do get hit by a container escape zero-day? You're probably fine.
This attack is just a large-scale automated attack designed to mine cryptocurrency; it's unlikely any human ever actually logged into your server. So cleaning up the container is most likely fine.
I think a root container can talk to the docker daemon and launch additional containers... with volume mounts of additional parts of the file system, etc. Not particularly confident about that one though.
Unintentional vulnerabilities in Docker and the kernel aside, it can only do that if it has access to the Docker API (usually through a bind mount of the Unix socket). Having access to the Docker API is equivalent to having root on the host.
Well $hit. I have been using Docker for installing NPM modules in interactive projects I was testing out. I believed Docker blocked access to the underlying host (my computer).
Thanks for mentioning it - but now... how does one deal with this?
If you didn’t mount docker.sock or any directory above it (i.e. / or /run by default) or run your containers as --privileged, you’re probably fine with respect to this angle. I’d still recommend rootless containers under unprivileged users* or VMs for extra comfort. Qubes (https://www.qubes-os.org/) is good, even if it’s a little clunkier than it could be.
* but if you’re used to bind-mounting, they’ll be a hassle
Edit: This is by no means comprehensive, but I feel compelled to point it out specifically for some reason: remember not to mount .git writable, folks! Write access to .git is arbitrary code execution as whoever runs git.
As sibling mentioned, unless you or the runtime explicitly mount the docker socket, this particular scenario shouldn't affect you.
You might still want to tighten things up. Just adding on the "rootless" part - running the container runtime as an unprivileged user on the host instead of root - you also want to run npm/node as unprivileged user inside the container. I still see many defaulting to running as root inside the container since that's the default of most images. OP touches on this.
For rootless podman, this will run as a user with your current uid and map ownership of mounts/volumes:
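A rough sketch of that kind of invocation (image and mount are placeholders; --userns=keep-id is the flag doing the uid mapping):

    podman run --rm -it \
      --userns=keep-id \
      --user "$(id -u):$(id -g)" \
      -v "$PWD":/data:Z \
      docker.io/library/alpine sh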
There would be, but a lot of docker containers are misconfigured or unnecessarily privileged, allowing for escape.
Also, if you've been compromised, you may have a rootkit that hides itself from the filesystem, so you can't be sure of a file's existence through a simple `ls` or `stat`.
> but a lot of docker containers are misconfigured or unnecessarily privileged, allowing for escape
Honestly, citation needed. Very rare unless you're literally giving the container access to write to /usr/bin or other binaries the host is running, to reconfigure your entire /etc, access to sockets like docker's, or some other insane level of over reach I doubt even the least educated docker user would do.
While of course they should be scoped properly, people act like some elusive 0-day container escape will get used on their minecraft server or personal blog that otherwise has sane mounts, non-admin capabilities, etc. You aren't that special.
As a maintainer of runc (the runtime Docker uses), if you aren't using user namespaces (which is the case for the vast majority of users) I would consider your setup insecure.
And a shocking number of tutorials recommend bind-mounting docker.sock into the container without any warning (some even tell you to mount it "ro" -- which is even funnier since that does nothing). I have a HN comment from ~8 years ago complaining about this.
Half the vendor software I come across asks you to mount devices from the host, add capabilities or run the container in privileged mode because their outsourced lowest bidder developers barely even know what a container is. I doubt even the smallest minority of their customers protest against this because apparently the place I work at is always the first one to have a problem with it.
>there still would need to be a vulnerability in docker for it to “attack” the host, or am I missing something?
Not necessarily a vulnerability per se. A bridged adapter, for example, lets you do a lot - a few years ago there was a story about how a guy got root in a container, and because the container used a bridged adapter he was able to intercept traffic of account info updates on GCP.
Docker containers with root have rootish rights on the host machine too because the userid will just be 0 for both. So if you have, say, a bind mount that you play fast and loose with, the docker user can create 0777 files outside the docker container, and now we're almost done. Even worse if "just to make it work" someone runs the container with --privileged and then makes the terminal mistake of exposing that container to the internet.
I believe they meant you could create an executable that is accessible outside the container (maybe even as setuid root one), and depending on the path settings, it might be possible to get the user to run it on the host.
Imagine naming this executable "ls" or "echo" and someone having "." in their path (which is why you shouldn't): as soon as you run "ls" in that directory, you've run compromised code.
There are obviously other ways to get that executable to be run on the host; this is just a simple example.
Another example is they would enumerate your directories and find the names of common scripts and then overwrite your script. Or to be even sneakier, they can append their malicious code to an existing script in your filesystem. Now each time you run your script, their code piggybacks.
OTOH if I had written such a script for linux I'd be looking to grab the contents of $(hist) $(env) $(cat /etc/{group,passwd})... then enumerate /usr/bin/ /usr/local/bin/ and the XDG_{CACHE,CONFIG} dirs - some plaintext credentials are usually here.
The $HOME/.{aws,docker,claude,ssh}
Basically the attacker just needs to know their way around your OS. The script enumerating these directories is the 0777 script they were able to write from inside the root access container.
That was added as an edit. It does not cover the inaccuracies contained within. It should more realistically say "this article was generated by an LLM and may contain several errors which I didn't bother to find or correct."
I have a similar setup but with following additional security configurations:
- Hetzner firewall (because ufw doesn't work well with docker) to only allow public access to 443.
- Self-hosted OpenVPN to access all private ports. I also self-host an additional Wireguard instance as a backup VPN.
- Cloudflare Access to protect `*.coolifydomain.com` by default. This would have helped protect the OP's Umami setup since only the OP can access the Umami dashboard. Bypass rules can be created in Cloudflare Access to allow access to other systems that need access using IP or domain.
- Cloudflare Access rules to only allow access to internal admin path such as /wp-admin/ through my VPN IP (or via email OTP to specified email ids).
- Traefik labels on docker-compose files in Coolify to add basic auth to internal services which can't be behind Cloudflare Access such as self-hosted Prefect. This would have also added a login screen before an attacker would see Umami's app.
- I host frontends only on Vercel or Cloudflare workers and host the backend API on the Coolify server. Both of these are confirmed to never have been affected, due to decoupling of application routing.
- Finally, a bash cron script running on the server every 5 minutes that monitors the resources and sends me an alert via Pushover when usage is above the defined thresholds (sketch below). We need monitoring and alerting as well; security measures alone are not enough.
Even with all these steps, there will always be edge cases. That's the caveat of self-hosting but at the same time it's very liberating.
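The cron script is nothing fancy; something in this spirit (the threshold and the Pushover token/user key are placeholders):

    #!/usr/bin/env bash
    set -euo pipefail

    THRESHOLD=80
    CPU_IDLE=$(vmstat 1 2 | tail -1 | awk '{print $15}')   # "id" column of the second sample
    CPU_USED=$((100 - CPU_IDLE))

    if [ "$CPU_USED" -gt "$THRESHOLD" ]; then
      curl -s -F "token=APP_TOKEN" -F "user=USER_KEY" \
           -F "message=CPU at ${CPU_USED}% on $(hostname)" \
           https://api.pushover.net/1/messages.json >/dev/null
    fi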
Hahaha OP could be in deep trouble depending on what types of creds/data they had in that container. I had replied to a child comment but I figure best to reply to OP.
From the root container, depending on volume mounts and capabilities granted to the container, they would enumerate the host directories and find the names of common scripts and then overwrite one such script. Or to be even sneakier, they can append their malicious code to an existing script in the host filesystem. Now each time you run your script, their code piggybacks.
OTOH if I had written such a script for linux I'd be looking to grab the contents of $(hist) $(env) $(cat /etc/{group,passwd})... then enumerate /usr/bin/ /usr/local/bin/ and the XDG_{CACHE,CONFIG} dirs - some plaintext credentials are usually here.
The $HOME/.{aws,docker,claude,ssh}
Basically the attacker just needs to know their way around your OS. The script enumerating these directories is the 0777 script they were able to write from inside the root access container.
Luckily umami in docker is pretty compartmentalized. All data is in the DB, and the DB runs in another container. The biggest thing is the DB credentials. The default config requires no volume mounts, so no worries there. It runs unprivileged with no extra capabilities. IIRC the container doesn't even have bash; a few of the exploits that tried to run weren't able to due to the lack of bash in the scripts they ran.
Deleting and remaking the container will blow away all state associated with it. So there isn't a whole lot to worry about after you do that.
I've been working with a GPU security company for the last few months... I can tell you that neo clouds (generally) do not see security as a high priority—or often, even their responsibility. Many do not have the ability to even know if your GPUs have been compromised, and they expect you'll take responsibility.
Meanwhile, companies think the clouds are looking at it... Anyhow, it is a real problem.
> I can tell you that neo clouds (generally) do not see security as a high priority—or often, even their responsibility.
AWS explicitly spells this out in their Shared Responsibility Model page [0]
It is not your cloud provider's responsibility to protect you if you run outdated and vulnerable software. It's not their responsibility to prevent crypto-miners from running on your instances. It's not even their responsibility to run a firewall, though the major players at least offer it in some form (ie, AWS Security Groups and ACL).
All of that is on the customer. The provider should guarantee the security of the cloud. The customer is responsible for security in the cloud.
Interesting that this got posted today, I also have a server on Hetzner (although I don't think it's relevant) and noticed yesterday that a Monero miner had been installed.
Luckily for me, the software I had installed[1] was in an LXC container running under Incus, so the intrusion never escaped the application environment, and the container itself was configured with low CPU priority so I didn't even notice it until I tried to visit the page and it didn't load.
I looked around a bit and it seemed like an SSH key had been added under the root user, and there were some kind of remote management agents installed. This container was running Alpine so it was pretty easy to identify what processes didn't belong from a simple ps output of the remaining processes after shutting down the actual web application.
In the end, I just scrapped the container, but I did save it in case I ever feel like digging around (probably not). I did learn some useful things:
- It's a good idea to assume your system will get taken over, so ensure it's isolated and suitably resource constrained (looking at you, pay-as-you-go cloud users).
- Make sure you have snapshots and backups, in my case I do daily ZFS snapshots in Incus which makes rolling back to before the intrusion a breeze.
- While ideally anything compromised should be scrapped, rolling back, locking it down and upgrading might be OK depending on the threat.
Regarding the miner itself:
- from what I could see in its configuration, it hadn't actually been configured correctly, so it's possible they do some kind of benchmark and just leave the system silently compromised if it's not "worth it" - they still have a way in to use it for other purposes.
- no attempt had been made at file system obfuscation, which is probably the only reason I really discovered it. There were literally folders in /root lying around with the word "monero" in them, this could have been easily hidden.
- if they hadn't installed a miner and just silently compromised the system, leaving whatever running on it alone (or even doing a better job at CPU priority), I probably never would have noticed this.
I love mining malware - it's reasonably visible and causes almost no damage. Essentially, it's like a bug bounty program that you don't have to manage, doesn't generate costly bullshit reports, and only costs you a few bucks of electricity when a vulnerability is found.
If you have decent network or process level monitoring, you're likely to find it, while you might not realize the vulnerable software itself or some stealthier, more dangerous malware that might exploit it.
I had to nuke my Oracle Cloud box that runs my Umami server. It got hit. Was a good excuse to upgrade version and upgrade all my backup systems etc. Lost a few hours of data while it was returning 500 errors.
Enable connection tracking (if it's not already) and keep looking at the conntrack entries. That's a good way to spot random things doing naughty stuff.
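For example, with the conntrack-tools package:

    # list everything currently tracked
    sudo conntrack -L

    # or watch new connections show up in real time
    sudo conntrack -E -e NEW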
What's considered nowadays the best practice (in terms of security) for running selfhosted workloads with containers? Daemon less, unprivileged podman containers?
And maybe updating container images with a mechanism similar to renovate with "minimumReleaseTime=7days" or something similar!?
You’ll set yourself up for success if you check the dependencies of anything you run, regardless of it being containerised. Use something like Snyk to scan containers and repositories for known exploits and see if anything stands out.
Then you need to run things with as little privilege as possible. Sadly, Docker and containers in general are an anti-pattern here because they’re about convenience first, security second. So the OP should have run the containers as read-only, with tight resource limits and ideally IP restrictions on access if it’s not a public service.
Another thing you can do is use Tailscale, or something like it, to keep things behind a zero-trust, encrypted access model. Not suitable for public services, of course.
As always: never run containers as root. Never expose ports to the internet unless needed. Never give containers outbound internet access. Run containers that you trust and understand, and not random garbage you find on the internet that ships with ancient vulnerabilities and a full suite of tools. Audit your containers, scan them for vulnerabilities, and nuke them from orbit on the regular.
Easier said than done, I know.
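As a sketch of what that looks like in compose terms (image, uid and paths are placeholders):

    services:
      app:
        image: example/app:1.2.3        # pin a version, not :latest
        user: "10001:10001"             # not root
        read_only: true
        tmpfs:
          - /tmp
        cap_drop:
          - ALL
        security_opt:
          - no-new-privileges:true
        ports:
          - "127.0.0.1:8080:8080"       # only reachable via the local reverse proxy
        # for containers that need no outbound access at all, attach them to a
        # network declared with "internal: true" instead of the default bridge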
Podman makes it easier to be more secure by default than Docker. OpenShift does too, but that's probably taking things too far for a simple self hosted app.
I'd argue mining malware is a net benefit to society. I'd much rather have my vulnerable server exploited by mining malware than left alone. If it gets exploited by mining malware, it gives me an extra chance to catch it before something actually bad happens, at almost zero cost to myself and a small reward for the person who found the vuln.
Criminals and the porn industry are almost invariably early adopters of new technologies. For better or worse their use-cases are proof-of-concepts that get expanded and built on, if successful, by more legitimate industries.
That virtually all ends up in the spam folder. Neither half the revenue generated nor half of the utility people extract from it is criminal. I don't know a single person who has used Monero to conduct non-criminal business.
Well, because crypto has been a crime for its entire lifetime. Any of my friends who got funding to work on crypto got debanked, fined, jail time, etc., throughout the entire Obama-Biden era. Only under the latest Trump admin are there new VC funding and things like ETFs, stablecoin settlements, and cross-border regulation being "accepted", even though we have no legal framework.
So... crypto is illegal, so anyone using it is de facto a criminal by definition.
Also, for this particular instance, this is the best bug bounty program I've ever seen. Running a monero node that hits your daily budget cap is not that bad... It could be way worse, like stealing your DB credentials and selling them to the highest bidder... So, crypto actually made this better.
regardless of firewalls and best practices and all, i'd just put all sorts of admin-related stuff behind tailscale (or similar), including ssh. hetzner allows you to have ssh closed in your public ip and still open a terminal via the console if ssh-on-tailscale fails. for the web stuff you should do a similar trick. blog and public websites on the public address, while admin stuff goes on tailscale. and if you do it nicely with letsencrypt you can even have nice hostnames pointing to your private stuff.
I don't think using key-based authentication for SSH and enabling Fail2ban is necessary. Fail2ban is only useful if you keep password authentication. But I might be wrong.
My intuition is that since the SSH server reports what auth methods are available, once a bot sees that password auth is disabled, they will disconnect and not try again.
As a user of iptables, this order makes me anxious. I used to lock myself out of the server many times by first blocking everything and then adding exceptions. I can see that this is different here, as the last command commits the rules...
I had this one too: I first denied all incoming requests and was about to allow SSH, but my SSH connection dropped :) Fortunately, I was able to restore the VM with the provider's VM console.
I find it interesting that the recent trend of moving to self-hosted solutions is sparking this rediscovery of security issues that come with self-hosting. One more time and it will be a cycle!
What trend? All I'm seeing here is further centralisation:
Search engines try to fight slop results with collateral damage mostly in small or even personal websites. Restaurants are happy to be on one platform only: Google Maps. Who needs an expensive website if you're on there and someone posts your menu as one of the pictures? (Ideally an old version so the prices seem cheaper and you can't be pinned down for false advertising.)

Open source communities use Github, sometimes Gitlab or Codeberg, instead of setting up a Forgejo (I host a ton of things myself but notice that the community effect is real, and I also moved away from self hosting a forge). The cherry on top is when projects use Discord chats as documentation and bug reporting "form". Privacy people use Signal en masse, where Matrix is still as niche as it was when I first heard of it. The binaries referred to as open source just because they're downloadable can be found on huggingface; even the big players use that exclusively afaik. Some smaller projects may be hosted on Github, but I have yet to see a self-hosted one. Static websites go on (e.g. Github) Pages and back-ends are put on Firebase.

Instead of a NAS, individuals as well as small businesses use a storage service like Onedrive or Icloud. Some more advanced users may put their files on Backblaze B2. Those who newly dip their toes in self-hosting increasingly use a relay server to reach their own network, not because they need it but to avoid dealing with port forwarding or setting up a way to privately reach internal services. Security cameras are another good example of this: you used to install one, set a password, and forward the port so you could watch it outside the home. Nowadays people expect that "it just works" on their phone when they plug it in, no matter where they are. That this relies on Google/Amazon and that they can watch all the feeds is acceptable for the convenience.

And that's all not even mentioning the death of the web: people don't use websites anymore the way they were meant (as hyperlinked pages) but work with an LLM as their one-stop shop.
Not that the increased convenience, usability, and thus universal accessibility of e.g. storage and private chats is necessarily bad, but the trend doesn't seem to me to be what you think it is.
I can't think of any example of something that became increasingly often self-hosted instead of less across the last 1, 5, or 10 years
If you see a glimmer of hope for the distributed internet, do share because I feel increasingly as the last person among my friends who hosts their own stuff
It's slow, but there are people slowly turning towards a more decoupled internet. The problem is that you still *have* to use cloudflare (or some kind of HTTP proxy); it's just a basic requirement that you can't avoid for anything that people would be interested in taking offline.
I've been on the receiving end of attacks that were reported to be more than 10 Tbps. I couldn't imagine how I would deal with that if I didn't have a 3rd party providing such protection - it would require millions of $$ a year just in transit contracts.
There is an increasing amount of software that attempts to reverse this, but as someone from https://thingino.com/ said: open source is riddled with developers that died of starvation (nobody donates to open source projects).
> "No more exposed PostgreSQL ports, no more RabbitMQ ports open to the internet."
Yikes. I would still recommend a server rebuild. That is _not_ a safe configuration in 2025, whatsoever. You are very likely to have a much better engineered persistent infection on that system.
Also, apparently they run an IoT platform for other users on the same host that can not only visualize sensors, but also trigger (mains-powered) devices.
The right thing to do is to roll out a new server (you have a declarative configuration right?), migrate pure data (or better, get it from the latest backup), remove the attacked machine off the internet to do a full audit. Both to learn about what compromises there are for the future and to inform users of the IoT platform if their data has been breached. In some countries, you are even required by law to report breaches. IANAL of course.
I was surprised by the same thing, a tool I run that uses Next.js without me knowing before.
I noticed this, because Hetzner forwarded me an email from the German government agency for IT security (BSI) that must have scanned all German based IP addresses for this Next.js vulnerability. It was a table of IP addresses and associated CVEs.
Great service from their side, and many thanks to the German taxpayers who fund my free vulnerability scans ;)
I am not an expert in incident reaction, but I thought the safe way was to image the affected machine, turn it off, take a clean machine, boot a clean OS image with the affected image mounted read only in a VM, and do the investigation like that ?
Assume that the malware has replaced system commands, possibly used a kernel vulnerability to lie to you to hide its presence, so do not do anything in the infected system directly ?
Maybe if your company infrastructure is affected, but not the server you host your side projects on with “coolify” - unless IT security is your hobby.
I took issue with this paragraph of the article, on account of several pieces of misinformation, presumably courtesy of Claude hallucinations:
> Here’s the test. If /tmp/.XIN-unix/javae exists on my host, I’m fucked. If it doesn’t exist, then what I’m seeing is just Docker’s default behavior of showing container processes in the host’s ps output, but they’re actually isolated.
1. Files of running programs can be deleted while the program is running. If the program were trying to hide itself, it would have deleted /tmp/.XIN-unix/javae after it started. The nonexistence of the file is not a reliable source of information for confirming that the container was not escaped.
2. ps shows program-controlled command lines. Any program can change what gets displayed here, including the program name and arguments. If the program were trying to hide itself, it would change this to display `login -fp ubuntu` instead. This is not a reliable source of information for diagnosing problems.
It is good to verify the systemd units and crontab, and since this malware is so obvious, it probably isn't doing these two hiding methods, but information-stealing malware might not be detected by these methods alone.
Later, the article says "Write your own Dockerfiles" and gives one piece of useless advice (using USER root does not affect your container's security posture) and two pieces of good advice that don't have anything to do with writing your own Dockerfiles. "Write your own Dockerfiles" is not useful security advice.

I actually think it is. It makes you more intimate with the application and how it runs, and can mitigate one particular supply-chain security vector.

Agreeing that the reasoning is confused but that particular advice is still good I think.
Author forgot one important point: to limit the attack surface, what doesn't need to be public-facing should not be public-facing. Things such as analytics software can easily be accessed through WireGuard, or even more simply by using ssh as a SOCKS proxy.
After reading some comments: this probably goes without saying, but one should be very careful about what to expose to the internet. Sounds like the analytics service could maybe have been made available only over VPN (or something similar, like mTLS etc.)
And for basic web sites, it's much better if it requires no back-end.
Every service exposed increases risk and requires additional vigilance to maintain. Which means more effort.
You can run Docker Scout on one repo for free, and that would alert you that something was using Next.js and had that CVE. AWS ECR has pretty affordable scanning too: 9 cents/image and 1 cent/rescan[*]. Continuous scanning even for these home projects might be worth it.

[*] https://aws.amazon.com/inspector/pricing/
> Here’s the test. If /tmp/.XIN-unix/javae exists on my host, I’m fucked. If it doesn’t exist, then what I’m seeing is just Docker’s default behavior of showing container processes in the host’s ps output, but they’re actually isolated.
/tmp/.XIN-unix/javae &
rm /tmp/.XIN-unix/javae
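# the miner keeps running with its file gone; from the host, /proc can still give it away, e.g.:
ls -l /proc/$(pgrep -f javae)/exe    # the exe symlink would show '/tmp/.XIN-unix/javae (deleted)'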
This article’s LLM writing style is painful, and it’s full of misinformation (is Puppeteer even involved in the vulnerability?).
Seconding what others have said about preferring to read bad human writing. And I don’t want to pick on you – this is a broadly applicable message prompted by a drop in the bucket – but please don’t publish articles beyond your ability to fact check. Just write what you actually know, and when you’re making a guess or you still have open questions at the end of your investigation, be honest about that. (People make mistakes all the time anyway, but we’re in an age where confident and detailed mistakes have become especially accessible.)
Hi Jake! Cool article, and it's something I'll keep in mind when I start giving my self-hosted setup a remodel soon. That said, I have to agree with the parent comment and say that the LLM writing style dulled what would otherwise have been a lovely sysadmin detective work article and didn't make me want to explore your site further.
I'm glad you're up to writing more of your own posts, though! I'm right there with you that writing is difficult, and I've definitely got some posts on similar topics up on my site that are overly long and meandering and not quite good, but that's fine because eventually once I write enough they'll hopefully get better.

Here's hoping I'll read more from you soon!

I tried handwriting https://blog.jakesaunders.dev/schemaless-search-in-postgres/ but I thought it came off as rambling.

Maybe I'll have a go at redrafting this tomorrow in non LLM-ese.
There is nothing wrong with this article. Please continue to write as yourself; it's what people came for.
LLMs have their place. I find it useful to prompt an LLM to fix typos and outright errors and also prompt them to NOT alter the character or tone of the text; they are extraordinarily good at that.
> IT NEVER ESCAPED.

You haven't confirmed this (at least from the contents of the article). You did some reasonable spot checks and confirmed/corrected your understanding of the setup. I'd agree that it looks likely that it did not escape or gain persistence on your host, but in no way have you actually verified this. If it were me I'd still wipe the host and set up everything from scratch again[0].
Also your part about the container user not being root is still misinformed and/or misleading. The user inside the container, the container runtime user, and whether the container is privileged are three different things that are being talked about as one.

Also, see my comment on firewall: https://news.ycombinator.com/item?id=46306974
[0]: Not necessarily drop-everything-you-do urgently, but next time you get some downtime to do it calmly. Recovering like this is a good exercise anyway to make sure you can if you get a more critical situation in the future where you really need to. It will also be less time and work vs actually confirming that the host is uncontaminated.
I did see your comment on Firewall, and you're right about the escape. It seems safe enough for now. Between the hacking and accidentally hitting the front page of HN it's been a long day.
I'm going to sit down and rewrite the article and take a further look at the container tomorrow.
Before rewriting the article, roll out a new server. Seriously. It seems you do not have the skills yet to do a proper audit. It’s better to roll out a pristine server. If that is a lot of work, it is a good moment to learn about declarative system configuration.
At any rate, this happening to you sucks! Hugs from a fellow HN user, I know that things like this can suck up a lot of time and energy. It's courageous to write about such an incident, I think it's useful to a lot of other people too, kudos!
Hey, thanks for taking the time to share your learnings and engage. I'm sure there are HN readers out there who will be better off for it alongside you!
(And good to hear you're leaving the LLMs out of the writing next time <3)
But I am interested in the monero aspect here. Should I treat this as some datapoint on monero's security having held up well so far?

> Should I treat this as some datapoint on monero’s security having held up well so far?
No. The reason attackers mine Monero and not some other cryptocurrency isn't the anonymity. Most cryptocurrencies aren't (meaningfully) mineable with CPUs, Monero (apparently) is. There may be others, but I suspect that they either deliver less $ per CPU-second (due to not being valuable), or are sufficiently unknown and/or painful to set up that the attackers just go with the known default.
Trying to mine Bitcoin directly would be pointless, you'd get no money because you're competing with ASIC miners. Some coins were designed to be ASIC resistant, these are mostly mined on GPUs. Monero (and some other coins) were designed to also be GPU resistant (I think). You could see it as a sign that that property has held up (well enough), but nothing else.
The main reason to use Monero for stuff like this is their mining algo. They made big efforts and changed algorithms several times to make and keep it GPU and ASIC resistant.
If you used the server to mine Bitcoin, you would make approximately zero (0) profit, even if somebody else pays for the server.
But also yes, Monero has technically held up very well.
Something similar happened to me last year, it was with an unsecured user account accessible over ssh with password authentication, something like admin:admin that I forgot about.
At least that's what I think happened because I never found out exactly how it was compromised.
The miner was running as root and its file was even hidden when I was running ls! So I didn't understand what was happening; it was only after restarting my VPS with a rescue image and mounting the root filesystem that I found out the file I was seeing in the process list did indeed exist.
I wonder in a case like this how hard it would be to "steal" the crypto that you've paid to mine. But I assume these people are probably smart enough to where everything is instantly forwarded to their C&C server to prevent that.
This Monero mining also happened with one of my VPS over at interserv.net, when I forgot to log out of the root console in web-based terminal console to one of my VPS and closed its browser tab instead.
> when I forgot to log out of the root console in web-based terminal console to one of my VPS and closed its browser tab instead.
This should not enable the compromise of your session, i.e. either there was some additional vulnerability (and not logging out widened the window during which it could be exploited), or the attack was totally unrelated. Either way, you should find out the real cause.
If I'm not wrong, a Hetzner VM by default has no firewall enabled. If you are coming from providers with different default settings, that might bite you. Containers that you thought were not open to the internet have been open all this time. Two firewalls failed here: Docker bypassed ufw, and there was no external firewall either.
You have to define a firewall policy and attach it to the VM.
I know we aren't supposed to rely on containers as a security boundary, but it sure is great hearing stories like this where the hack doesn't escape the container. The more obstacles the better I guess.
The first step I would take is running podman instead of Docker to prevent container escapes. Podman can be run truly rootless and doesn't mess with your firewall. Next I would drop all caps if possible.
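A sketch of what that can look like for something like the Umami container (the image tag is a guess, and --read-only may need an extra --tmpfs depending on the app):
podman run -d --name umami \
  --read-only --cap-drop=ALL --security-opt no-new-privileges \
  --memory 512m --cpus 1 \
  -p 127.0.0.1:3000:3000 \
  ghcr.io/umami-software/umami:postgresql-latest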
What's the difference between running Podman and running Docker in rootless mode? (Other than Docker messing with the firewall, which apparently OP doesn't know about… yet). I understand Podman doesn't require a daemon, but is that all there is to it, or is there something I'm missing?
The runtime has been designed from the ground up to be run daemonless and rootless. They also have a K8s runtime, that has an extremely small surface, just enough to be K8s compliant.
But podman has also great integration with systemd. With that you could use a socket activated systemd unit, and stick the socket inside the container, instead of giving the container any network at all. And even if you want networking in the container, the podman folks developed slirp4netns, which is user space networking, and now something even better: passt/pasta.
I had something similar, but I was lucky enough to catch it myself. I've used SSH for years, and never knew that it - by default - also accepts password logins. Maybe dumb on my part, but there you go...
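If you're unsure what your own sshd is doing, something along these lines shows the effective settings and turns password logins off (the drop-in filename is just a convention):
sudo sshd -T | grep -Ei 'passwordauthentication|kbdinteractiveauthentication|permitrootlogin'
echo 'PasswordAuthentication no' | sudo tee /etc/ssh/sshd_config.d/50-no-passwords.conf    # assumes the stock "Include /etc/ssh/sshd_config.d/*.conf"
sudo systemctl reload ssh    # 'sshd' on RHEL-likes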
Would "user root" without --privileged and excessive mounts have enabled a container escape, or just exposed additional attack surface that potentially could have allowed the attacker to escape if they had another exploit?
They would need a vulnerability in containerd or the kernel to escape the sandbox and being root in the sandbox would give them more leeway to exploit that vulnerability.
But if they do have a vulnerability and manage to escape the sandbox then they will be root on your host.
Running your processes as an unprivileged user inside your containers reduces the possibility of escaping the sandbox; running your containers themselves as an unprivileged user (rootless podman or docker for example) reduces the attack surface when they manage to escape the sandbox.
Would it be better for the victim if that was ransomware (asking for Apple gift cards) or some malware that stealthily siphons off data until it finds something valuable?
As an aside, if you're using a Hetzner VPS for Umami you might be over-specced. I just cut my Hetzner bill by $4/mo by moving my Umami box to one of the free Oracle Cloud VPS after someone on here pointed out the option to me. Depends whether this is a hobby thing or something more serious, but that option is there.
All fine and well, but oracle will threaten to turn off your instance if you don’t maintain a reasonable average CPU usage on the free hosts, and will eventually do so abruptly.
This became enough of a hassle that I stopped using them.
Do you mean if it’s idle, or if it’s maxed out? I’ve had a few relatively idle free-tier VMs with Oracle and I’ve not received any threats of shutoff over the last 3 years I’ve had them online.
I assumed the same, but as long as you keep a credit card on file apparently they will let you idle it too. I went in and set my max budget at $1/mo and set alerts too, just in case.
You might want to harden those outbound firewall rules as another step. Did the Umami container need the ability to initiate connections? If not, that would eliminate the ability to do the outbound scans.
It could also prevent something from exfiltrating sensitive data.
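One way to sketch that with Docker's own hook, the DOCKER-USER chain (the bridge subnet and destination are examples; -I inserts at the top, so the DROP is added first and evaluated last):
# everything leaving the default bridge gets dropped...
sudo iptables -I DOCKER-USER -s 172.17.0.0/16 -j DROP
# ...except replies to inbound connections and the one upstream this app actually needs
sudo iptables -I DOCKER-USER -s 172.17.0.0/16 -m conntrack --ctstate ESTABLISHED,RELATED -j ACCEPT
sudo iptables -I DOCKER-USER -s 172.17.0.0/16 -d 10.0.10.5 -p tcp --dport 5432 -j ACCEPT
# (depending on how your containers resolve DNS you may also need to allow port 53)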
ASICs do dominate Bitcoin mining but Monero's POW algorithm is supposed to be ASIC resistant. Besides, who cares if it's efficient when it's someone else's server?
Monero's proof of work (RandomX) is very ASIC-resistant and although it generates a very small amount of earnings, if you exploit a vulnerability like this with thousands or tens of thousands of nodes, it can add up (8 modern cores 24/7 on Monero would be in the 10-20c/day per node range). OP's VPS probably generated about $1 for those script kiddies.
I think so, but it is hard to say. Could be a lot of people with extra power (or stolen power), but their own equipment. I mine myself with waste solar power
If it's cold and you're going to be running a heater anyway, and your heat is resistive, then running a cryptominer is just as efficient and returns a couple dollars back to you. It effectively becomes "free" relative to running the heater.
If you use a heat pump, or you rely on burning something (natural gas, wood, whatever) to generate heat, then the math changes.
This is the PoW scheme that Monero currently uses:
> RandomX utilizes a virtual machine that executes programs in a special instruction set that consists of integer math, floating point math and branches.
> These programs can be translated into the CPU's native machine code on the fly (example: program.asm).
> At the end, the outputs of the executed programs are consolidated into a 256-bit result using a cryptographic hashing function (Blake2b).
I doubt that anyone managed to create an ASIC that does this more efficiently and cost-effectively than a basic CPU.
So, no, probably no one is mining Monero using an ASIC.
Yes, for Monero it is the only real viable option. I'd also assume that the OP's instance is one of many other victims whose total mining might add up to a significant amount of crypto.
If the effectiveness of mining is represented as profit divided by the cost of running the infrastructure, then a CPU that someone else is paying for is worth it as long as the profit is greater than zero.
Are you referring to user namespaces and, if so, how does that kind of break out to host root work? I thought the whole point of user namespaces was your UID 0 inside the container is UID 100000 or whatever from the perspective of outside the container. Escaping the container shouldn't inherently grant you ability to change your actual UID in the host's main namespace in that kind of setup, but I'm not sure Docker actually leverages user namespaces or not.
E.g. on my systemd-nspawn setup with --private-users=pick (enables user namespacing) I created a container and gave it a bind mount. From the container it appears like files in the bind mount created by the container namespace's UID 0 are owned by UID 0 but from outside the container the same file looks owned by UID 100000. Inverted, files owned by the "real" UID 0 on the host look owned by 0 to the host but as owned by 65534 (i.e. "nobody") from the container's perspective. Breaking out of the container shouldn't inherently change the "actual" user of the process from 100000 to 0 any more than breaking out of the container as a non-0 UID in the first place - same as breaking out of any of the other namespaces doesn't make the "UID 0" user in the container turn into "UID 0" on the host.
Users in user namespaces are granted the capabilities that root has; user namespaces themselves need to be locked down to prevent that, and if a user with root capabilities escapes the namespace, they have those capabilities on the host.
They also expose kernel interfaces that, if exploited, can lead to the same.
In the end, namespaces are just for partitioning resources, using them for sandboxes can work, but they aren't really sandboxes.
This article is very interesting at first but I once again get disappointed after reading clear signs of AI like "Why this matters" and "The moment of truth", and then the whole thing gets tainted with signs all over the place.
Yeah personally I’d much rather read a poorly constructed article with actually interesting content than the same content put into the formulaic AI voice.
>Edit: A few people on HN have pointed out that this article sounds a little LLM generated. That’s because it’s largely a transcript of me panicking and talking to Claude. Sorry if it reads poorly, the incident really happened though!
For what it's worth, this is not an excuse, and I still don't appreciate being fed undisclosed slop. I'm not even reading it.
Hahaha, I did tell him this afternoon. This is the bloke who has the same password for all his banking apps despite me buying him 1password though. The imminent threat from RCE's just didn't land.
Buying someone 1Pass, or the like, and calling it good is not enough. People using password managers forget how long it takes to visit all of the sites you use to create that site's record, then update the password to a secure one, and then log out and log back in with the new password to test it is good. For a lot of people having a password manager bought for them is going to be over it after the second site. Just think about how many videos on TikTok they could have been watching instead
Yeah, mom and I sat down one afternoon and we changed all of her passwords to long, secure ones, generated by 1Password. It was a nice time! It also helped her remember all of the different services she needs to access, and now they're all safely stored with strong passwords. And it was a nice way to connect and spend some time together. :)
tl;dr: He got hacked but the damage was restricted to one docker container running Umami (which is built on top of Next.js). Thankfully, he was running the docker container as an unprivileged non-root user, which saved him big time, since the attack was limited to the container and could not access the entire host/filesystem.
Is there ever a reason someone should run a docker container as root ?
If you're using the container to manage stuff on the host, it'll likely need to be a process running as root. I think the most common form of this is Docker-in-Docker style setups where a container is orchestrating other containers directly through the Docker socket.
As others have mentioned, a firewall might have been useful in restricting outbound connections to limit the usefulness of the machine to the hacker after the breach.
An inbound firewall can only help protect services that aren't meant to be reachable on the public internet. This service was exposed to the internet intentionally so a firewall wouldn't have helped avoid the breach.
The lesson to me is that keeping up with security updates helps prevent publicly exposed services from getting hacked.
I also run Umami, but patched once the CVE patch was released. Also, I only expose the tracking js endpoint and /api/send via Caddy publicly (though /api/send might be enough to exploit the vuln). To actually interact with the Umami UI I use Twingate (similar to Tailscale) to tunnel into the VPC locally.
I still can't believe that there are so many people out here popping boxen and all they do is solve drug sudokus with the hardware. Hacks are so lame now.
Unless run as root, this could return file not found because of missing permissions, and not just because the file doesn't actually exist, right?
> “I don’t use X” doesn’t mean your dependencies don’t use X
That is beyond obvious, and I don't understand how anyone who runs dozens of containers on their server would feel safe after reading about a CVE in a widely used technology. I have docker containers, and as soon as I read the article I went and checked, because I have no idea what technology most of them are built with.
> No more Umami. I’m salty. The CVE was disclosed, they patched it, but I’m not running Next.js-based analytics anymore.
Yeah, my Umami box was hit, but the time between the CVE disclosure and my box getting smacked was incredibly low. Umami patched it very quickly. And then patched it again a second time when the second CVE dropped right after.
Nothing is immune. What analytics are you going to run? If you roll your own you'll probably leave a hole somewhere.
Stupid question probably, but: how can it not be routed through a firewall? If you have it at home, it's behind a router that should have a firewall already, right? And just forwards the one port you expose to the server?
Cloudflare can certainly do more (e.g. protect against DoS and hide your personal IP if your server is at home).
Not expose the server IP is one practice (obfuscation) in a list of several options.
But that alone would not solve the problem of an RCE over HTTP, which is why edge proxy providers like Cloudflare[0] and Fastly[1] proactively added protections to their WAF products.
Even Cloudflare had an outage trying to protect its customers[3].
DNS is no issue. External DNS can be handled by Cloudflare and their WAF. Their DNS service can obfuscate your public IP, or ideally not need to use it at all with a Cloudflare tunnel installed directly on the server. This is free.
Backend access trivial with Tailscale, etc.
Public IP doesn't always need to be used. You can just leave it an internal IP if you really want.
The tunnel doesn't have to use the public IP inbound; the Cloudflare tunnel makes outbound calls, so inbound can be locked down entirely.
If you are using Cloudflare's DNS they can hide your IP on the DNS record, but it would still have to be locked down; some folks find ways to tighten that up too.
If you're using a bare metal server it can be broken up.
It's fair that it's a 3rd party's castle. At the same time until you know how to run and secure a server, some services are not a bad idea.
Some people run pangolin or nginx proxy manager on a cheap vps if it suits their use case which will securely connect to the server.
We are lucky that many of these ideas have already been discovered and hardened by people before us.
Even when I had bare metal servers connected to the internet, I would put a firewall like pfsense or something in between.
What does the tunnel bring except DoS protection and hiding your IP? And what is the security concern with divulging your IP? Say when I connect to a website, the website knows my IP and I don't consider this a security risk.
If I run vulnerable software, it will still be vulnerable through a Cloudflare tunnel, right?
Genuinely interested, I'm always scared to expose things to the internet :-).
Many ways. Using a "bastion host" is one option, with something like wireguard or tinc. Tailscale and similar services are another option. Tor is yet another option.
Free way - sign up for a cloudflare account. Use the DNS on cloudflare, they will put their public ip in front of your www.
Level 2 is install the cloudflare tunnel software on your server and you never need to use the public IP.
Backend access securely? Install Tailscale or headscale.
This should cover most web hosting scenarios. If there's additional ports or services, tools like nginx proxy manager (web based) or others can help. Some people put them on a dedicated VPS as a jump machine.
This way using the Public IP can almost be optional and locked down if needed. This is all before running a firewall on it.
It's often possible to lock down the public IP entirely to not accept connections except what's initiated from the inside (like the cloudflare tunnel or otherwise reaching out).
Something like a Cloudflare+tunnel on one side, tailscale or something to get into it on the other.
Folks other than me have written decent tutorials that have been helpful.
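For reference, the tunnel part is roughly this (tunnel name, hostname and port are placeholders, and exact flags may differ by cloudflared version):
cloudflared tunnel login
cloudflared tunnel create my-tunnel
cloudflared tunnel route dns my-tunnel app.example.com
cloudflared tunnel run --url http://localhost:3000 my-tunnel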
I disrecommend UFW.
firewalld is a much better pick in current year and will not grow unmaintainable the way UFW rules can.
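For the day-to-day "open this specific port" case it's only a couple of commands (service/port below are examples, default zone assumed):
sudo firewall-cmd --permanent --add-service=https
sudo firewall-cmd --permanent --add-port=8443/tcp
sudo firewall-cmd --reload
sudo firewall-cmd --list-all    # sanity-check what is actually open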
By the way. While we're here. A public service announcement. You probably do NOT need the userland-proxy and can disable it.
/etc/docker/daemon.json
{ "userland-proxy": false }
Some quick searching yields generic advice about keeping everything updated or running in rootless mode.
So you can create multiple addresses with multiple separate "domains" mapped statically in /etc/hosts, and allow multiple apps to listen on "the same" port without conflicts.
..actually this is very weird. Are you saying you can bind to 127.0.0.2:80 without adding a virtual IP to the NIC? So the concept of "localhost" is really an entire class A network? That sounds like a network stack bug to me heh.
edit: yeah my route table on osx confirms it. very strange (at least to me)
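At least on Linux it's easy to see without configuring anything (port is arbitrary):
python3 -m http.server 8080 --bind 127.0.0.2 &
curl -sI http://127.0.0.2:8080/ | head -n1    # answers, even though nothing was explicitly assigned 127.0.0.2
kill %1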
Also the Docker Compose tool is a well-known exception to the compatibility story. (There is some unofficial podman compose tool, but that is not feature complete and quadlets are better anyway.)
I agree with approaching podman as its own thing though. Yes, you can build a Dockerfile, but buildah lets you build an efficient OCI image from scratch without needing root. For those interested, this document¹ explains how buildah compares to podman and docker.
1. https://github.com/containers/buildah/tree/main/docs/contain...
The only non-deprecated way of having your Compose services restart automatically is with Quadlet files which are systemd unit files with extra options specific to containers. You need to manually translate your docker-compose.yml into one or more Quadlet files. Documentation for those leaves a lot to be desired too, it's just one huge itemized man page.
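For reference, a minimal Quadlet translation of a one-service compose file looks something like this (rootless path shown; image and port are placeholders):
mkdir -p ~/.config/containers/systemd
cat > ~/.config/containers/systemd/umami.container <<'EOF'
[Container]
Image=ghcr.io/umami-software/umami:postgresql-latest
PublishPort=127.0.0.1:3000:3000

[Service]
Restart=always

[Install]
WantedBy=default.target
EOF
systemctl --user daemon-reload && systemctl --user start umami.service    # enable lingering if it should start at boot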
Same as for docker, yes?
https://docs.docker.com/engine/security/rootless/
Networking is just better in podman.
That page does not address rootless Docker, which can be installed (not just run) without root, so it would not have the ability to clobber firewall rules.
In order to stop these attacks, restrict outbound connections from unknown / not allowed binaries.
This kind of malware in particular requires outbound connections to the mining pools. Others download scripts or binaries from remote servers, or try to communicate with their C2 servers.
On the other hand, removing exec permissions from /tmp, /var/tmp and /dev/shm is also useful.
Sadly that's more of a duct tape or plaster, because any serious malware will launch its scripts with the proper '/bin/bash /path/to/dropped/payload' invocation. A non-exec mount works reasonably well only against actual binaries dropped into those paths, because it's much less common to launch them with the less known '/bin/ld.so /path/to/my/binary' stanza.
I've also at one time suggested that Debian installer should support configuring a read-only mount for /tmp, but got rejected. Too many packaging scripts depend on being able to run their various steps from /tmp (or more correctly, $TMPDIR).
If this weren't the case, plenty of containers could probably have a fully read-only filesystem.
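In docker/podman terms that combination ends up looking roughly like this (image name and tmpfs size are placeholders):
docker run -d --read-only \
  --tmpfs /tmp:rw,noexec,nosuid,size=64m \
  --cap-drop=ALL --security-opt no-new-privileges \
  your-registry/your-image:tag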
You can also restrict outbound connections to cryptomining pools and malicious IPs. For example by using IOCs from VirusTotal or urlhaus.bazaar.ch
- Configuration management (ansible, salt, chef, puppet)
- Preconfigured images (NixOS, packer, Guix, atomic stuffs)
For a one-off: pssh
With ufw gui you need a single checkbox - block incoming connections.
It's not like iptables was any better, but it was more intuitive as it spoke about IPs and ports, not high-level arbitrary constructs such as zones and services defined in some XML file. And since firewalld uses iptables/nftables underneath, I wonder why I need a worse, leaky abstraction on top of what I already know.
I truly hate firewalld.
I’d love a Linux firewall configured with a sane config file and I think BSD really nailed it. It’s easy to configure and still human readable, even for more advanced firewall gateway setups with many interfaces/zones.
I have no doubt that Linux can do all the same stuff feature-wise, but oh god the UX :/
I have been using for many decades both Linux and FreeBSD, on many kinds of computers.
When comparing Linux with FreeBSD, I probably do not find anything more annoying on Linux than its networking configuration tools.
While I am using Linux on my laptops and desktops and on some servers with computational purposes, on the servers that host networking services I much prefer FreeBSD, for the ease of administration.
Imagine container A which exposes tightly-coupled services X and Y. Container B should be able to access only X, container C should be able to access only Y.
For some reason there just isn't a convenient way to do this with Docker or Podman. Last time I looked into it, it required manually juggling the IP addresses assigned to the container and having the service explicitly bind to them, which is just needlessly complicated. Can firewalls solve this?
I mean, there are some payload-over-payload things like GRE, VPE/VXLAN/VLAN or IPsec that need to be written in raw nft if using Foomuuni, but it works!
But I love the Shorewall approach and your configuration gracefully encapsulated Shorewall mechanic.
Disclaimer: I maintain vim-syntax-nftables syntax highlighter repo at Github.
This sounds like great news. I followed some of the open issues about this on GitHub and it never really got a satisfactory fix. I found some previous threads on this "StrictForwardPorts": https://news.ycombinator.com/item?id=42603136.
And ufw supports nftables btw. I think the real lesson is write your own firewalls and make them non-permissive - then just template that shit with CaC.
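For what it's worth, a hand-written non-permissive baseline in raw nftables is tiny; something along these lines (ports are examples; note the flush, and that Docker injects its own forwarding rules separately, so mind the interaction):
sudo tee /etc/nftables.conf >/dev/null <<'EOF'
#!/usr/sbin/nft -f
flush ruleset
table inet filter {
  chain input {
    type filter hook input priority 0; policy drop;
    iif "lo" accept
    ct state established,related accept
    meta l4proto { icmp, ipv6-icmp } accept
    tcp dport { 22, 443 } accept
  }
}
EOF
sudo systemctl enable --now nftables    # service name on Debian-likes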
I'm not even sure what to say, or think, or even how to feel about the frontend ecosystem at this point. I've been debating on leaving the whole "web app" ecosystem as my main employment ventures and applying to some places requiring C++. C++ seems much easier to understand than what ever the latest frontend fad is. /rant
Compare that to what we had in the late 00's and early 10's: we went through prototype -> mootools -> jquery -> backbone -> angularjs -> ember -> react, all in about 6 years. That's a new recommended framework every year. If you want to complain about fads and churn, hop on over to AI development, they have plenty.
Pick a solid technology (.NET, Java, Go, etc...) for the backend and use whatever you want for your frontend. Voila, less CVEs and less churn!
Unless you're running a static html export - eg: not running the nextjs server, but serving through nginx or similar
> If your app’s React code does not use a server, your app is not affected by this vulnerability. If your app does not use a framework, bundler, or bundler plugin that supports React Server Components, your app is not affected by this vulnerability.
https://react.dev/blog/2025/12/03/critical-security-vulnerab...
So if you have a backend that supports RSC, even if you don't use it, you can still be vulnerable.
GP said they only shipped front ends but that can mean a lot.
Edit: link
https://nvd.nist.gov/vuln/detail/CVE-2025-29927
That plus the most recent React one, and you have a culture that does not care about its customers but rather chases fads to further greedy careers.
Unfortunately, there is no way to specify those `emptyDir` volumes as `noexec` [1].
I think the docker equivalent is `--tmpfs` for the `emptyDir` volumes.
1: https://github.com/kubernetes/kubernetes/issues/48912
Having such a nice layered buildsystem with mountpoints, I'm amazed Docker made readonly an afterthought.
There are way more important things, like actually knowing that you are running software with a widely known RCE that doesn't even use established mechanisms to sandbox itself, it seems.
The way the author describes docker being the savior appears to be sheer luck.
Good security is layered.
Just use a firewall.
The firewall is there as a safeguard in case a service is temporarily misconfigured, it should certainly not be the only thing standing between your services and the internet.
I suggest people fuck around and find out, just limit your exposure. Spin up a VPS with nothing important, have fun, and delete it.
At some point we are all unqualified to use the internet and we used it anyway.
No one is going to die because your toy project got hacked and you are out $5 in credits, you probably learned a ton in the process.
I guess you can have the appserver fully firewalled and have another bastion host acting as an HTTP proxy, both for inbound as well as outbound connections. But it's not trivial to set up especially for the outbound scenario.
In this model, hosts don’t need any direct internet connectivity or access to public DNS. All outbound traffic is forced through the proxy, giving you full control over where each host is allowed to connect.
It’s not painless: you must maintain a whitelist of allowed URLs and HTTP methods, distribute a trusted CA certificate, and ensure all software is configured to use the proxy.
I know port scanners are a thing but the act of using non-default ports seems unreasonably effective at preventing most security problems.
I've been doing it for a really long time already, and I'm still not sure if it has any benefit or if it's just an umbrella in a sideways storm.
My sshd only listens on the VPN interface
I don't think it's wrong, it's just not the same as eg using a yubikey.
App servers run docker, with images that run a single executable (no os, no shell), strict cpu and memory limits. Most of my apps only require very limited temporary storage so usually no need to mount anything. So good luck executing anything in there.
I used, way back in the day, to run Wordpress sites. Would get hacked monthly every possible way. Learned so much, including the fact that often your app is your threat. With Wordpress, every plugin is a vector. Also the ability to easily hop into an instance and rewrite running code (looking at you scripting languages incl JS) is terrible. This motivated my move to Go. The code I compiled is what will run. Period.
Is that the case, though? My understanding was, that even if I run a docker container as root and the container is 100% compromised, there still would need to be a vulnerability in docker for it to “attack” the host, or am I missing something?
The core of the problem here is that process isolation doesn't save you from whole classes of attack vectors or misconfigurations that open you up to nasty surprises. Docker is great, just don't think of it as a sandbox to run untrusted code.
1. https://gvisor.dev/
2. https://firecracker-microvm.github.io/
Of course if you have a kernel exploit you'd be able to break out (this is what gvisor mitigates to some extent), nothing seems to really protect against rowhammer/memory timing style attacks (but they don't seem to be commonly used). Beyond this, the main misconfigurations seem to be too wide volume bindings (e.g. something that allows access to the docker control socket from inside the container, or an obviously stupid mount like mounting your root inside the container).
Am I missing something?
Attacker now needs a Docker exploit and then a VM exploit before getting to the hypervisor (and, no, pwning the VM ain't the same as pwning the hypervisor).
Not only does it allow me to partition the host for workloads but I also get security boundaries as well. While it may be a slight performance hit the segmentation also makes more logical sense in the way I view the workloads. Finally, it's trivial to template and script, so it's very low maintenance and allows for me to kill an LXC and just reprovision it if I need to make any significant changes. And I never need to migrate any data in this model (or very rarely).
but will not stop serious malware
Docker is pretty much the same but supposedly more flimsy.
Both have non-obvious configuration weaknesses that can lead to escapes.
While I generally agree with the technical argument, I fail to see the threat model here. Is it that some external threat would have prior knowledge that an important target is in close proximity to a less hardened one? It doesn't seem viable to me for nation states to spend the expensive R&D to compromise hobbyist-adjacent services in a hope that they can discover more valuable data on the host hypervisor.
Once such expensive malware is deployed, there's a huge risk that all the R&D money is spent on potentially just reconnaissance.
Second, even if your Docker container is configured properly, the attacker gets to call themselves root and talk to the kernel. It's a security boundary, sure, but it's not as battle-tested as the isolation of not being root, or the isolation between VMs.
Thirdly, in the stock configuration processes inside a docker container can use loads of RAM (causing random things to get swapped to disk or OOM killed), can consume lots of CPU, and can fill your disk up. If you consider denial-of-service an attack, there you are.
Fourthly, there are a bunch of settings that disable the security boundary, and a lot of guides online will tell you to use them. Doing something in Docker that needs to access hot-plugged webcams? Hmm, it's not working unless I set --privileged - oops, there goes the security boundary. Trying to attach a debugger while developing and you set CAP_SYS_PTRACE? Bypasses the security boundary. Things like that.
I disagree with other commenters here that Docker is not a security boundary. It's a fine one, as long as you don't disable the boundary, which is as easy as running a container with `--privileged`. I wrote about secure alternatives for devcontainers here: https://cgamesplay.com/recipes/devcontainers/#docker-in-devc...
The only serious company that I'm aware of which doesn't understand that is Microsoft, and the reason I know that is because they've been embarrassed again and again by vulnerabilities that only exist because they run multitenant systems with only containers for isolation
It's all turtles, all the way down.
But for a typical case, VM's are the bare minimum to say you have a _secure_ isolation boundary because the attack surface is way smaller.
https://support.broadcom.com/web/ecx/support-content-notific...
https://nvd.nist.gov/vuln/detail/CVE-2019-5183
https://nvd.nist.gov/vuln/detail/CVE-2018-12130
https://nvd.nist.gov/vuln/detail/CVE-2018-2698
https://nvd.nist.gov/vuln/detail/CVE-2017-4936
In the end you need to configure it properly and pray there's no escape vulnerabilities. The same standard you applied to containers to say they're definitely never a security boundary. Seems like you're drawing some pretty arbitrary lines here.
Unfortunately, user namespaces are still not the default configuration with Docker (even though the core issues that made using them painful have long since been resolved).
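Turning it on is one daemon.json key, with the caveat that existing containers and images effectively start from a fresh storage root (sketch, not a migration guide):
# merged into /etc/docker/daemon.json alongside whatever is already there:
#   { "userns-remap": "default" }
sudo systemctl restart docker    # 'default' creates the dockremap user; container UID 0 maps to an unprivileged host range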
Are you holding millions of dollars in crypto/sensitive data? Better assume the machine and data is compromised and plan accordingly.
Is this your toy server for some low-value things where nothing bad can happen besides a bit of embarrassment even if you do get hit by a container escape zero-day? You're probably fine.
This attack is just a large-scale automated attack designed to mine cryptocurrency; it's unlikely any human ever actually logged into your server. So cleaning up the container is most likely fine.
Thanks for mentioning it - but now... how does one deal with this?
* but if you’re used to bind-mounting, they’ll be a hassle
Edit: This is by no means comprehensive, but I feel compelled to point it out specifically for some reason: remember not to mount .git writable, folks! Write access to .git is arbitrary code execution as whoever runs git.
You might still want to tighten things up. Just adding on the "rootless" part - running the container runtime as an unprivileged user on the host instead of root - you also want to run npm/node as unprivileged user inside the container. I still see many defaulting to running as root inside the container since that's the default of most images. OP touches on this.
For rootless podman, this will run as a user with your current uid and map ownership of mounts/volumes:
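The command that was presumably meant to follow is something like this; --userns=keep-id is the relevant flag (image and mount path are placeholders):
podman run --rm -it \
  --userns=keep-id \
  -v "$PWD/data:/data:Z" \
  docker.io/library/alpine:latest sh -c 'id && touch /data/hello && ls -ln /data'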
Also, if you've been compromised, you may have a rootkit that hides itself from the filesystem, so you can't be sure of a file's existence through a simple `ls` or `stat`.
Honestly, citation needed. Very rare unless you're literally giving the container access to write to /usr/bin or other binaries the host is running, to reconfigure your entire /etc, access to sockets like docker's, or some other insane level of over reach I doubt even the least educated docker user would do.
While of course they should be scoped properly, people act like some elusive 0-day container escape will get used on their minecraft server or personal blog that has otherwise sane mounts, non-admin capabilities, etc. You aren't that special.
And a shocking number of tutorials recommend bind-mounting docker.sock into the container without any warning (some even tell you to mount it "ro" -- which is even funnier since that does nothing). I have a HN comment from ~8 years ago complaining about this.
Not necessarily a vulnerability per se. A bridged adapter, for example, lets you do a lot - a few years ago there was a story about how a guy got root in a container and, because the container used a bridged adapter, was able to intercept traffic of account info updates on GCP.
Imagine naming this executable "ls" or "echo" and someone having "." in their path (which is why you shouldn't): as soon as you run "ls" in this directory, you've run compromised code.
There are obviously other ways to get that executable to be run on the host, this just a simple example.
OTOH if I had written such a script for linux I'd be looking to grab the contents of $(hist) $(env) $(cat /etc/{group,passwd})... then enumerate /usr/bin/ /usr/local/bin/ and the XDG_{CACHE,CONFIG} dirs - some plaintext credentials are usually here.
The $HOME/.{aws,docker,claude,ssh}
Basically the attacker just needs to know their way around your OS. The script enumerating these directories is the 0777 script they were able to write from inside the root access container.
Go and Rust tend to lend themselves to these more restrictive environments a bit better than other options.
Now, JS sites everywhere are getting hacked.
JS has turned into the very thing it strived to replace.
It's good to see developers haven't changed. ;)
"CVE-2025-66478 - Next.js/Puppeteer RCE)"
- Hetzner firewall (because ufw doesn't work well with docker) to only allow public access to 443.
- Self-hosted OpenVPN to access all private ports. I also self-host an additional Wireguard instance as a backup VPN.
- Cloudflare Access to protect `*.coolifydomain.com` by default. This would have helped protect the OP's Umami setup since only the OP can access the Umami dashboard. Bypass rules can be created in Cloudflare Access to allow access to other systems that need access using IP or domain.
- Cloudflare Access rules to only allow access to internal admin path such as /wp-admin/ through my VPN IP (or via email OTP to specified email ids).
- Traefik labels on docker-compose files in Coolify to add basic auth to internal services which can't be behind Cloudflare Access such as self-hosted Prefect. This would have also added a login screen before an attacker would see Umami's app.
- I host frontends only on Vercel or Cloudflare workers and host the backend API on the Coolify server. Both of these are confirmed to never have been affected, due to decoupling of application routing.
- Finally a bash cron script running on server every 5 minutes that monitors the resources and sends me an alert via Pushover when the usages are above the defined thresholds. We need monitoring and alert as well, security measures alone are not enough.
Even with all these steps, there will always be edge cases. That's the caveat of self-hosting but at the same time it's very liberating.
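Re the monitoring script in the last bullet: it doesn't need to be fancy. A rough sketch (threshold, token and user key are placeholders for your own):
#!/usr/bin/env bash
# cron'd every 5 minutes; alerts when the 1-minute load exceeds ~80% of the core count
load=$(cut -d' ' -f1 /proc/loadavg)
cores=$(nproc)
if awk -v l="$load" -v c="$cores" 'BEGIN { exit !(l > 0.8 * c) }'; then
  curl -s --form-string "token=PUSHOVER_APP_TOKEN" \
          --form-string "user=PUSHOVER_USER_KEY" \
          --form-string "message=High load on $(hostname): ${load}" \
          https://api.pushover.net/1/messages.json >/dev/null
fi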
From the root container, depending on volume mounts and capabilities granted to the container, they would enumerate the host directories and find the names of common scripts and then overwrite one such script. Or to be even sneakier, they can append their malicious code to an existing script in the host filesystem. Now each time you run your script, their code piggybacks.
Deleting and remaking the container will blow away all state associated with it. So there isn't a whole lot to worry about after you do that.
Meanwhile companies think the cloud providers are looking after it... Anyhow, it is a real problem.
AWS explicitly spells this out in their Shared Responsibility Model page [0]
It is not your cloud provider's responsibility to protect you if you run outdated and vulnerable software. It's not their responsibility to prevent crypto-miners from running on your instances. It's not even their responsibility to run a firewall, though the major players at least offer it in some form (ie, AWS Security Groups and ACL).
All of that is on the customer. The provider should guarantee the security of the cloud. The customer is responsible for security in the cloud.
[0] https://aws.amazon.com/compliance/shared-responsibility-mode...
Luckily for me, the software I had installed[1] was in an LXC container running under Incus, so the intrusion never escaped the application environment, and the container itself was configured with low CPU priority so I didn't even notice it until I tried to visit the page and it didn't load.
I looked around a bit and it seemed like an SSH key had been added under the root user, and there were some kind of remote management agents installed. This container was running Alpine so it was pretty easy to identify what processes didn't belong from a simple ps output of the remaining processes after shutting down the actual web application.
In the end, I just scrapped the container, but I did save it in case I ever feel like digging around (probably not). In the end I did learn some useful things:
- It's a good idea to assume your system will get taken over, so ensure it's isolated and suitably resource constrained (looking at you, pay-as-you-go cloud users).
- Make sure you have snapshots and backups, in my case I do daily ZFS snapshots in Incus which makes rolling back to before the intrusion a breeze.
- While ideally anything compromised should be scrapped, rolling back, locking it down and upgrading might be OK depending on the threat.
Regarding the miner itself:
- from what I could see in its configuration it hadn't actually been correctly configured, so it's possible they do some kind of benchmark and just leave the system silently compromised if it's not "worth it", they still have a way in to use it for other purposes.
- no attempt had been made at file system obfuscation, which is probably the only reason I really discovered it. There were literally folders in /root lying around with the word "monero" in them, this could have been easily hidden.
- if they hadn't installed a miner and just silently compromised the system, leaving whatever running on it alone (or even doing a better job at CPU priority), I probably never would have noticed this.
[1]: https://github.com/umami-software/umami
If you have decent network or process level monitoring, you're likely to find it, while you might not realize the vulnerable software itself or some stealthier, more dangerous malware that might exploit it.
That said, do you have an image of the box or a container image? I'm curious about it.
I was lucky in that my DB backups were working so all my persistence was backed up to S3. I think I could stand up another one in an hour.
Unfortunately I didn't keep an image no. I almost didn't have the foresight to investigate before yeeting the whole box into the sun!
And maybe updating container images with a mechanism similar to renovate with "minimumReleaseTime=7days" or something similar!?
Then you need to run things with as little privilege as possible. Sadly, Docker and containers in general are an anti-pattern here because they're about convenience first, security second. So the OP should have run the containers as read-only with tight resource limits and ideally IP restrictions on access if it's not a public service.
Another thing you can do is use Tailscale, or something like it, to keep things behind a zero-trust, encrypted access model. Not suitable for public services of course.
And a whole host of other things.
Easier said than done, I know.
Podman makes it easier to be more secure by default than Docker. OpenShift does too, but that's probably taking things too far for a simple self hosted app.
Re: the Internet.
Re: Peer-to-peer.
Re: Video streaming.
Re: AI.
https://www.statista.com/statistics/420400/spam-email-traffi...
So... crypto is illegal, so anyone using it is de facto a criminal by definition.
Also, for this particular instance, this is the best bug bounty program I've ever seen. Running a monero node that hits your daily budget cap is not that bad... It could be way worse, like stealing your DB credentials and selling them to the highest bidder... So, crypto actually made this better.
Docker will overwrite your rules when you publish ports.
Do not publish ports with docker. Do not run internal services on the publicly accessible system.
Examples like this, are why I don’t run a VPS.
I could definitely do it (I have, in the past), but I’m a mediocre admin, at best.
I prefer paying folks that know their shit to run my hosting.
My intuition is that since the SSH server reports what auth methods are available, once a bot sees that password auth is disabled, they will disconnect and not try again.
But I also know that bots can be dumb.
Search engines try to fight slop results with collateral damage mostly in small or even personal websites. Restaurants are happy to be on one platform only: Google Maps. Who needs an expensive website if you're on there and someone posts your menu as one of the pictures? (Ideally an old version so the prices seem cheaper and you can't be pinned down for false advertising.)

Open source communities use Github, sometimes Gitlab or Codeberg, instead of setting up a Forgejo (I host a ton of things myself but notice that the community effect is real and also moved away from self hosting a forge). The cherry on top is when projects use Discord chats as documentation and bug reporting "form". Privacy people use Signal en masse, where Matrix is still as niche as it was when I first heard of it. The binaries referred to as open source just because they're downloadable can be found on huggingface, even the big players use that exclusively afaik. Some smaller projects may be hosted on Github but I have yet to see a self-hosted one. Static websites go on (e.g. Github) Pages and back-ends are put on Firebase.

Instead of a NAS, individuals as well as small businesses use a storage service like Onedrive or Icloud. Some more advanced users may put their files on Backblaze B2. Those who newly dip their toes in self-hosting increasingly use a relay server to reach their own network, not because they need it but to avoid dealing with port forwarding or setting up a way to privately reach internal services. Security cameras are another good example of this: you used to install it, set a password, and forward the port so you can watch it outside the home. Nowadays people expect that "it just works" on their phone when they plug it in, no matter where they are. That this relies on Google/Amazon and that they can watch all the feeds is acceptable for the convenience.

And that's all not even mentioning the death of the web: people don't use websites anymore the way they were meant (as hyperlinked pages) but work with an LLM as their one-stop shop.
Not that the increased convenience, usability, and thus universal accessibility of e.g. storage and private chats is necessarily bad, but the trend doesn't seem to me to be going the way you seem to think it is.
I can't think of any example of something that became increasingly often self-hosted instead of less across the last 1, 5, or 10 years
If you see a glimmer of hope for the distributed internet, do share because I feel increasingly as the last person among my friends who hosts their own stuff
I've been on the receiving end of attacks that were reported to be the size of more than 10tbps I couldn't imagine how I would deal with that if I didn't have a 3rd party providing such protection - it would require millions $$ a year just in transit contracts.
There is an increasing amount of software that attempts to reverse this, but as someone from https://thingino.com/ said: opensource is riddled with developers that died to starvation (nobody donates to opensource projects).
Yikes. I would still recommend a server rebuild. That is _not_ a safe configuration in 2025, whatsoever. You are very likely to have a much better engineered persistent infection on that system.
The right thing to do is to roll out a new server (you have a declarative configuration right?), migrate pure data (or better, get it from the latest backup), remove the attacked machine off the internet to do a full audit. Both to learn about what compromises there are for the future and to inform users of the IoT platform if their data has been breached. In some countries, you are even required by law to report breaches. IANAL of course.
I noticed this, because Hetzner forwarded me an email from the German government agency for IT security (BSI) that must have scanned all German based IP addresses for this Next.js vulnerability. It was a table of IP addresses and associated CVEs.
Great service from their side, and a lot of thanks to the German tax payers wo fund my free vulnerability scans ;)
Assume that the malware has replaced system commands, possibly used a kernel vulnerability to lie to you to hide its presence, so do not do anything in the infected system directly ?
> Here’s the test. If /tmp/.XIN-unix/javae exists on my host, I’m fucked. If it doesn’t exist, then what I’m seeing is just Docker’s default behavior of showing container processes in the host’s ps output, but they’re actually isolated.
1. Files of running programs can be deleted while the program is running. If the program were trying to hide itself, it would have deleted /tmp/.XIN-unix/javae after it started. The nonexistence of the file is not a reliable source of information for confirming that the container was not escaped.
2. ps shows program-controlled command lines. Any program can change what gets displayed here, including the program name and arguments. If the program were trying to hide itself, it would change this to display `login -fp ubuntu` instead. This is not a reliable source of information for diagnosing problems.
It is good to verify the systemd units and crontab, and since this malware is so obvious, it probably isn't doing these two hiding methods, but information-stealing malware might not be detected by these methods alone.
Later, the article says "Write your own Dockerfiles" and gives one piece of useless advice (using USER root does not affect your container's security posture) and two pieces of good advice that don't have anything to do with writing your own Dockerfiles. "Write your own Dockerfiles" is not useful security advice.
I actually think it is. It makes you more intimate with the application and how it runs, and can mitigate one particular supply-chain security vector.
I agree that the reasoning is confused, but I think that particular piece of advice is still good.
And for basic websites, it's much better if they require no back-end at all.
Every service exposed increases risk and requires additional vigilance to maintain. Which means more effort.
[*] https://aws.amazon.com/inspector/pricing/
I'm glad you're up to writing more of your own posts, though! I'm right there with you that writing is difficult, and I've definitely got some posts on similar topics up on my site that are overly long and meandering and not quite good, but that's fine because eventually once I write enough they'll hopefully get better.
Here's hoping I'll read more from you soon!
I tried handwriting https://blog.jakesaunders.dev/schemaless-search-in-postgres/ but I thought it came off as rambling.
Maybe I'll have a go at redrafting this tomorrow in non LLM-ese.
There is nothing wrong with this article. Please continue to write as yourself; it's what people came for.
LLMs have their place. I find it useful to prompt an LLM to fix typos and outright errors and also prompt them to NOT alter the character or tone of the text; they are extraordinarily good at that.
> IT NEVER ESCAPED.
You haven't confirmed this (at least from the contents of the article). You did some reasonable spot checks and confirmed/corrected your understanding of the setup. I'd agree that it looks likely that it did not escape or gain persistence on your host but in no way have you actually verified this. If it were me I'd still wipe the host and set up everything from scratch again[0].
Also, your part about the container user not being root is still misinformed and/or misleading. The user inside the container, the container runtime user, and whether the container is privileged are three different things that are being talked about as if they were one.
Also, see my comment on firewall: https://news.ycombinator.com/item?id=46306974
[0]: Not necessarily drop-everything-you-do urgent, but next time you get some downtime, do it calmly. Recovering like this is a good exercise anyway, to make sure you can do it if you get a more critical situation in the future where you really need to. It will also be less time and work than actually confirming that the host is uncontaminated.
I'm going to sit down and rewrite the article and take a further look at the container tomorrow.
At any rate, this happening to you sucks! Hugs from a fellow HN user, I know that things like this can suck up a lot of time and energy. It's courageous to write about such an incident, and I think it's useful to a lot of other people too, kudos!
(And good to hear you're leaving the LLMs out of the writing next time <3)
But I am interested in the monero aspect here.
Should I treat this as some datapoint on monero’s security having held up well so far?
No. The reason attackers mine Monero and not some other cryptocurrency isn't the anonymity. Most cryptocurrencies aren't (meaningfully) mineable with CPUs; Monero (apparently) is. There may be others, but I suspect that they either deliver less money per CPU-second (due to not being valuable), or are sufficiently unknown and/or painful to set up that the attackers just go with the known default.
Trying to mine Bitcoin directly would be pointless, you'd get no money because you're competing with ASIC miners. Some coins were designed to be ASIC resistant, these are mostly mined on GPUs. Monero (and some other coins) were designed to also be GPU resistant (I think). You could see it as a sign that that property has held up (well enough), but nothing else.
If you used the server to mine Bitcoin, you would make approximately zero (0) profit, even if somebody else pays for the server.
But also yes, Monero has technically held up very well.
The attack did not and could not compromise or weaken Monero's privacy and anonymity features.
Even if you are an OWASP member who reads daily vulnerability reports, it's so easy to think you are unaffected.
At least that's what I think happened because I never found out exactly how it was compromised.
The miner was running as root and its file was even hidden when I ran ls! So I didn't understand what was happening; it was only after restarting my VPS with a rescue image and mounting the root filesystem that I found that the file I was seeing in the process list did indeed exist.
It has since been fixed: Lesson learned.
This should not enable the compromise of your session, i.e. either there was some additional vulnerability (and not logging out widened the window during which it could be exploited), or the attack was totally unrelated. Either way, you should find out the real cause.
You have to define a firewall policy and attach it to the VM.
I know we aren't supposed to rely on containers as a security boundary, but it sure is great hearing stories like this where the hack doesn't escape the container. The more obstacles the better I guess.
If the human involved can’t escalate, the hack can’t.
But podman also has great integration with systemd. With that you could use a socket-activated systemd unit and stick the socket inside the container, instead of giving the container any network at all. And even if you want networking in the container, the podman folks developed slirp4netns, which is user-space networking, and now something even better: passt/pasta.
Also rootless docker does not bypass ufw like rootful docker does.
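For the networking part, a minimal sketch (assuming a reasonably recent rootless podman with passt/pasta installed; the image and port numbers are just examples):
  # run a container with user-space pasta networking; the published port is
  # handled in user space, with no root-owned proxy or iptables rules on the host
  podman run --rm -d --network=pasta -p 8080:80 docker.io/library/nginx:alpine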
And isn’t it a design flaw if you can see all processes from inside a container? This could provide useful information for escaping it.
But if they do have a vulnerability and manage to escape the sandbox then they will be root on your host.
Running your processes as an unprivileged user inside your containers reduces the chance of escaping the sandbox; running the containers themselves as an unprivileged user (rootless podman or docker, for example) reduces the attack surface available to them if they do manage to escape the sandbox.
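A sketch of both layers at once (assuming rootless podman and an image that runs fine unprivileged; the UID and extra hardening flags are illustrative):
  # rootless podman (second layer) running the workload as a non-root user
  # inside the container (first layer), with some usual hardening on top
  podman run --rm --user 1000:1000 --read-only --cap-drop=ALL docker.io/library/alpine:3 id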
This became enough of a hassle that I stopped using them.
But yeah it is massively overspecced. Makes me feel cool load testing my go backend at 8000 requests per second though!
lol.
It could also prevent something from exfiltrating sensitive data.
Deliberate heat generation.
If it's cold and you're going to be running a heater anyway, and your heat is resistive, then running a cryptominer is just as efficient and returns a couple of dollars back to you. It effectively becomes "free" relative to running the heater.
If you use a heat pump, or you rely on burning something (natural gas, wood, whatever) to generate heat, then the math changes.
> RandomX utilizes a virtual machine that executes programs in a special instruction set that consists of integer math, floating point math and branches.
> These programs can be translated into the CPU's native machine code on the fly (example: program.asm).
> At the end, the outputs of the executed programs are consolidated into a 256-bit result using a cryptographic hashing function (Blake2b).
I doubt that anyone has managed to create an ASIC that does this more efficiently and cost-effectively than a regular CPU. So no, probably no one is mining Monero using an ASIC.
If they can enslave hundreds or even thousands of machines mining XMR for them, it's easy money, if you set aside the legality of it.
E.g. on my systemd-nspawn setup with --private-users=pick (which enables user namespacing), I created a container and gave it a bind mount. From inside the container, files in the bind mount created by the container namespace's UID 0 appear to be owned by UID 0, but from outside the container the same files look owned by UID 100000. Inverted, files owned by the "real" UID 0 on the host look owned by 0 on the host but owned by 65534 (i.e. "nobody") from the container's perspective. Breaking out of the container shouldn't inherently change the "actual" user of the process from 100000 to 0, any more than breaking out as a non-0 UID would in the first place, the same way breaking out of any of the other namespaces doesn't turn the "UID 0" user in the container into "UID 0" on the host.
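You can see that mapping directly from the host (a sketch; the placeholder PID and the 100000/65536 range are illustrative and depend on your setup):
  # uid_map of a user-namespaced process: container UID 0 maps to host UID 100000,
  # for a range of 65536 UIDs
  cat /proc/<container-init-pid>/uid_map
  #          0     100000      65536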
They also expose kernel interfaces that, if exploited, can lead to the same.
In the end, namespaces are just for partitioning resources, using them for sandboxes can work, but they aren't really sandboxes.
>Edit: A few people on HN have pointed out that this article sounds a little LLM generated. That’s because it’s largely a transcript of me panicking and talking to Claude. Sorry if it reads poorly, the incident really happened though!
For what it's worth, this is not an excuse, and I still don't appreciate being fed undisclosed slop. I'm not even reading it.
b) if you want to limit your hosting environment to only the language/program you expect to run, you should provision with unikernels, which enforce that
Except it seems to have done so in this case?
Next year is the 5th year of my current personal project. Ten to go.
Is there ever a reason someone should run a docker container as root?
They have done it to others.
An inbound firewall can only help protect services that aren't meant to be reachable on the public internet. This service was exposed to the internet intentionally so a firewall wouldn't have helped avoid the breach.
The lesson to me is that keeping up with security updates helps prevent publicly exposed services from getting hacked.
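One low-effort way to act on that lesson for the host OS at least (a sketch assuming a Debian/Ubuntu system; container images still need their own update strategy):
  # install and enable automatic security updates for host packages
  sudo apt install unattended-upgrades
  sudo dpkg-reconfigure -plow unattended-upgrades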
js scripts running on frameworks running inside containers
PS so I see the host ended up staying uncompromised
Unless run as root, this could return "file not found" because of missing permissions, and not just because the file doesn't actually exist, right?
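To rule out a permissions issue, it's worth checking as root and looking at the parent directory too (a quick sketch; the path is the one quoted from the article):
  sudo ls -la /tmp/.XIN-unix/
  sudo stat /tmp/.XIN-unix/javae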
> “I don’t use X” doesn’t mean your dependencies don’t use X
That is beyond obvious, and I don't understand how anyone would feel safe from reading about a CVE on a widely used technology when they run dozens of containers on their server. I have docker containers and as soon as I read the article I went and checked because I have no idea what technology most are built with.
> No more Umami. I’m salty. The CVE was disclosed, they patched it, but I’m not running Next.js-based analytics anymore.
Nonsensical reaction.
Nothing is immune. What analytics are you going to run? If you roll your own you'll probably leave a hole somewhere.
But kudos for the word play!
Backend access trivial with Tailscale, etc.
Cloudflare can certainly do more (e.g. protect against DoS and hide your personal IP if your server is at home).
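A minimal sketch of the Tailscale part (assuming Tailscale is already installed; the idea is to bind backend/admin interfaces to the tailnet address instead of the public one):
  sudo tailscale up --ssh   # join the tailnet and enable Tailscale SSH
  tailscale ip -4           # the 100.x.y.z address to bind internal services to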
But that alone would not solve the problem of an RCE over HTTP; that is why edge proxy providers like Cloudflare[0] and Fastly[1] proactively added protections to their WAF products.
Even Cloudflare had an outage while trying to protect its customers[2].
- [0] https://blog.cloudflare.com/waf-rules-react-vulnerability/
- [1] https://www.fastly.com/blog/fastlys-proactive-protection-cri...
- [2] https://blog.cloudflare.com/5-december-2025-outage/
Backend access trivial with Tailscale, etc.
Public IP never needs to be used. You can just leave it an internal IP if you really want.
DNS is no issue. External DNS can be handled by Cloudflare and their WAF. Their DNS service can obfuscate your public IP, or ideally you don't need to use it at all with a Cloudflare tunnel installed directly on the server. This is free.
I use them for self-hosting.
If you are using Cloudflare's DNS, they can hide your IP behind the DNS record, but the origin would still have to be locked down; some folks find ways to tighten that up too.
If you're using a bare metal server it can be broken up.
It's fair that it's a 3rd party's castle. At the same time, until you know how to run and secure a server, some of these services are not a bad idea.
Some people run pangolin or nginx proxy manager on a cheap vps if it suits their use case which will securely connect to the server.
We are lucky that many of these ideas have already been discovered and hardened by people before us.
Even when I had bare metal servers connected to the internet, I would put a firewall like pfsense or something in between.
If I run vulnerable software, it will still be vulnerable through a Cloudflare tunnel, right?
Genuinely interested, I'm always scared to expose things to the internet :-).
Free way: sign up for a Cloudflare account. Use the DNS on Cloudflare; they will put their public IP in front of your www.
Level 2 is installing the Cloudflare tunnel software on your server, so you never need to use the public IP.
Backend access securely? Install Tailscale or headscale.
This should cover most web hosting scenarios. If there's additional ports or services, tools like nginx proxy manager (web based) or others can help. Some people put them on a dedicated VPS as a jump machine.
This way the public IP becomes almost optional, and it can be locked down if needed. This is all before even running a firewall on it.
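Roughly, the tunnel setup looks like this (a sketch using the cloudflared CLI; the tunnel name, hostname, and local port are placeholders, and the exact syntax may differ between versions):
  cloudflared tunnel login                              # authenticate against your Cloudflare account
  cloudflared tunnel create homelab                     # create a named tunnel
  cloudflared tunnel route dns homelab www.example.com  # point a proxied DNS record at it
  cloudflared tunnel run --url http://localhost:8080 homelab   # outbound-only connection, no inbound port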
Keeping the IP secret seems like a misnomer.
It's often possible to lock down the public IP entirely so it accepts no connections except those initiated from the inside (like the Cloudflare tunnel, or whatever else reaches out).
Something like a Cloudflare+tunnel on one side, tailscale or something to get into it on the other.
Folks other than me have written decent tutorials that have been helpful.
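As a sketch of that lock-down on a ufw-based host (assuming everything inbound arrives via the tunnel or tailnet, and keeping in mind the point above that rootful docker's published ports can bypass these rules):
  sudo ufw default deny incoming
  sudo ufw default allow outgoing
  sudo ufw enable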