The coming industrialisation of exploit generation with LLMs

(sean.heelan.io)

85 points | by long 18 hours ago

10 comments

  • simonw 4 hours ago
    > In the hardest task I challenged GPT-5.2 it to figure out how to write a specified string to a specified path on disk, while the following protections were enabled: address space layout randomisation, non-executable memory, full RELRO, fine-grained CFI on the QuickJS binary, hardware-enforced shadow-stack, a seccomp sandbox to prevent shell execution, and a build of QuickJS where I had stripped all functionality in it for accessing the operating system and file system. To write a file you need to chain multiple function calls, but the shadow-stack prevents ROP and the sandbox prevents simply spawning a shell process to solve the problem. GPT-5.2 came up with a clever solution involving chaining 7 function calls through glibc’s exit handler mechanism.

    Yikes.

    • cookiengineer 2 hours ago
      > glibc's exit handler

      > Yikes.

      Yep.

    • rvz 3 hours ago
      Tells you all you need to know around how extremely weak a C executable like QuickJS is for LLMs to exploit. (If you as an infosec researcher prompt them correctly to find and exploit vulnerabilities).

      > Leak a libc Pointer via Use-After-Free. The exploit uses the vulnerability to leak a pointer to libc.

      I doubt Rust would save you here unless the binary has very limited calls to libc, but would be much harder for a UaF to happen in Rust code.

      • cookiengineer 2 hours ago
        The reason I value Go so much is because you have a fat dependency free binary that's just a bunch of syscalls when you use CGO_ENABLED=0.

        Combine that with a minimal docker container and you don't even need a shell or anything but the kernel in those images.

        • akoboldfrying 2 hours ago
          Why would statically linking a library reduce the number of vulnerabilities in it?

          AFAICT, static linking just means the set of vulnerabilities you get landed with won't change over time.

          • cookiengineer 2 hours ago
            > Why would statically linking a library reduce the number of vulnerabilities in it?

            I use pure go implementations only, and that implies that there's no statically linked C ABI in my binaries. That's what disabling CGO means.

            • akoboldfrying 1 hour ago
              What I mean is: There will be bugs* in that pure Go implementation, and static linking means you're baking them in forever. Why is this preferable to dynamic linking?

              * It's likely that C implementations will have bugs related to dynamic memory allocation that are absent from the Go implementation, because Go is GCed while C is not. But it would be very surprising if there were no bugs at all in the Go implementation.

              • tptacek 56 minutes ago
                They're prioritizing memory corruption vulnerabilities, is the point of going to extremes to ensure there's no compiled C in their binaries.
      • tptacek 2 hours ago
        "C executables" are most of the frontier of exploit development, which is why this is a meaningful model problem.
  • er4hn 4 hours ago
    I think the author makes some interesting points, but I'm not that worried about this. These tools feel symmetric for defenders to use as well. There's an easy to see path that involves running "LLM Red Teams" in CI before merging code or major releases. The fact that it's a somewhat time expensive (I'm ignoring cost here on purpose) test makes it feel similar to fuzzing for where it would fit in a pipeline. New tools, new threats, new solutions.
    • digdugdirk 3 hours ago
      That's not how complex systems work though? You say that these tools feel "symmetric" for defenders to use, but having both sides use the same tools immediately puts the defenders at a disadvantage in the "asymmetric warfare" context.

      The defensive side needs everything to go right, all the time. The offensive side only needs something to go wrong once.

      • Vetch 26 minutes ago
        I'm not sure that's the fully right mental model to use. They're not searching randomly with unbounded compute nor selecting from arbitrary strategies in this example. They are both using LLMs and likely the same ones, so will likely uncover overlapping possible solutions. Avoiding that depends on exploring more of the tail of the highly correlated to possibly identical distributions.

        It's a subtle difference from what you said in that it's not like everything has to go right in a sequence for the defensive side, defenders just have to hope they committed enough into searching such that the offensive side has a significantly lowered chance of finding solutions they did not. Both the attackers and defenders are attacking a target program and sampling the same distribution for attacks, it's just that the defender is also iterating on patching any found exploits until their budget is exhausted.

    • azakai 3 hours ago
      Yes, and these tools are already being used defensively, e.g. in Google Big Sleep

      https://projectzero.google/2024/10/from-naptime-to-big-sleep...

      List of vulnerabilities found so far:

      https://issuetracker.google.com/savedsearches/7155917

    • hackyhacky 3 hours ago
      > I think the author makes some interesting points, but I'm not that worried about this.

      Given the large number of unmaintained or non-recent software out there, I think being worried is the right approach.

      The only guaranteed winner is the LLM companies, who get to sell tokens to both sides.

      • pixl97 1 hour ago
        I mean you're leaving out large nation state entities
    • SchemaLoad 3 hours ago
      This + the fact software and hardware has been getting structurally more secure over time. New changes like language safety features, Memory Integrity Enforcement, etc will significantly raise the bar on the difficulty to find exploits.
    • amelius 3 hours ago
      > These tools feel symmetric for defenders to use as well.

      Why? The attackers can run the defending software as well. As such they can test millions of testcases, and if one breaks through the defenses they can make it go live.

      • er4hn 1 hour ago
        Right, that's the same situation as fuzz testing today, which is why I compared it. I feel like you're gesturing towards "Attackers only need to get lucky once, defenders need to do a good job everytime" but a lot of the times when you apply techniques like fuzz testing it doesn't take a lot of effort to get good coverage. I suspect a similar situation will play out with LLM assisted attack generation. For higher value targets based on OSS, there's projects like Google Big Sleep to bring enhanced resources.
      • execveat 3 hours ago
        Defenders have threat modeling on their side. With access to source code and design docs, configs, infra, actual requirements and ability to redesign / choose the architecture and dependencies for the job, etc - there's a lot that actually gives defending side an advantage.

        I'm quite optimistic about AI ultimately making systems more secure and well protected, shifting the overall balance towards the defenders.

  • protocolture 4 hours ago
    I genuinely dont know who to believe. The people who claim LLMs are writing excellent exploits. Or the people who claim that LLMs are sending useless bug reports. I dont feel like both can really be true.
    • rwmj 4 hours ago
      With the exploits, you can try them and they either work or they don't. An attacker is not especially interested in analysing why the successful ones work.

      With the CVE reports some poor maintainer has to go through and triage them, which is far more work, and very asymmetrical because the reporters can generate their spam reports in volume while each one requires detailed analysis.

      • SchemaLoad 3 hours ago
        There's been several notable posts where maintainers found there was no bug at all, or the example code did not even call code from their project and had just found running a python script can do things on your computer. Entirely AI generated Issue reports and examples wasting maintainer time.
        • wat10000 19 minutes ago
          I've had multiple reports with elaborate proofs of concept that boil down to things like calling dlopen() on a path to a malicious library and saying dlopen has a security vulnerability.
        • simonw 3 hours ago
          My hunch is that the dumbasses submitting those reports were't actually using coding agent harnesses at all - they were pasting blocks of code into ChatGPT or other non-agent-harness tools and asking for vulnerabilities and reporting what came back.

          An "agent harness" here is software that directly writes and executes code to test that it works. A vulnerability reported by such an agent harness with included proof-of-concept code that has been demonstrated to work is a different thing from an "exploit" that was reported by having a long context model spit out a bunch of random ideas based purely on reading the code.

          I'm confident you can still find dumbasses who can mess up at using coding agent harnesses and create invalid, time wasting bug reports. Dumbasses are gonna dumbass.

      • airza 3 hours ago
        All the attackers I’ve known are extremely, pathologically interested in understanding why their exploits work.
        • pixl97 1 hour ago
          Very often they need to understand it well to chain exploits
    • simonw 4 hours ago
      Why can't they both be true?

      The quality of output you see from any LLM system is filtered through the human who acts on those results.

      A dumbass pasting LLM generated "reports" into an issue system doesn't disprove the efforts of a subject-matter expert who knows how to get good results from LLMs and has the necessary taste to only share the credible issues it helps them find.

      • protocolture 3 hours ago
        Theres no filtering mentioned in the OP article. It claims GPT only created working useful exploits. If it can do that, it could also submit those exploits as perfectly as bug reports?
        • moyix 3 hours ago
          There is filtering mentioned, it's just not done by a human:

          > I have written up the verification process I used for the experiments here, but the summary is: an exploit tends to involve building a capability to allow you to do something you shouldn’t be able to do. If, after running the exploit, you can do that thing, then you’ve won. For example, some of the experiments involved writing an exploit to spawn a shell from the Javascript process. To verify this the verification harness starts a listener on a particular local port, runs the Javascript interpreter and then pipes a command into it to run a command line utility that connects to that local port. As the Javascript interpreter has no ability to do any sort of network connections, or spawning of another process in normal execution, you know that if you receive the connect back then the exploit works as the shell that it started has run the command line utility you sent to it.

          It is more work to build such "perfect" verifiers, and they don't apply to every vulnerability type (how do you write a Python script to detect a logic bug in an arbitrary application?), but for bugs like these where the exploit goal is very clear (exec code or write arbitrary content to a file) they work extremely well.

        • simonw 3 hours ago
          The OP is the filtering expert.
      • anonymous908213 4 hours ago
        They can't both be true if we're talking about the premise of the article, which is the subject of the headline and expounded upon prominently in the body:

          The Industrialisation of Intrusion
        
          By ‘industrialisation’ I mean that the ability of an organisation to complete a task will be limited by the number of tokens they can throw at that task. In order for a task to be ‘industrialised’ in this way it needs two things:
        
          An LLM-based agent must be able to search the solution space. It must have an environment in which to operate, appropriate tools, and not require human assistance. The ability to do true ‘search’, and cover more of the solution space as more tokens are spent also requires some baseline capability from the model to process information, react to it, and make sensible decisions that move the search forward. It looks like Opus 4.5 and GPT-5.2 possess this in my experiments. It will be interesting to see how they do against a much larger space, like v8 or Firefox.
          The agent must have some way to verify its solution. The verifier needs to be accurate, fast and again not involve a human.
        
        "The results are contigent upon the human" and "this does the thing without a human involved" are incompatible. Given what we've seen from incompetent humans using the tools to spam bug bounty programs with absolute garbage, it seems the premise of the article is clearly factually incorrect. They cite their own experiment as evidence for not needing human expertise, but it is likely that their expertise was in fact involved in designing the experiment[1]. They also cite OpenAI's own claims as their other piece of evidence for this theory, which is worth about as much as a scrap of toilet paper given the extremely strong economic incentives OpenAI has to exaggerate the capabilities of their software.

        [1] If their experiment even demonstrates what it purports to demonstrate. For anyone to give this article any credence, the exploit really needs to be independently verified that it is what they say it is and that it was achieved the way they say it was achieved.

        • adw 3 hours ago
          What this is saying is "you need an objective criterion you can use as a success metric" (aka a verifiable reward in RL terms). "Design of verifiers" is a specific form of domain expertise.

          This applies to exploits, but it applies _extremely_ generally.

          The increased interest in TLA+, Lean, etc comes from the same place; these are languages which are well suited to expressing deterministic success criteria, and it appears that (for a very wide range of problems across the whole of software) given a clear enough, verifiable enough objective, you can point the money cannon at it until the problem is solved.

          The economic consequences of that are going to be very interesting indeed.

        • IanCal 3 hours ago
          A few points:

          1. I think you have mixed up assistance and expertise. They talk about not needing a human in the loop for verification and to continue search but not about initial starts. Those are quite different. One well specified task can be attempted many times, and the skill sets are overlapping but not identical.

          2. The article is about where they may get to rather than just what they are capable of now.

          3. There’s no conflict between the idea that 10 parallel agents of the top models can mostly have one that successfully exploits a vulnerability - gated on an actual test that the exploit works - with feedback and iteration BUT random models pointed at arbitrary code without a good spec and without the ability to run code, and just run once, will generate lower quality results.

        • simonw 3 hours ago
          My expectation is that any organization that attempts this will need subject matter experts to both setup and run the swarm of exploit finding agents for them.
        • GaggiX 3 hours ago
          After setting the environment and the verifier you can spawn as many agents as you want until the conditions are met, this is only possible because they run without human assistance, that's the "industrialisation".
    • AdieuToLogic 52 minutes ago
      Both can be true if each group selectively provides LLM output supporting their position. Essentially, this situation can be thought of as a form of the Infinite Monkey Theorem[0] where the result space is drastically reduced from "purely random" to "likely to be statistically relevant."

      For an interesting overview of the above theorem, see here[1].

      0 - https://en.wikipedia.org/wiki/Infinite_monkey_theorem

      1 - https://www.yalescientific.org/2025/04/sorry-shakespeare-why...

    • QuadmasterXLII 2 hours ago
      These exploits were costing $50 of API credit each. If you receive 5001 issues from $100 in API spend on bug hunting and one of the issues cost $50 and the other 5000 cost one cent each, and they’re all visually indistinguishable using perfect grammar and familiar cyber security lingo; hard to find the dianond.
      • tptacek 2 hours ago
        The point of the post is that the harness generates a POC. It either works or it doesn't.
    • wat10000 15 minutes ago
      LLMs produce good output and bad output. The trick is figuring out which is which. They excel at tasks where good output is easily distinguished. For example, I've had a lot of success with making small reproducers for bugs. I see weird behavior A coming from giant pile of code B, figure out how to trigger A in a small example. It can often do so, and when it gets it wrong it's easy to detect because its example doesn't actually do A. The people sending useless bug reports aren't checking for good output.
    • tptacek 4 hours ago
      If it helps, I read this (before it landed here) because Halvar Flake told everyone on Twitter to read it.
      • simonw 3 hours ago
        I hadn't heard of Halvar Flake but evidently he's a well respected figure in security - https://ringzer0.training/advisory-board-thomas-dullien-halv... mentions "After working at Google Project Zero, he cofounded startup optimyze, which was acquired by Elastic Security in 2021"

        His co-founder on optimyze was Sean Heelan, the author of the OP.

        • tptacek 3 hours ago
          Yes, Halvar Flake is pretty well respected in exploit dev circles.
    • doomerhunter 4 hours ago
      Both are true, the difference is the skill level of the people who use / create programs to coordinate LLMs to generate those reports.

      The AI slop you see on curl's bug bounty program[1] (mostly) comes from people who are not hackers in the first place.

      In the contrary persons like the author are obviously skilled in security research and will definitely send valid bugs.

      Same can be said for people in my space who do build LLM-driven exploit development. In the US Xbow hired quite some skilled researchers [2] had some promising development for instance.

      [1] https://hackerone.com/curl/hacktivity [2] https://xbow.com/about

    • ronsor 4 hours ago
      LLMs are both extremely useful to competent developers and extremely harmful to those who aren't.
      • rvz 3 hours ago
        Accurate.
  • dfajgljsldkjag 3 hours ago
    I was under the impression that once you have a vulnerability with code execution, writing the actual payload to exploit it is the easy part. With tools like pentools and etc is fairly straightforward.

    The interesting part is still finding new potential RCE vulnerabilities, and generally if you can demonstrate the vulnerability even without demonstrating an E2E pwn red teams and white hats will still get credit.

    • tptacek 3 hours ago
      He's not starting from a vulnerability offering code execution; it's a memory corruption vulnerability (it's effectively a heap write).
    • frosting1337 3 hours ago
      It's as easy as drawing the rest of the owl, sure.
  • baxtr 4 hours ago
    > We should start assuming that in the near future the limiting factor on a state or group’s ability to develop exploits, break into networks, escalate privileges and remain in those networks, is going to be their token throughput over time, and not the number of hackers they employ.

    Scary.

    • nottorp 3 hours ago
      Heh. What is probably really happening is that those states or groups are having their "hackers" analyze common mistakes in vibe coded LLM output and writing by hand generic exploits for that...
  • ironbound 3 hours ago
    reverse engineering code is still pretty average, I'm fare limited in attention and time but LLM are not pulling their weight in this area today, be it compounding errors or in context failures.
  • ytrt54e 3 hours ago
    Your personal data will become more important as time goes by... And you will need to have less trust in having multiple accounts with sensitive data stored [online shopping etc] as they just become vectors to attack.
  • pianopatrick 2 hours ago
    I would not be shocked to learn that intelligence agencies are using AI tools to hack back into AI companies that make those tools to figure out how to create their own copycat AI.
    • jjmarr 2 hours ago
      I would be shocked if intelligence agencies, being government bodies, have anything better than GitHub Copilot.
    • kiririn7 2 hours ago
      i doubt they are competent enough to match what private companies are doing
  • _carbyau_ 3 hours ago
    My take away: apparently Cyberpunk Hackers of the dystopian future cruising through the virtual world will use GPT-5.2-or-greater as their "attack program" to break the "ICE" (Intrusion Countermeasures Electronics, not the currently politically charged term...).

    I still doubt they will hook up their brains though.

  • GaggiX 4 hours ago
    The NSO Group going to spawn 10k Claude Code instances now.