AI slop security reports submitted to curl

(gist.github.com)

79 points | by nobody9999 8 hours ago

8 comments

  • Rygian 6 hours ago
    Take, for example, the one listed as https://hackerone.com/reports/2871792.

    With the advantage of hindsight, the issue should have been dismissed outright, and the account reported as invalid, right at the third message (November 30, 2024, 8:58pm UTC); allowing the "dialog" to continue for six more messages was a mistake and a waste of effort.

    I would even encourage curl maintainers to reject, upfront, any report that fails to mention a line number in the source code or a specific piece of input that triggers the issue.
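
    A sketch of what such a gate could look like (purely illustrative; the heuristic and function name are hypothetical, nothing the curl project actually runs):

      #include <ctype.h>
      #include <stdbool.h>
      #include <stdio.h>
      #include <string.h>

      /* Hypothetical pre-triage check: accept a report only if it cites a
         source line like "url.c:412" or carries an explicit reproducer. */
      static bool report_is_actionable(const char *text)
      {
        const char *p = text;
        while((p = strstr(p, ".c:")) != NULL) {
          if(isdigit((unsigned char)p[3]))
            return true;          /* found a file.c:NNN reference */
          p += 3;
        }
        return strstr(text, "PoC:") != NULL ||
               strstr(text, "Reproduce:") != NULL;
      }

      int main(void)
      {
        const char *slop = "strcpy() is dangerous, so curl may overflow";
        const char *real = "Heap overflow in url.c:412, PoC: attached";
        printf("%d %d\n", report_is_actionable(slop),
               report_is_actionable(real));
        return 0;
      }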

    It's unfortunate that AI is being used to worsen the signal/noise ratio [1] of topics as sensitive as security.

    [1] http://www.meatballwiki.org/wiki/SignalToNoiseRatio

    • zeta0134 6 hours ago
      It's pretty clear that in like half of these the "researcher" is just copy-pasting the follow-up questions back into whatever LLM they used originally. What a colossal waste of everyone's time.

      I think the only saving grace right this second is that the hallucinations are obvious and the overly eager phrasing is just awkward enough to recognize. But if you're seeing it for the first time, it can be surprisingly convincing.

    • bluGill 2 hours ago
      As time goes on, they are getting faster at closing such reports. However, they started off with an assumption of honesty and only gave it up after being burned repeatedly.

      This is bad for the honest person who can't describe a real issue well, though.

    • raverbashing 6 hours ago
      Honestly? It might be wiser to block submissions from certain parts of the world that are known for spamming things like this.

      Or have an infosec captcha, but that's harder to come by

  • bgwalter 3 hours ago
    49 points, 4 hours, but only on page three.

    This is a highly relevant log of the destructive nature of "AI", which consumes human time and has no clue what is going on in the code base. "AI" is like a five-year-old who has picked up some words and wants to sound smart.

    I suppose the era of bug bounties is over.

  • AlSweigart 1 hour ago
    The primary use case of LLMs is producing undetectable spam.
  • heybrendan 5 hours ago
    I worked my way through about half the examples. What appalling behavior by several of the "submitters".

    This comment [1] by icing (curl staff) sums up the risk:

    > "This report and your other one seem like an attack on our resources to handle security issues."

    Maintainers of widely deployed, popular software, including those who have openly committed to engineering excellence [2] and responsiveness (like the curl project, AFAICT), cannot afford /not/ to treat each submission with some level of preliminary attention and seriousness.

    Submitting low quality, bogus reports generated by a hallucinating LLM, and then doubling down by being deliberately opaque and obtuse during the investigation and discussion, is disgraceful.

    [1] https://hackerone.com/reports/3125832#activity-34389935

    [2] https://curl.se/docs/bugs.html (Heading: "Who fixes the problems")

  • bfrog 3 hours ago
    AI slop is coming in all forms. I see people using AI for code reviews on GitHub now, and it's a net negative, leading people to do the wrong things.
  • anal_reactor 2 hours ago
    The consequence of having an issue report system is that people submit random shit just to report something. The fact that they use AI to autogenerate reports allows them to do that at an unprecedented scale. The obvious solution to this problem is to use AI to filter out reports that aren't valuable. Have AI talk to AI.

    This might sound silly, but it's not. It's just an advanced version of automatic vulnerability scans.

  • mslansn 6 hours ago
    [flagged]
  • titaniumrain 6 hours ago
    [flagged]
    • Propelloni 6 hours ago
      The hostility is not aimed at AI, which is just a dumb tool, but at sloppy and stupid reporters. AI is enabling those stupid reporters. The linked report basically says, in typical AI blathering (paraphrasing): "you use strcpy(), strcpy() is known to cause buffer overflows if not bounds checked, thus you have buffer overflow bugs"

      Obviously, the logic doesn't hold. Anyway, when asked to provide a specific line in a specific module where strcpy() is not bounds-checked, the response was "probably in curl.c near a call to strcpy()." That moved from sloppy to stupid pretty quickly, didn't it?
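
      To make the fallacy concrete, here's a minimal sketch (my own illustration, not curl code): strcpy() is perfectly well-defined when the destination is already sized to fit the source.

        #include <stdio.h>
        #include <stdlib.h>
        #include <string.h>

        int main(void)
        {
          const char *src = "input of arbitrary length";
          size_t need = strlen(src) + 1;  /* +1 for the terminating NUL */

          char *dst = malloc(need);       /* destination sized to fit */
          if(!dst)
            return 1;

          strcpy(dst, src);  /* cannot overflow: dst holds exactly need bytes */
          puts(dst);
          free(dst);
          return 0;
        }

      "Uses strcpy()" only becomes a finding when someone shows a call site where that sizing guarantee is missing, which is exactly the line number the reporter couldn't produce.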

      And there are dozens if not hundreds of these kinds of reports. Hostility towards the reporters (whether AI or not) is justified.

    • scott_w 6 hours ago
      I clicked on this report at random: https://hackerone.com/reports/2905552

      This isn't fuzzing; this is a totally garbage report, and I'd have chewed out any security "researcher" who reported it to me.

      Given that it was the first link I clicked, I feel safe saying the rest are probably just as bad.

    • mpalmer 4 hours ago
      "Fuzzing with AI", hilarious. Why do you think inexperienced clout chasers waited for LLMs before they started "fuzzing"? Why not use existing (real) fuzzing tools?

      Because they have no idea what they're doing and for some reason they think they can use LLMs to cosplay as security researchers.
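
      For contrast, real fuzzing looks roughly like this minimal libFuzzer-style harness sketch (LLVMFuzzerTestOneInput is libFuzzer's actual entry point; parse_header() is a hypothetical stand-in target, not a curl function, and curl maintains its own harnesses in the separate curl-fuzzer project):

        #include <stddef.h>
        #include <stdint.h>

        /* Hypothetical stand-in for the code under test. */
        extern int parse_header(const uint8_t *buf, size_t len);

        /* libFuzzer calls this with generated inputs; crashes and
           sanitizer reports are the findings. Build with:
           clang -fsanitize=fuzzer,address harness.c target.c */
        int LLVMFuzzerTestOneInput(const uint8_t *data, size_t size)
        {
          parse_header(data, size);
          return 0;
        }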

      • UncleMeat 3 hours ago
        Heck, people have been playing with neural networks for input corpus generation for like what... 15 years?
    • 112233 6 hours ago
      Could you maybe focus more on fuzzing the software, and less on fuzzing the maintainers? You seem to applaud people submitting unverified reports en masse, for material and reputational gain, harming maintainers' ability to react to valid reports. Why?
    • weird_trousers 6 hours ago
      I don’t think you took a look at the different reports :)

      All of them are wrong, full of hallucinations, and reviewers are clearly wasting their time on that kind of thing.

      AI is here to accelerate people's jobs, not to waste their time and sanity.

      Please read the news before responding. An AI can do that; why don't you…?

    • e2le 5 hours ago
      >By all means, use AI to learn things and to figure out potential problems, but when you just blindly assume that a silly tool is automatically right just because it sounds plausible, then you're doing us all (the curl project, the world, the open source community) a huge disservice. You should have studied the claim and verified it before you reported it.

      https://hackerone.com/reports/2887487

      Given the limited resources available to many open source projects and the volume of fraudulent reports, these reports function much like a DDoS attack.

    • Rygian 6 hours ago
      Check out the actual reports. This is not "fuzzing with AI"; it's semantically invalid reports (or outright fluff masquerading as security mansplaining) that show no understanding of what is being reported.
    • whatevaa 5 hours ago
      The hostility is towards people using AI to generate garbage report spam at scale, each report requiring time to investigate individually.
    • npteljes 6 hours ago
      They are not "just tools": these are false reports, wasting the maintainers' precious time. The hostility comes from the asymmetry: AI generates info very fast, while humans verify it much more slowly.

      Take a look at this example: https://hackerone.com/reports/2823554 . The fool reporting this can't even justify his AI-generated report, not even with the further use of AI. There is no AI revolution here, just spam and grift.

    • fastball 6 hours ago
      I don't think fuzzing means what you think it means.