I'm sorry that the security industry is a cesspool. We all know it's a cesspool. We can't pump it out.
However, please do not let the absolute state of things cause you to give up on security. Don't stop patching, don't go back to writing your passwords on post-it notes, don't just expose everything to the open internet and don't let an LLM perform your only code security review. Keep doing the boring, basic things, and you'll have the best chance at keeping the attackers out.
Ultimately security is a chore, like showering or visiting the dentist. And there are always going to be people telling you that you absolutely must apply deodorant to your groin or that you can avoid the dentist by rinsing with apple cider vinegar. Ignore them, and just keep doing the basics as well as you can.
Reminds me of working on Chalk (JavaScript terminal coloring library).
My first foray into "beg bounties" was with Chalk. We received a report that inputs that contained malicious terminal escape sequences would be emitted to the terminal if passed through chalk.
But... yeah, of course they would. It's just a glorified string formatter, we don't care what you pass to us. They would have been emitted anyway if you just used console.log. There was literally nothing actionable for us to do. It wasn't our responsibility.
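The non-issue is easy to demonstrate without chalk at all. A minimal sketch in plain Node (the `strip` helper is an illustrative regex, not a chalk API; real terminals accept more escape forms than it covers):

```javascript
// An ANSI escape sequence embedded in user input survives any plain
// string formatting - chalk or not - and only the terminal interprets it.
const payload = "hello\x1b[2Jworld"; // \x1b[2J is "clear screen"

// Passing it through a formatter changes nothing about the sequence:
const formatted = `*** ${payload} ***`;
console.log(formatted.includes("\x1b[2J")); // true - still present

// Sanitizing is the caller's job if it matters in their context, e.g.
// stripping CSI sequences (rough illustrative regex, an assumption):
const strip = (s) => s.replace(/\x1b\[[0-9;]*[A-Za-z]/g, "");
console.log(strip(payload)); // "helloworld"
```

Anything that puts the string on a terminal - console.log, a template literal, a logging library - carries the sequence through untouched; filtering belongs at the trust boundary where untrusted input enters, not inside a string formatter.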
It wasn't just left there. The "researcher" persisted, threatening to file a CVE (which wreaks havoc on an OSS dependency like Chalk, with millions of downloads a day), kept swinging their proverbial member around about how they worked for such-and-such esteemed research company (they didn't), and ultimately demanded we compensate them for their time, citing it as our responsibility and obligation.
I would have ignored it, but the threat of CVE (and the fact we'd have literally zero recourse against it) kept me on the hook.
Ever since then it's really watered down my view of the CVE/CVSS systems and has turned me a bit bitter toward "security researchers" in general, which isn't where I'd like to be.
With the rise of automatic ReDoS detection, the problem has only compounded over the last 4-5 years: things that might technically fall under the "vulnerability" umbrella, but only if the code is intentionally and egregiously misused - in a manner that would be dangerous with any library, let alone ours - still earn a plea for monetary compensation.
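For context, the shape these detectors flag is nested quantifiers. A sketch with a deliberately toy regex (not one taken from chalk):

```javascript
// /^(a+)+$/ is the classic ReDoS shape: on a NON-matching input the
// engine backtracks through every way of splitting the "a"s between
// the inner and outer groups, so matching time grows exponentially.
const vulnerable = /^(a+)+$/;

console.log(vulnerable.test("aaaa")); // true, instant
console.log(vulnerable.test("aaa!")); // false, still fast at this length
// ...but at ~30 non-matching characters it takes seconds, and at ~40,
// effectively forever. An attacker must control the exact string fed
// to this exact regex - the part the automated reports rarely establish.

// An equivalent linear-time pattern for this toy case:
const linear = /^a+$/;
console.log(linear.test("aaaa")); // true
```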
It's silly, saddening, and only discourages people from working on OSS at a scale larger than a hobby, to be honest.
(Thank you for coming to my TED talk)
EDIT: I should mention that I have received legitimate reports from well-meaning researchers (no quotes) that are detailed and professional in nature, and I'm always proud to service them and see them through. They're increasingly rare, though. The downside is that doing OSS for free means I still cannot compensate them for their time, even though I would love to.
Wow. On the other side of the table, for system owners it dilutes the meaning of CVEs and causes alarm fatigue. I have a friend who says his npm logs are full of scary CVEs/security vulns that are not relevant. The cost of upgrading is high - often breaking changes, thanks to average JS package quality - and the only reason to do it is to suppress a spurious warning.
The thing security people refuse to accept, though, is that security isn't a paramount business concern, even if management understands the real risks. Stolen customer data is often followed by an apology and a password reset request. Nobody cares, especially in a world where personal and private data no longer exists. Restore from backup, and move on.
Should it be that way? No. But it is. It’s not a security or awareness problem, it’s a business/culture problem. You can’t fix a broken engine with a better taillight.
ReDoS reports often score 7 or above, since the code can be deemed "network accessible" (anything can be). CVSS is broken.
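For a sense of how that plays out, the stock vector such an advisory tends to carry looks like this (an illustrative CVSS 3.1 vector, not quoted from any particular CVE):

```
CVSS:3.1/AV:N/AC:L/PR:N/UI:N/S:U/C:N/I:N/A:H  ->  7.5 (High)
```

No confidentiality or integrity impact at all - the 7.5 comes almost entirely from AV:N, which gets scored on whether network-supplied input could conceivably reach the code, not on whether any real deployment feeds that code attacker-controlled strings.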
And yes, people who get scared file reports asking us to fix it; oftentimes we can't (because it's not a real vuln), and 99% of the time they're not even remotely affected. The requests are generally not very nice, either.
Most of the time, a request to fix a CVE is the first we've ever heard of it being filed. "Security research agencies" oftentimes have their own databases, and rarely do these "researchers" follow any responsible disclosure process - they often don't even tell us about their findings, or that they've filed. Many don't even cite a reporter, an email, or any other contact information.
Especially when the CVE is nonsense or improperly scored, there is almost no way to get it taken down or its severity reduced. Apparently, in the eyes of these agencies, the people writing the code cannot adequately gauge the severity of vulnerabilities. We're nearing the point of zero collaboration with maintainers before things are filed.
I can't remember the last time I even had the opportunity to release a proper fix before a CVE went out. I can count on one hand the number of times I was asked before a CVE was filed. I don't think a single CVE of any code I maintain has had a score that properly reflected the severity of the actual vulnerability (assuming there even was one).
Remember, most of us do this in free time, for free. So all of the extra bureaucracy and synthesized urgency can be extremely detrimental.
Your front door is accessible to the public. Without a fence around your yard and a guard at the gate, bad actors could get access to your front door and exploit any number of door vulnerabilities, including a chainsaw or battering ram.
This is exactly what happens whenever you publish a website online, with so-called "ethical hackers" finding minor vulnerabilities and hoping to get paid for running a script. See also https://www.troyhunt.com/beg-bounties/
My favourite one, that I've seen 4 or 5 iterations of, is people claiming a privilege escalation or authentication bypass bug, where their reproduction is a convoluted way of doing:
1. Log in with admin credentials
2. Copy the session cookie
3. Run a curl command using the above session cookie
4. You have bypassed authentication!!
Please send me $12,000 dollars.
Don't trust this person, it’s a scam. A real security consultant would have mentioned defense in depth and offered you a discount on their WAF offering.
Beg bounty hunters have damaged the field so much.
But even in 2025, I have come across companies that do not care at all about rewarding good security researchers who report issues. Hell, I have even been ghosted after reporting a bug that they promptly fixed - they did not even write back to say "thank you". Has anyone else encountered this behavior from tech companies? (Not talking about a non-profit, hospital, or gov agency here.)
I'm a security researcher - no quotes. I write detailed, highly technical write-ups for all of the issues I discover, including reproduction steps, root cause analysis and suggestions for fixes. I follow all responsible disclosure guidelines + any guidelines that the company or entity might have for security disclosures.
It's disheartening when you put this amount of effort into it, it gets silently patched, and you get no recognition or even a "thank you". But I don't let it bother me too much. I'm doing this research mostly for myself and because I find it interesting. The fact that I'm disclosing the issues is me being a good citizen, but I shouldn't expect a pat on the head for every issue I disclose.
Being ignored always sucks. But it's still infinitely better than doing all of the above and being threatened with a lawsuit (which has, unfortunately, happened as well).
Without feedback you don't know that the bug was fixed in reaction to your report. It might have been - but unless they explicitly invited bug reports in return for something, not acknowledging yours is at worst bad manners. Debatably poor self-interest on their part as well.
As you note, the field has been damaged by bounty hunters. When the SNR drops low enough there's no point even reading the damn things and high-quality reports will be discarded along with the dross.
> Without feedback you don't know that the bug was fixed in reaction to your bug report.
In this particular case, they did say they would consider a reward for a severe bug (it was severe - a DNS hijack), and then, once I shared the details: the next day I checked, they had fixed it, and they never wrote back.
I did not know bug bounties had such a bad rep. Is this for reporting bugs outside of the bug bounty platforms?
Nah, in this case they simply had no official bug bounty program/platform.
I would guess that a big factor is mindset and tech culture across different companies, or having a bad department head who doesn't get the point of bug bounties / promoting responsible disclosure.
Yeah, it's pretty common. Some years back I reported a stored XSS vulnerability in an online marketplace with hundreds of thousands of users - a proper writeup with HTTP requests, proof of concept, impact, etc. No mention of bounties/rewards or anything like that - just a vulnerability report.
I made multiple attempts to report it to their security team/mailbox over several months and never got any response or acknowledgement back from them. Then a few months later they quietly fixed the issue.
Yup. I reported what I considered to be a serious security flaw in a _security company's_ product. They wrote back (since I was a paying customer), telling me that it wasn't a security issue. A few weeks later they had patched the totally-not-a-problem thing.
> Beg bounty hunters have damaged the field so much.
Sure, the grifters themselves are guilty too. But hear me out: maybe the corporate geniuses who decided to crowdsource security using non-contractual if-we-feel-like-it bounty payments could have contributed to the grifting culture.
> Hell, I have even been ghosted after reporting the bug which they promptly fixed and did not even write back to say a "thank you".
Just curious, why perform labor without a contract? If it’s just for personal interest, I wouldn’t even bother to report unless the company has something to offer first.
For sure. I've worked at orgs where we disabled package vulnerability scanners because they created a constant stream of upgrade busywork. So many "vulnerabilities" are things like "JavaScript prototype pollution in a package that does something in your build toolchain". So much noise and so little signal; I don't think the incentives of these scanning and vuln-tracking companies are well aligned.
I'm too tired of the current scareware industry to write more.
The sad part is that real security issues can get lost in the noise...
Nowadays I tend to rely more on tech news to hear when there's an actual serious vuln I need to address.
(Note I'm not advocating everyone do this. Do your own risk assessment.)
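A middle ground between running the scanner raw and disabling it is filtering its output down to what you will actually act on. A sketch against npm's `--json` audit report (field names follow the npm 7+ report shape, and the package names here are made up - verify the shape against your npm version):

```javascript
// Keep only high/critical findings from `npm audit --json`, instead of
// failing the build on every prototype-pollution footnote.
function seriousFindings(auditJsonText) {
  const report = JSON.parse(auditJsonText);
  return Object.values(report.vulnerabilities ?? {})
    .filter((v) => v.severity === "critical" || v.severity === "high")
    .map((v) => `${v.name}: ${v.severity}`);
}

// Synthetic report for illustration (hypothetical package names):
const sample = JSON.stringify({
  vulnerabilities: {
    "hypothetical-leftpad": { name: "hypothetical-leftpad", severity: "low" },
    "hypothetical-parser": { name: "hypothetical-parser", severity: "critical" },
  },
});
console.log(seriousFindings(sample)); // [ "hypothetical-parser: critical" ]
```

npm also ships a coarse built-in version of this: `npm audit --audit-level=high` only fails on findings at or above that severity.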
Note that tech news is biased towards flashy or relatable security issues. Nobody is going to n-day your phone (though you should, of course, keep it up to date). It's your Drupal you should worry about.
But those tend to be against journalists and activists.
What threat model you operate under is a nontrivial problem.
The issue is that understanding what is actually exploitable, and what is actually part of your threat model, is difficult. It's a pretty high bar - one not met by most people who typically have decision-making power over a product or service. It's a huge problem, and not an easy one to fix, so it's obvious why the industry has taken the route of deciding that certifications and scans = security, that vulnerabilities only exist if they have a CVE assigned, and that anything with a CVE assigned must be an actual problem.
I had someone argue that WordPress had terrible security.
The only CVEs it had for two years applied only if you allowed random users to sign up.
There is a firewall plugin, and basically the only thing it does is check whether you have outdated plugins and log all the times a bot tried to log in by POSTing user:admin / password:admin to /wp-login.php. It's rare, but a few of them tried my domain name as the username instead. It sends me e-mails about newly found vulnerabilities, and it's always some plugin. Sure, some of those are installed on thousands or millions of websites, but it's never anything in the WordPress core itself.
If you hide /wp-login.php and avoid dependencies, it's practically impenetrable - it has to be the most battle-tested CMS out in the wild - and yet people swear it's a Swiss cheese of security holes.
No, they really don't. Exploits are rare, relative to the amount of code being run. And most "threats" are discussed absolutely, instead of relative to a threat model and mostly devoid of any context.
https://news.ycombinator.com/item?id=38845878
> Want to be a bounty beggar? It's dead simple, you just use tools like Qualys' SSL Labs, dmarcian or Scott Helme's Security Headers, among others. Easy point and shoot magic and you don't need to have any idea whatsoever what you're doing!
Alternatively to bounty begging, one could use these tools to find vulnerabilities and then try to figure out why it is a vulnerability and how it might be practically exploited. Seems like a good way to learn real security research. (Don’t actually exploit it, though...)
Wrong place, did not read. Here go the "security researchers" begging/threatening for money.
~ $ whois -h whois.abuse.net ftp.bit.nl
abuse@bit.nl (for bit.nl)
If you want an abuse address, resolve the domain to its IP and query RIPE for the abuse mail. Every RIPE member must specify an abuse address, and what they specify is the source of truth for their AS. No need to query a crowdsourced hearsay service.
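If you want to script that, the RIPE database exposes the contact as an `abuse-mailbox:` attribute in plain whois output. A sketch (the parsing helper and the sample text are illustrative, not a complete whois client):

```javascript
// Pull the registered abuse contact out of raw RIPE whois output,
// e.g. the result of `whois -h whois.ripe.net <ip>`.
function abuseContact(whoisText) {
  const m = whoisText.match(/^abuse-mailbox:\s*(\S+)/m);
  return m ? m[1] : null;
}

// Sample output shape (203.0.113.0/24 is a documentation range):
const sample = [
  "inetnum:        203.0.113.0 - 203.0.113.255",
  "abuse-mailbox:  abuse@example.net",
].join("\n");
console.log(abuseContact(sample)); // "abuse@example.net"
```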
I did not know of this service until now, so any correct result it has for any of my domains is a matter of coincidence.
Same happens to us regularly. A server called 'downloads', in a directory 'public', whose homepage has the text 'this is a public server with files for everybody to download'. There's even a file 'all-files-are-open-and-public-not-company-secrets.txt' in the directory.
I'm convinced that the cybersecurity and security-research industry is largely a pass-the-blame market. And honestly, that's not a bad thing - it's smart and even necessary. If a publicly traded company suffers a security breach, they can say, "We hired [security firm] to harden our systems - if something went wrong, it's on them." This way, the company deflects blame, protects its reputation, and keeps shareholders satisfied. All in all, not a bad strategy.
The side effect of this process is that if [security firm] is doing their job then the systems do actually get hardened and the number of breaches is reduced as a result. In other words, this machine works as intended.
A common challenge is assessing whether [security firm] did actually do their job, or whether there just weren't any tigers around here in the first place. Hence, SOC2.
It's unfortunately even worse than that, in my opinion. Security is making computers not do things, while software engineers spend much of their day trying to make computers do new things. It is almost necessary that security work adds friction to other work.
So not only is it often difficult to measure the actual impact of a security mitigation, it is often possible (or even easy) to measure the friction caused by a security mitigation. You really need everybody to believe in the necessity of a mitigation or else it becomes incredibly easy to cut.
I'm at work and a little afraid of clicking on the 'pr0n' folder :)
Got you covered. All SFW, just the filenames are a bit offensive, if you speak Dutch. It's like that "two naked tits/boobies" (the birds) joke, and others.
Critical vulnerability on port 80: an attacker could exfiltrate all comments posted therein. Please provide a bug bounty for this critical vulnerability.
The proof of concept malicious content: cross the road on red.
Please think of the children!
Please send me €10.000 for this disclosure.
Hi. System Owner here. Funny to see this pop up on ycombinator and thanks for pointing out a security.txt was missing. Fair point. I've added it with a clearer note not to report "open directory", specifically. Never forget to have a wonderful day, everyone.
There's a harmless "vulnerability" that some automated scanners keep finding on my website. I've deliberately left it "unfixed", and block everyone who emails me about it.
> There is NO SENSITIVE INFORMATION on this server.
So if, hypothetically, I were to find a .csv file with emails, names, dates of birth and addresses on this website, I should not send an email, because it can't possibly be a data leak?
For those who haven't been on the receiving side of a beg bounty, you'd get an email something like this (I make no claims to its correctness):
To: abuse@yourdomain.com
Subject: Bug bounty , PII data made available port 22. Please provide bug bounty for critical software flaw.
Issue description
This is critical, exploitation of the ftp server provides source code to a popular debian server allowing attacker to sidestep usual reverse engineering procedures required to attack a system. (Authentication Bypass).
I will release this bug in thirty (30) days if no bug bounty has been granted and attackers will be able to take full advantage of this problem.
Reproducibility
This issue is trivial to reproduce, with popular hacking tools such as ftp and internet explorer.
Bounty value
Please be mindful and understand that this research takes up many hours and bugs like this can fetch up to $25,000 on popular bug bounty programs ( https://www.hackerone.com/ ).
Source: https://jblevins.org/log/ssh-vulnkey
That sounds like the IT equivalent of those "I have video of you masturbating" emails (which, by the way, work - I once had a friend tearfully beg for my help when he got one. It was difficult to convince him it was a scam).