lrvick 20 hours ago

I run an infosec firm and we have done attacks like this on my clients over and over and over in audits. I always say any bored teen could do most of what we do, because most companies are too busy feature farming to have any time for responsible security hardening, and now I have yet another great citation.

Unfortunately a competitive rate agreed to in advance with a company before we do any pentesting is the only way we have ever been able to get paid fairly for this sort of work. Finding bugs in the wild, as this researcher did, often gets wildly underpaid relative to the potential impact of the bug, if the company pays or takes it seriously at all.

These companies should be ashamed of paying out so little for this, and it is only a matter of time before they insult the wrong researcher, one who decides to pursue the path of maximum profit, or maximum damage, with a vuln like this.

jijijijij 18 hours ago | parent | prev [-]

> Unfortunately a competitive rate agreed to in advance with a company before we do any pentesting is the only way we have ever been able to get paid fairly for this sort of work.

So, rough estimate, how much would you have made for this?

lrvick 18 hours ago | parent [-]

We normally find things like this in our usual 60-hour audit blocks. Rates change over time with demand, but today an audit of that length would be $27k.

Even that is quite cheap compared to letting a blackhat find this.

lowkey_ 18 hours ago | parent [-]

If I can ask about the business model, as I have a friend with a similar predicament: what percent of the time do you find vulnerabilities in those audits? Do companies push back if you don't find vulnerabilities?

lrvick 14 hours ago | parent | next [-]

We have never issued a clean report in our ~5 years of operation.

Some firms have a reputation for issuing clean reports that look good to bosses and customers, but we prefer working with clients that want an honest assessment of their attack surface and of how motivated blackhats will end their business.

We also stick around on retainer for firms that want security engineering consulting after audits to close the gaps we find and re-architect as needed. Unused retainer hours go into producing a lot of open source software to accelerate fixing the problems we see most often. This really incentivizes us to produce comprehensive reports that take into account how the software is developed and used in the real world.

Under our published threat model, few companies pass level 1, and we have helped a couple get close to level 2 with post-audit consulting.

Our industry has a very long way to go, as current standard practices are wildly dangerous and make life easy for blackhats.

https://distrust.co/threatmodel.html

rainonmoon 14 hours ago | parent | prev | next [-]

As someone in a related line of work: we find vulnerabilities so close to 100% of the time that it might as well be 100% of the time. Whether they're practically exploitable, or exceed your risk appetite, is the real question.

TheDong 13 hours ago | parent | prev [-]

These companies almost always produce "vulnerabilities", but they're also almost always trash.

"Finding: This dependency is vulnerable to CVE-X, update it, severity S". And then of course that dependency is only used during development, the vulnerable code isn't called, and they didn't bother to dig into that.

"Finding: Server allows TLS version 1.1, while it's recommended to only support version 1.2+", yeah, sure, I'm sure that if someone has broken TLS 1.1, they're coming for me, not for the banks, google, governments, apple, etc, everyone else still using TLS 1.1

... So yeah, all the audits will have "findings", they'll mostly be total garbage, and they'll charge you for it. If you're competent, you aren't going to get an RCE or XSS out of a security audit, because it simply will not be there.
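
For the TLS finding above: the remediation is usually a one-line minimum-version setting wherever TLS terminates. A minimal Go sketch, assuming a stock net/http server and placeholder cert.pem/key.pem paths (both hypothetical here):

    package main

    import (
        "crypto/tls"
        "log"
        "net/http"
    )

    func main() {
        mux := http.NewServeMux()
        mux.HandleFunc("/", func(w http.ResponseWriter, r *http.Request) {
            w.Write([]byte("ok\n"))
        })

        srv := &http.Server{
            Addr:    ":8443",
            Handler: mux,
            TLSConfig: &tls.Config{
                // Reject TLS 1.0/1.1 handshakes outright; 1.2 is the floor.
                MinVersion: tls.VersionTLS12,
            },
        }

        // cert.pem and key.pem are placeholder paths for this sketch.
        log.Fatal(srv.ListenAndServeTLS("cert.pem", "key.pem"))
    }

The same one-line change exists in most TLS stacks and reverse proxies; the finding itself is trivial to close, which is part of why it reads as filler in a paid report.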