▲ | netdevphoenix 19 hours ago | |||||||||||||||||||||||||
I always wonder why people not working or planning to work in infosec do this. I get giving up your free time to build open source functionality used by rich for-profit companies that will just make them rich because that's the nature of open source. But literally giving your free time to help a rich company get richer that I do not get. My only explanation is that they enjoy the process. It's like people spending their free time giving information and resources when they would not do that if that person was in front of them. | ||||||||||||||||||||||||||
▲ | 42lux 18 hours ago | parent | next [-] | |||||||||||||||||||||||||
You are on hackernews. It’s curiosity not only about the flaw in their system but also how they as a system react to the flaw. Tells you a lot about companies you can later avoid when recruiters knock or you send out resumes. | ||||||||||||||||||||||||||
| ||||||||||||||||||||||||||
▲ | bflesch 13 hours ago | parent | prev [-] | |||||||||||||||||||||||||
> rich company get richer They have heaps of funding, but are still fundraising. I doubt they're making much money. I do have an extensive infosec background, just left corporate security roles because it's a recipe for burnout because most won't care about software quality. Last year I've reported a security vulnerability in a very popular open source project and had to fight tooth and nail with highly-paid FAANG engineers to get it recognized + fixed. This ChatGPT vulnerability disclosure was a quick temperature check on a product I'm using on a daily basis. The learning for me is that their BugCrowd bug bounty is not worth to interact with. They're tarpitting vulnerability reports (most likely due to stupidity) and ask for videos and screenshots instead of understanding a single curl command. Through their unhelpful behavior they basically sent me on an organizational journey of trying to find a human at OpenAI who would care about this security vulnerability. In the end I failed to reach anyone at OpenAI, and due to sheer luck it got fixed after the exposure on HackerNews. This is their "error culture": 1) Their security team ignored BugCrowd reports 2) Their data privacy team ignored {dsar,privacy}@openai.com reports 3) Their AI handling support@openai.com didn't understand it 4) Their colleagues at Microsoft CERT and Azure security team ignored it (or didn't care enough about OpenAI to make them look at it). 5) Their engineers on github were either too busy or didn't care to respond to two security-related github issues on their main openai repository. 6) They silently disable the route after it pop ups on HackerNews. Technical issues: 1) Lack of security monitoring (Cloudflare, Azure) 2) Lack of security audits - this was a low hanging fruit 3) Lack of security awareness with their highly-paid engineers: I assume it was their "AI Agent" handling requests to the vulnerable API endpoint. How else would you explain that the `urls[]` parameter is vulnerable to the most basic "ignore previous instructions" prompt injection attack that was demonstrated with ChatGPT years ago. Why is this prompt injection still working on ANY of their public interfaces? Did they seriously only implement the security controls on the main ChatGPT input textbox and not in other places? And why didn't they implement any form of rate limiting for their "AI Agent"? I guess we'll never know :D | ||||||||||||||||||||||||||
|