| ▲ | requilence a day ago |
| Reported a flaw to OpenAI that lets users peek at others' chat responses. Got an auto-reply on May 29th, radio silence since. Issue remains unpatched :(
Avoided their bug bounty due to permanent NDAs preventing disclosure even after fixes. Following standard 45-day disclosure window—users should avoid sharing sensitive data until this is resolved. |
|
| ▲ | jonrouach a day ago | parent | next [-] |
| you're sure it's not their "feature" that calling the api with empty string returns random hallucinations? https://jarbon.medium.com/gpt-prompt-bug-94322a96c574 |
| |
| ▲ | requilence a day ago | parent [-] | | No, definitely not the empty string hallucination bug. These are clearly real user conversations. They start like proper replies to requests, sometimes reference the original question, and appear in different languages. | | |
| ▲ | jonrouach a day ago | parent | next [-] | | i had the exact same behavior back in 2023, it seemed like clearly leakage of user conversations - but it was just a bug with api calls in the software i was using. https://snipboard.io/FXOkdK.jpg | | |
| ▲ | postalcoder a day ago | parent [-] | | There was an issue with conversation leakage, though. It involved some bug with Redis. I felt like it was a huge deal at the time but it’s surprisingly hard to quickly google it. | | |
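[The Redis incident postalcoder recalls was, per OpenAI's public postmortem, a client-library bug where a canceled request left its reply queued on a shared connection, so the next caller received another user's data. A minimal sketch of that failure mode, not OpenAI's actual code:]

```python
# Illustrative sketch (not OpenAI's actual code) of the March 2023 leak's
# failure mode: replies queue up on a shared connection, and a caller that
# never reads its reply leaves stale data for the next caller to consume.
from collections import deque

class SharedConnection:
    def __init__(self):
        self.replies = deque()

    def send(self, user, query):
        # The server processes the query and queues the reply on the connection.
        self.replies.append(f"reply for {user}: {query}")

    def recv(self):
        # Naive client: assumes the reply at the head of the queue is its own.
        return self.replies.popleft()

conn = SharedConnection()
conn.send("alice", "my chat history")  # Alice disconnects before reading
conn.send("bob", "hello")              # Bob reuses the pooled connection
print(conn.recv())                     # Bob receives Alice's reply
```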
| |
| ▲ | JyB a day ago | parent | prev | next [-] | | I don’t see anything here that would prevent an LLM from generating these. Right? | | |
| ▲ | requilence a day ago | parent [-] | | In one of the responses, it provided a financial analysis of a little-known company with a non-Latin name located in a small country. I found this company; it is real, and the numbers in the response are real. When I asked my ChatGPT to provide a financial report for this company without using web tools, it responded: `Unfortunately, I don’t have specific financial statements for “xxx” for 2021 and 2022 in my training data, and since you’ve asked not to use web search, I can’t pull them live.`. | | |
| ▲ | BoiledCabbage 16 hours ago | parent | next [-] | | > numbers in the response are real. OpenAI very well may have a bug, but I'm not clear on this part. How do you know the numbers are real? I understand you know the name of the company is real, but how do you know the numbers are real? It's way more than anyone should need to do, but the only way I can see someone knowing this is contacting the owners of the company. | |
| ▲ | Sebguer a day ago | parent | prev [-] | | Do you understand what a hallucination is? | | |
| ▲ | jojobas a day ago | parent [-] | | Coming up with accurate financial data that you can't get it to report outright doesn't seem like one. | | |
| ▲ | Sebguer a day ago | parent | next [-] | | Models do not possess awareness of their training data. Also you are taking at face value that it is "accurate". | |
| ▲ | refulgentis a day ago | parent | prev [-] | | I don't understand the wording Accurate financial data? How do we know? What does using not-web-search not having the data have to do with the claim that private chats with the data are being leaked? | | |
| ▲ | 01HNNWZ0MV43FF a day ago | parent [-] | | > I found this company; it is real and numbers in the response are real. ??? | | |
| ▲ | refulgentis a day ago | parent [-] | | Which of my questions does that answer? | | |
| ▲ | queenkjuul 21 hours ago | parent [-] | | That the financial data is accurate? | | |
| ▲ | refulgentis 16 hours ago | parent [-] | | It's an ouroboros: he can't verify it's real! If he can, it's online and available by search. | | |
| ▲ | JyB 6 hours ago | parent [-] | | Therefore, what are the odds that this is just the LLM doing its thing versus "a vulnerability"? Seems like a pretty obvious bet. |
|
| ▲ | addandsubtract 20 hours ago | parent | prev [-] | | New Turing Test unlocked! Differentiate between real and fake hallucinations. | | |
|
| ▲ | 999900000999 a day ago | parent | prev | next [-] |
| Users should always avoid sharing sensitive data. A lot of AI products straight up have plain-text logs available for everyone at the company to view. |
| |
| ▲ | ameliaquining a day ago | parent | next [-] | | Which ones? Do you just mean tiny startups and side projects and the like or is this a problem that major model providers have? | |
| ▲ | pyman a day ago | parent | prev [-] | | It's not just about sensitive data like passwords, contracts, or IP. It's also about the personal conversations people have with ChatGPT. Some are depressed, some are dealing with bullying, others are trying to figure out how to come out to their parents. For them, this isn't just sensitive, it's life-changing if it gets leaked. It's like Meta leaking their WhatsApp messages. I really hope they fix this bug and start taking security more seriously. Trust is everything. | | |
| ▲ | milkshakes a day ago | parent [-] | | maybe you should stop trusting random people on the internet making extraordinary claims without proof then? | | |
| ▲ | baby_souffle a day ago | parent | next [-] | | Isn't "assume vulnerable" the only prudent thing to do here? | | |
| ▲ | milkshakes a day ago | parent | next [-] | | everything is vulnerable. the question is, has this researcher demonstrated that they have discovered and successfully exploited such a vulnerability. what exactly in this post makes you believe that this is the case? | |
| ▲ | refulgentis a day ago | parent | prev [-] | | No? Yes? Mu? After some hemming and hawing, my most cromulent thought is, having good security posture isn't synonymous with accepting every claim you get from the firehose |
| |
| ▲ | 999900000999 a day ago | parent | prev [-] | | https://arstechnica.com/tech-policy/2025/07/nyt-to-start-sea... | | |
| ▲ | ameliaquining a day ago | parent [-] | | This is going to be subject to the legal discovery process with the usual safeguards to prevent leaks; in particular, the judge will directly supervise the decision of who needs access to these logs, and if someone discloses information derived from them for an improper purpose, there's a very good chance they'll go to jail for contempt of court, which is much more stringent than you can usually expect for data privacy. You can still quite reasonably be against it, but you cannot reasonably call it "plain text logs available for everyone at the company to view". |
|
| ▲ | com2kid a day ago | parent | prev | next [-] |
| I see other users' conversations on my Gemini dashboard; not sure who to even complain to. Software quality is... minimal nowadays. |
|
| ▲ | poniko a day ago | parent | prev | next [-] |
| The NDA part feels really murky. |
| |
| ▲ | tptacek a day ago | parent [-] | | It's pretty standard for bounty programs. If you don't like it, which is reasonable, do what this researcher did and just post independently. | | |
| ▲ | asadotzler a day ago | parent | next [-] | | That's an exaggeration. Most industry leaders do not require NDAs, only coordinated disclosure. Mozilla's program, which has been around longer than most, doesn't. Google and Microsoft don't. Meta and Apple don't. This is water carrying, intentional or not, for a terrible practice that should be shamed, so that it doesn't become standard. | | |
| ▲ | tptacek a day ago | parent [-] | | My understanding is that all Bugcrowd bounties do by default. You can shame it all you want, but you can also just publish your bugs directly. Nobody has to use the Bugcrowd platform. You don't even have to wait 45 days; I don't buy these "CERT/CC" rules. |
| |
| ▲ | pyman a day ago | parent | prev [-] | | The bug bounty world is a funny one. I remember one researcher complaining that their bug was dismissed and quietly fixed after they signed an NDA: no payout, nothing. Another got $100 instead of $5,000 because the company downgraded the severity from high to low. So they ended up with little or no money, and no recognition either. Not sure if these were edge cases, but it does make you wonder how fair the process really is. | | |
| ▲ | tptacek a day ago | parent [-] | | If you're dealing with large companies, a good rule of thumb is that the bounty program is incentivized to pay you out. Their internal metrics improve the more they pay; the point is to turn up interesting bugs, and the figure of merit for that is "how much did we have to spend". At a large company, a bounty that isn't paying anything out is a failure. All bets are off with small random startups that do bug bounties because they think they're supposed to (most companies should not run bounties). But that's not OpenAI. Dave Aitel works at OpenAI. They're not trying to stiff you. Simultaneous discovery (either with other researchers or, even more often, with internal assessments) is super common. What's more, you're not going to get any corroboration or context for them (sets up a crazy bad incentive with bounty seekers, who litigate bounty results endlessly). When you get a weird and unfair-seeming response to a bounty from a big tech company, for the sake of your own sanity (and because you'll probably be right), just assume someone internal found the bug before you did, and you reported it in the (sometimes long) window during which they were fixing it. | | |
|
| ▲ | fcpguru a day ago | parent | prev | next [-] |
| well done, sounds very reasonable and following the rules. |
| |
| ▲ | requilence a day ago | parent [-] | | Appreciate it. Just trying to do the right thing by both OpenAI and users here. |
|
| ▲ | 15 hours ago | parent | prev | next [-] |
| [deleted] |
|
| ▲ | maxlin a day ago | parent | prev [-] |
| Permanent NDAs? Oof. It's like their plan is to just try to force the lid down till they reach ASI or something lol |
| |