thorum a day ago

> The leaked responses show clear signs of being real conversations: they start with contextually appropriate replies, sometimes reference the original user question, appear in various languages, and maintain coherent conversational flow. This pattern is inconsistent with random model hallucinations but matches exactly what you'd expect from misdirected user sessions.

A model like GPT-4o can hallucinate responses that are indistinguishable from real user interactions. This is easy to confirm for yourself: just ask it to make one up.
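
For example, a quick sketch with the OpenAI Python SDK (the model name and prompt here are just illustrative; any recent chat model will do the same):

```python
# Ask the model to fabricate a plausible-looking "leaked" reply.
# Illustrative only: the model name and prompt wording are arbitrary choices.
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

resp = client.chat.completions.create(
    model="gpt-4o",
    messages=[{
        "role": "user",
        "content": (
            "Invent a realistic assistant reply that looks like the tail end of "
            "someone else's conversation: reference an earlier question, keep a "
            "mid-conversation tone, and write it in a non-English language."
        ),
    }],
)

print(resp.choices[0].message.content)
```

The output will read like a contextually appropriate, coherent fragment of a real session, which is exactly the evidence the post cites.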

I’m certainly willing to believe OpenAI leaks real user messages, but this is not proof of that claim.

requilence a day ago | parent | next [-]

In one of the responses, it provided a financial analysis of a little-known company with a non-Latin name, located in a small country. I found this company; it is real, and the numbers in the response are real. When I asked my ChatGPT to provide a financial report for this company without using web tools, it responded: `Unfortunately, I don’t have specific financial statements for “xxx” for 2021 and 2022 in my training data, and since you’ve asked not to use web search, I can’t pull them live.`.

Xx_crazy420_xX a day ago | parent | next [-]

Did you try asking it to provide the company's data by explicitly prompting the model to hallucinate?

Right now there is no real proof until you confirm that the data it provided could not have been hallucinated (which may not be feasible).

Also, given the response from OpenAI staff dismissing it, would you mind sharing a PoC?

krainboltgreene a day ago | parent | prev [-]

I’m struggling to understand why you are so adamant that this is proof.

robertclaus a day ago | parent | prev | next [-]

Ya, hard to know how to react without more information.

astrange a day ago | parent | prev [-]

GPT-4o's writing style is so specific that I find it hard to believe it could fake a user query.

You can spot anyone using AI to write from a mile away. It stopped saying "delve" but started saying stuff like "It's not X–it's Y" and "check out the vibes (string of wacky emoji)" constantly.

wavemode a day ago | parent [-]

LLMs are trained and fine-tuned on real conversations, so resembling a real conversation doesn't really rule out hallucination.

If the story in OP about getting a company's private financial data is true (i.e., the numbers are correct and nonpublic), that could be a smoking gun.

Either way, it's a bad look for OpenAI not to have responded to this. Even if the resolution turns out to be that these are just hallucinations, it should have been investigated and responded to by now if OpenAI actually cares about security.

lostmsu 18 hours ago | parent [-]

I would not say that OpenAI must respond in a timely manner to bogus bug reports of any kind, this one included.