JyB a day ago

I believe it is extremely important to disclose that the 'response leaks' you obtained did not originate from the LLM models themselves, but rather came through other insecure systems / in a more conventional manner.

Just to avoid yet another case of hallucinated outputs getting misinterpreted.

requilence a day ago | parent [-]

Right, thank you for the suggestion. Just added a paragraph to the original blog post.

tabletcorry a day ago | parent [-]

Your added paragraph appears to suggest the opposite, that this was an LLM response. Was the "leaked data" a response from an LLM directly?

JyB 6 hours ago | parent [-]

Yes, apparently, which makes this report pretty flimsy.

tptacek 3 hours ago | parent [-]

Upthread, OpenAI's security team confirms it's a false report; it's a variant of the empty-prompt hallucination.

JyB 12 minutes ago | parent [-]

Incredible that so many people still don't understand what an LLM is. Especially people you would expect to grasp it.