astrange a day ago
GPT-4o's writing style is so distinctive that I find it hard to believe it could fake a user query. You can spot anyone using AI to write from a mile away. It stopped saying "delve," but it started constantly saying things like "It's not X, it's Y" and "check out the vibes (string of wacky emoji)."
wavemode a day ago | parent
LLMs are trained and fine-tuned on real conversations, so resembling a real conversation doesn't rule out hallucination. If the story in OP about receiving a company's private financial data is true (i.e. the numbers are correct and nonpublic), that could be a smoking gun. Either way, it's a bad look for OpenAI not to have responded to this. Even if the resolution turns out to be that these are just hallucinations, it should have been investigated and responded to by now if OpenAI actually cares about security.