wavemode | a day ago
LLMs are trained and fine-tuned on real conversations, so resembling a real conversation doesn't really rule out hallucination. If the story in OP about obtaining a company's private financial data is true (i.e. the numbers are correct and nonpublic), that would be a smoking gun. Either way, it's a bad look for OpenAI not to have responded to this. Even if the resolution turns out to be that these are just hallucinations, the report should have been investigated and answered by now if OpenAI actually cares about security.
lostmsu | 19 hours ago | parent
I would not say that OpenAI must respond in a timely manner to bogus bug reports of any kind, this one included.