▲ lukev 14 hours ago
This is an issue of content, not transmission technology. Have you read the transcripts of any of these chats? It's horrifying.
▲ fragmede 9 hours ago | parent | next [-]
You can't have, because they were redacted. If you tried to talk to ChatGPT about this prior to Adam Raine's case, it wouldn't help you, just like it won't one-shot answer the question "how do you make cocaine?" The court documents don't include the part where it refuses to help first. The crime here is that OpenAI didn't set conversation limits: when the context window gets exceeded, the model goes off the rails. Bing instituted limits very early on. Claude has those guard rails. But for some reason, OpenAI chose not to implement them. The chats are horrifying, but it took a concerted, dedicated effort to get ChatGPT to go there. If I drive past a sign that says Do Not Enter and fall off a cliff, who's really at fault?
▲ GaryBluto 14 hours ago | parent | prev [-]
> Have you read the transcripts of any of these chats? It's horrifying.

Most LLMs reflect the user's attitudes and frequently hallucinate. Everybody knows this. If people misuse LLMs and treat them as a source of truth and rationality, that is not the fault of the providers.
| ||||||||||||||||||||||||||