nottorp 10 hours ago
I don't think specific examples matter. My opinion is that since neural networks, and especially these LLMs, aren't exactly deterministic, any kind of "we want to avoid liability" censorship will affect all answers, whether or not they're related to the topics being censored. And we get enough hallucinations even without censorship...