avalys · a day ago
All we can do is share anecdotes here, but I have found ChatGPT to be confidently incorrect about important details in nearly every question I ask about a complex topic. Legal questions, questions about AWS services, products I want to buy, the history of a specific field, so many things. It gives answers that do a really good job of simulating what a person who knows the topic would say. But details are wrong everywhere, often in ways that completely change the relevant conclusion.
DBNO · 13 hours ago
I definitely agree that ChatGPT can be incorrect; I’ve seen that myself. In my experience, though, it’s more often right than wrong. So when you say “in nearly every question I ask about a complex topic”, I’m curious what specific failures you’re seeing. Would you be open to sharing a concrete example? Specifically: the question you asked, the part of the answer you know is wrong, and what the correct answer should be. I have a hypothesis (not a claim) that some of these failures might be prompt-sensitive, and I’d be curious to try it as a small experiment if you’re willing.
Jarwain · 19 hours ago
I don't think LLMs do a significantly worse job than the average human professional. People get details wrong all the time, too.