gonzobonzo | 4 days ago
> I’ve found ChatGPT and other LLMs can struggle to evaluate evidence - to understand the biases behind sources - i.e. taking data from a sketchy think tank as gospel.

This is what I keep finding: it mostly repeats surface-level "common knowledge." It usually takes a few back-and-forths to get to whether or not something is actually true - asking for the numbers, asking for the sources, asking for the excerpt from the sources where they actually provide that information, verifying to make sure it's not hallucinating, etc. A lot of the time, it turns out its initial response was completely wrong. I imagine most people just take the initial (often wrong) response at face value, though, especially since it tends to repeat what most people already believe.
athrowaway3z | 4 days ago | parent
> It usually takes a few back-and-forths to get to whether or not something is actually true

This cuts both ways. I have yet to find an opinion or fact I could not make ChatGPT agree with as if it were objectively true. Knowing how to trigger (im)partial thought is a skill in and of itself, and something we need to be teaching in school ASAP (which some schools already are, in one way or another).