andrewflnr 2 hours ago
You don't need to know how an LLM works to realize "sometimes the magic ChatGPT box tells me wrong things". Even if you fully fall for the anthropomorphism, this requires no more awareness than noticing, after the third or fourth thing your weird uncle tells you turns out not to be true, that maybe you shouldn't take him at his word.
ben_w an hour ago
If human psychology worked like that, lotteries wouldn't be a thing. Nor prayer. There wouldn't be horoscopes in newspapers, nor homeopathy.

One of the various oddities with LLMs in particular is that they're trained on feedback from users who can upvote or downvote responses, or A/B-test which of two is "better". This naturally selects for responses that are more convincing, which only loosely correlates with being more correct.
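For the curious, that feedback loop is roughly the pairwise-preference (Bradley-Terry) objective used to train reward models in RLHF-style pipelines. A minimal PyTorch sketch, illustrative only: the embeddings, dimensions, and names are made up, not any vendor's actual code.

    import torch
    import torch.nn as nn

    class RewardModel(nn.Module):
        # Toy stand-in: maps a response embedding to a scalar "goodness" score.
        def __init__(self, dim: int = 16):
            super().__init__()
            self.score = nn.Linear(dim, 1)

        def forward(self, x: torch.Tensor) -> torch.Tensor:
            return self.score(x).squeeze(-1)

    def preference_loss(model, chosen, rejected):
        # Bradley-Terry loss: push score(chosen) above score(rejected).
        # Nothing here checks whether "chosen" is true, only that a user preferred it.
        margin = model(chosen) - model(rejected)
        return -nn.functional.logsigmoid(margin).mean()

    # Hypothetical batch: embeddings of the response users preferred vs. the one they rejected.
    chosen, rejected = torch.randn(8, 16), torch.randn(8, 16)

    model = RewardModel()
    loss = preference_loss(model, chosen, rejected)
    loss.backward()  # gradients reward whatever users found convincing

The optimization target is "which answer did a human prefer", so correctness only enters to the extent that raters can detect it.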
| ||||||||