ACCount37 3 days ago
It's nothing new. LLMs are unreliable, but in the same ways humans are. | ||||||||
latexr 3 days ago
But LLMs’ output is not being treated the same as human output, and that comparison is both tired and harmful. People are routinely acting like “this is true because ChatGPT said so” while they wouldn’t do the same for any random human. LLMs aren’t being sold as unreliable. On the contrary, they are being sold as the tool which will replace everyone and do a better job at a fraction of the price.
krupan 3 days ago
Um, no. They are unreliable at a much faster pace and larger scale than any human. They are also more confident in their unreliable output than most humans are: politicians and other bullshitters aside, most people admit when they aren't sure about something.