ACCount37 3 days ago

It's nothing new. LLMs are unreliable, but in the same ways humans are.

latexr 3 days ago | parent | next [-]

But LLM output is not being treated the same as human output, and that comparison is both tired and harmful. People routinely act like "this is true because ChatGPT said so," while they wouldn't do the same for any random human.

LLMs aren't being sold as unreliable. On the contrary, they are being sold as the tool that will replace everyone and do a better job at a fraction of the price.

ACCount37 3 days ago | parent [-]

That comparison is more useful than the alternatives. Anthropomorphic framing is one of the best framings we have for understanding what properties LLMs have.

"LLM is like an overconfident human" certainly beats both "LLM is like a computer program" and "LLM is like a machine god". It's not perfect, but it's the best fit at 2 words or less.

krupan 3 days ago | parent | prev [-]

Um, no. They produce unreliable output at a much faster pace and larger scale than any human. They are also more confident while being unreliable than most humans (politicians and other bullshitters aside, most humans admit when they aren't sure about something).