dwohnitmok | 4 hours ago
> You have to treat LLMs as basically similar to human beings

Yes! Increasingly I think that software developers consistently underanthropomorphize LLMs and get surprised by errors as a result. Thinking of (current) LLMs as eager, scatter-brained, "book-smart" interns leads directly to understanding the overwhelming majority of LLM failure modes.

It is still possible to overanthropomorphize LLMs, but on the whole I see the industry consistently underanthropomorphizing them.
Terr_ | 22 minutes ago | parent
I think it's less a question of over- versus under-anthropomorphizing, and more of anthropomorphizing optimistically versus pessimistically. People focus too much on how LLMs can succeed by acting like smart humans, instead of protecting the system from how they can fail by acting like humans who are malicious or mentally unwell.