mmooss 6 hours ago
They said earlier that they didn't verify the quotes. I understand them to mean that the LLM output text that included quotes; they assumed the output was accurate and found it so appealing, on an emotional level, that they just went with it without checking.

The most valuable lesson here, by far, is not about other people but about ourselves. This person is trained, takes it seriously, and advocates for making sure the AI is supervised, and still got caught by the emotional manipulation of LLM design [0]. We are all at risk. If we look at the other person and mock them, thinking we are better than them, we only expose ourselves to more risk. If we think - oh my goodness, look what happened, this is perilous - then we gain from what happened and can protect ourselves.

(We might also ask why this valuable tool includes such a manipulative interface. Don't take it for granted; it's not at all necessary for LLMs to work, and they could just as easily sound like a-holes.)

[0] I mean that they are obviously carefully designed to sound appealing