em-bee 2 hours ago
i don't quite follow your argument, i think the opposite is true: you should trust LLMs LESS than any random person.

the problem is not whose fault it is. the problem is: are you even able to recognize that this information is wrong? if it is not the assistant's fault, then clearly the answer is no. you are not blaming the assistant for not recognizing the error, but that means that most other people will also not recognize it. those who do recognize the error are only able to do so because they have additional information that most other people would not have.

i trust other humans because the cost of verifying everything is too high. this matters especially for information that is not of critical importance: getting some trivia wrong is at most embarrassing, not critical. LLMs get things wrong more often than humans, so the risk of a wrong answer is higher and checking is always necessary, but that negates the benefit of using them in the first place. which means: you will only use an LLM if you intend to trust it, the same way i will only ask another human if i intend to trust them.

when i ask a human for information, i am not asking a random person, i am asking someone i believe can give me the right answer because they have the necessary experience, skill and knowledge. when i ask an LLM, i am asking with the same expectation, otherwise why would i even bother?

it's not a question of infallibility, it's a question of usability. but to me, an LLM that is not infallible is also not usable. the problem is that LLMs promise more than they can actually deliver, and this article is one way to expose that false promise. it is news because LLMs are news.