stavros 3 hours ago

This is only an issue if you think LLMs are infallible.

If someone said "I asked my assistant to find the best hot-dog eaters in the world and she got her information from a fake article one of my friends wrote about himself, hah, THE IDIOT", we'd all go "wait, how is this your assistant's fault?". Yet, when an LLM summarizes a web search and reports on a fake article it found, it's news?

People need to learn that LLMs are people too, and you shouldn't trust them more than you'd trust any random person.

kulahan 3 hours ago | parent | next [-]

A probably unacceptably large portion of the population DOES think they’re infallible, or at least close to it.

jen729w 3 hours ago | parent | next [-]

Totally. I get screenshots from my 79yo mother now that are the Gemini response to her search query.

Whatever that says is hard fact as far as she's concerned. And she's no dummy -- she just has no clue how these things work. Oh, and Google told her so.

mcherm 3 hours ago | parent | prev [-]

That may be true, but the underlying problem is not that LLMs accurately report information published in a single person's blog article. The underlying problem is that a portion of the population believes they are infallible.

crowbahr 3 hours ago | parent | prev | next [-]

If you give your assistant a task and they fall for obvious lies they won't be your assistant long. The point of an assistant is that you can trust them to do things for you.

LocalH 3 hours ago | parent | prev | next [-]

> People need to learn that LLMs are people too

LLMs are absolutely not people

consp 3 hours ago | parent | prev | next [-]

People have the ability to think critically, LLMs don't. Comparing them to people is giving them properties they do not possess. The fact that people often skip thinking does not mean they are unable to. The assistant got a lousy job and did it with the minimum effort they could get away with. None of these things apply or should apply to machines.

stavros 2 hours ago | parent [-]

LLMs are not machines in any sense of the word as we've been using it so far.

jml78 3 hours ago | parent | prev | next [-]

When the first 10 results on Google are AI generated and Google is providing an AI overview, this is an issue. We can say don’t use Google, but we all know normal people use Google out of habit.

em-bee 2 hours ago | parent | prev | next [-]

i don't quite follow your argument, i think the opposite is true. you should trust LLMs LESS than any random person.

the problem is not whose fault it is. the problem is: are you even able to recognize that this information is wrong.

if it is not the assistant's fault then clearly the answer is no. you are not blaming the assistant for not recognizing the error. but that means that most other people will also not recognize the error. those who do recognize the error are only able to do so because they have additional information that most other people would not have.

i trust other humans because the cost of verifying everything is too high. this matters especially for information that is not of critical importance. getting some trivia wrong is at most embarrassing, it's not critical.

LLMs get stuff wrong more often than humans, and so the risk of getting a wrong answer is higher, and therefore checking is always necessary, but that negates the benefit of using them in the first place.

which means: you will only use LLMs if you intend to trust them. the same way i will only ask another human if i intend to trust them.

when i ask a human to give me some information, then i am not asking a random person, but i am asking a person that i believe can give me the right answer because they have the necessary experience, skill, knowledge to give that answer. when i am asking an LLM, i am asking with the same expectation, otherwise, why would i even bother?

it's not a question of infallibility. it's a question of usability. but to me, an LLM that is not infallible is also not usable.

the problem is that LLMs promise more than they can actually do, and this article is one way to expose that false promise. it is news because LLMs are news.

ThePowerOfFuet 3 hours ago | parent | prev [-]

>This is only an issue if [people] think LLMs are infallible.

I have some news for you.