pu_pe 2 hours ago

Out of all the conceptual mistakes people make about LLMs, one that needs to die very fast is assuming you can test what a model "knows" by asking it a single question. This whole thread is people asking different models a question once and reporting one particular answer, which is the mental model you would use to decide whether a person knows something.
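
To make that concrete, a test would have to look something like this rough sketch: sample the same prompt many times and look at the distribution of answers rather than a single reply. (Assumes the OpenAI Python SDK; the model name is just a placeholder.)

    # Sample one prompt repeatedly and tally the answers instead of
    # trusting a single reply. Model name below is a placeholder.
    from collections import Counter
    from openai import OpenAI

    client = OpenAI()  # reads OPENAI_API_KEY from the environment
    prompt = ("I want to wash my car. The car wash is 50 meters away. "
              "Should I walk or drive?")

    answers = Counter()
    for _ in range(20):
        resp = client.chat.completions.create(
            model="gpt-4o-mini",  # placeholder
            messages=[{"role": "user", "content": prompt}],
            temperature=1.0,      # keep normal sampling noise
        )
        text = resp.choices[0].message.content.lower()
        # crude bucketing of the reply
        if "walk" in text and "drive" not in text:
            answers["walk"] += 1
        elif "drive" in text and "walk" not in text:
            answers["drive"] += 1
        else:
            answers["other/both"] += 1

    print(answers)  # a distribution, not a verdict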

Maxion 2 hours ago | parent

The other funny thing is thinking that the answer the LLM produces is wrong. It is not; it is entirely correct.

The question: > I want to wash my car. The car wash is 50 meters away. Should I walk or drive?

The question is nonsensical. If the reason you want to go to the car wash is to help your buddy Joe wash his car, you SHOULD walk. Nothing in the question reveals why you want to go to the car wash, or even that you want to go there or are asking for directions there.

drawfloat 2 hours ago | parent | next

The statement explicitly says you want to wash your car. Maybe it's not just LLMs that struggle with a fairly basic question...

ninjagoo 2 hours ago | parent | prev | next

> The question is nonsensical.

Sure, from a pure logic perspective the second sentence is not connected to the first, so drawing logical conclusions isn't feasible.

In everyday human language, though, the meaning is plain, and most people would get it right. Even paid versions of LLMs, being language machines rather than logic machines, get it right in the average human sense.

As an aside, it's an interesting thought exercise to wonder how much the first AI winter resulted from going down the strict logic path versus the current probabilistic path.

TZubiri 2 hours ago | parent | prev

> I want to wash MY car

> you want to go to the car wash is to help your buddy Joe wash HIS car

Nope, the question is pretty clear. However, I will grant that it's a question that would only come up when "testing" the AI, rather than one that might genuinely arise.