pu_pe 2 hours ago
Of all the conceptual mistakes people make about LLMs, one that needs to die very fast is the assumption that you can test what a model "knows" by asking it a question. This whole thread is people asking different models a question once and reporting the particular answer they got, which is the mental model you would use for judging whether a person knows something or not.
Maxion 2 hours ago | parent
The other funny thing is thinking that the answer the LLM produces is wrong. It is not; it is entirely correct. The question:

> I want to wash my car. The car wash is 50 meters away. Should I walk or drive?

is nonsensical. If the reason you want to go to the car wash is to help your buddy Joe wash his car, you SHOULD walk. Nothing in the question reveals why you want to go to the car wash, or even that you want to go there at all or are asking for directions.