thenoblesunfish 4 hours ago

Okay, funny. What does it prove? Is this a more general issue? How would you make the model better?

Jean-Papoulos 4 hours ago | parent | next [-]

It proves that this is not intelligence. This is autocomplete on steroids.

hugh-avherald 3 hours ago | parent [-]

Humans make very similar errors, possibly even the exact same error, from time to time.

gitaarik 3 hours ago | parent | prev | next [-]

We make the model better by training it, and now that this issue has come up we can update the training ;)

cynicalsecurity 4 hours ago | parent | prev | next [-]

It proves LLMs always need context. They have no idea where your car is. Is it already at the car wash, and you just stepped over to the gas station briefly to pay for the wash? Or is the car still at your home?

It proves LLMs are not brains; they don't think. This question will be used to train them, and "magically" they'll get it right next time, creating an illusion of "thinking".

ahtihn 3 hours ago | parent [-]

> They have no idea where your car is.

They could either ask for clarification or state their assumption before answering.

S3verin 3 hours ago | parent | prev [-]

For me this is just another hint at how careful one should be when deploying agents. They behave very unintuitively.