thenoblesunfish, 4 hours ago
Okay, funny. What does it prove? Is this a more general issue? How would you make the model better?
Jean-Papoulos, 4 hours ago
It proves that this is not intelligence. This is autocomplete on steroids.

gitaarik, 3 hours ago
We make the model better by training it, and now that this issue has come up, we can update the training ;)
cynicalsecurity, 4 hours ago
It proves LLMs always need context. They have no idea where your car is. Is it already at the car wash, and you're simply coming back from the gas station, where you went briefly to pay for the wash? Or is the car still at home? It proves LLMs are not brains; they don't think. This question will be used to train them, and "magically" they'll get it right next time, creating an illusion of "thinking".

S3verin, 3 hours ago
For me this is just another hint of how careful one should be when deploying agents. They behave very unintuitively.