totetsu 2 hours ago

But what is it about this specific question that puts it at the edge of what an LLM can do? Perhaps it's that the question semantically leads toward a certain type of discussion, so statistically a weighing-of-pros-and-cons discussion will be generated with high probability. Seeing why that discussion is pointless requires a logical model of the world, and that is implicitly so easy to grasp for most humans that it goes unstated, so it's statistically unlikely to be generated.
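
A toy sketch of that intuition (made-up probabilities, not a real model): if training text maps this kind of question overwhelmingly onto a pros-and-cons template, sampling will almost never surface the short "the question answers itself" reply, even though it exists in the distribution:

    import random

    # Hypothetical next-continuation distribution, for illustration only:
    # the pros-and-cons template dominates; the "pointless question"
    # observation barely appears because humans rarely bother to state it.
    continuations = {
        "weigh the pros and cons of each option": 0.97,
        "note that the question answers itself": 0.03,
    }

    def sample(dist):
        # Draw one continuation in proportion to its probability.
        r = random.random()
        total = 0.0
        for text, p in dist.items():
            total += p
            if r < total:
                return text
        return text  # guard against floating-point rounding

    hits = sum(sample(continuations) == "note that the question answers itself"
               for _ in range(10_000))
    print(f"rare answer sampled {hits} / 10000 times")  # ~300, i.e. seldom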

grey-area an hour ago

The answer is quite simple:

It’s not in the training data.

These models don’t think.

GeoAtreides 14 minutes ago

No, no, in this case that's the thing: it is in the training data,

just heavily (heavily!) biased towards walking.

grey-area 6 minutes ago

This particular situation is not in the training data, though I'm sure it will be soon, to try to shore up claims of 'reasoning'.

conductr 2 hours ago

> that is implicitly so easy to grasp for most humans

I feel like this is the trap. You're trying to compare it to a human; everyone seems to want to do that. But it's plain to see that LLMs are still quite far from being human. They can be convincing at the surface level, but there's a ton of nuance that just shouldn't be expected. It's a tool that's been tuned, and with that tuning some models will do better than others, but expecting it to simply get this right and be more human is unrealistic.