totetsu 2 hours ago
But what is it about this specific question that puts it at the edge of what an LLM can do? Is it that the question semantically leads toward a certain type of discussion, so a pros-and-cons weighing is statistically very likely to be generated, while the logical model of the world needed to see why that discussion is pointless is something so easy for most humans to grasp that it goes unstated, and is therefore statistically unlikely to be generated?
grey-area an hour ago
The answer is quite simple: it's not in the training data. These models don't think.
conductr 2 hours ago
> that is implicitly so easy to grasp for most humans

I feel like this is the trap: you're trying to compare it to a human. Everyone seems to want to do that, but it's quite easy to see that LLMs are still far from being human. They can be convincing at the surface level, but there's a ton of nuance that just shouldn't be expected. It's a tool that's been tuned, and with that tuning some models will do better than others, but expecting it to simply get this right and be more human is unrealistic.