Night_Thastus 3 hours ago
It's...interesting, but I feel like people keep forgetting that LLMs like Claude don't really...think(?). Or learn. Or know what 'corn' or a 'tractor' is. They don't really have any memory of past experiences or a working memory of some current state. They're (very impressive) next-word predictors. If you ask it 'is it time to order more seeds?' and the internet is full of people answering 'no' - that's the answer it will give. It can't actually understand how many seeds are currently on hand, the season, how much land there is, etc., and do the math itself to determine whether an order is actually needed. You can babysit it and engineer the prompts to be as leading as possible toward the answer you want it to give - but that's about it.
jablongo 2 hours ago
I think you could have credibly said this for a while during 2024 and earlier, but there is now a lot of research indicating that LLMs are more than the stochastic parrots some researchers claimed they were early on. Souped-up versions of LLMs have performed at the gold-medal level at the IMO, which should give you pause before dismissing them. "It can't actually understand how many there currently are, the season, how much land, etc, and do the math itself to determine whether it's actually needed or not" --- modern agents can actually do this: instead of predicting the answer from memorized text, they emit a structured tool call, the runtime executes real code against real data (inventory, acreage, seeding rates), and the result goes back into context.
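A minimal sketch of that loop, assuming a Python runtime with the model stubbed out - the farm numbers, check_seed_inventory, and fake_model are all made up for illustration, not any particular vendor's API:

    import json

    # Hypothetical farm state -- in a real agent this would come from an
    # inventory database, sensors, or the user, not from training data.
    FARM_STATE = {
        "acres_to_plant": 120,
        "seeds_per_acre": 34_000,   # rough corn seeding rate, illustrative only
        "seeds_on_hand": 3_500_000,
    }

    def check_seed_inventory() -> str:
        """Tool: do the actual arithmetic the model is being asked about."""
        needed = FARM_STATE["acres_to_plant"] * FARM_STATE["seeds_per_acre"]
        shortfall = needed - FARM_STATE["seeds_on_hand"]
        return json.dumps({
            "seeds_needed": needed,
            "seeds_on_hand": FARM_STATE["seeds_on_hand"],
            "order_more": shortfall > 0,
            "shortfall": max(shortfall, 0),
        })

    TOOLS = {"check_seed_inventory": check_seed_inventory}

    def fake_model(messages: list[dict]) -> dict:
        """Stand-in for the LLM call. Given a tool schema, a real model would
        emit a similar structured tool call instead of guessing a number."""
        last = messages[-1]
        if last["role"] == "user":
            return {"role": "assistant", "tool_call": "check_seed_inventory"}
        # After seeing the tool result, summarize it.
        result = json.loads(last["content"])
        verdict = "yes, order more" if result["order_more"] else "no, stock is sufficient"
        return {"role": "assistant",
                "content": f"{verdict} (need {result['seeds_needed']:,}, "
                           f"have {result['seeds_on_hand']:,})"}

    def run_agent(question: str) -> str:
        messages = [{"role": "user", "content": question}]
        while True:
            reply = fake_model(messages)
            if "tool_call" in reply:                  # model asked for a tool
                output = TOOLS[reply["tool_call"]]()  # run real code
                messages.append({"role": "tool", "content": output})
            else:
                return reply["content"]

    print(run_agent("Is it time to order more seeds?"))

The point is that the arithmetic happens in ordinary code the agent invokes; the model's job is deciding which tool to call and interpreting the result, not reciting whatever answer appeared most often online.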