▲ fl7305 11 hours ago
> LLMs don’t really reason

Do you have a test for this? Or is it based on the presumption that reasoning skills cannot evolve, that they can only be the result of "intelligent design"?
▲ jbritton 10 hours ago | parent [-]
I have spent many hours with them on coding tasks. As things currently stand, once context or complexity reaches a certain point they become completely incapable of solving problems, and that point can be reached on very simple things. They appear completely brain dead at times, although they are magnificent liars at making you think they understand the problem. That said, I recently got ChatGPT 5 to solve a problem in a couple of hours that Claude Sonnet 4 was simply never going to solve, so they are improving. I don’t know the limits. I’m more hopeful that a feedback loop with specialized agents will take things much further.

I’m extremely skeptical that bigger context windows and larger models are going to get us reasoning. The skepticism comes from observation. Clearly no one knows how thinking actually works.

I don’t know how to address the evolve part. LLMs don’t directly mutate and face selective pressure the way living organisms do. Maybe a simulation could be made to do that.