griomnib 4 days ago
Anthropic has a vested interest in people thinking Claude is reasoning. However, in coding tasks I've been able to catch it directly regurgitating Stack Overflow answers (literally, a Google search turns up the same code). Given that coding is supposed to be Claude's strength, and it's clearly just parroting web data, I'm not seeing any sort of "reasoning". LLMs may be useful, but they don't think. They've already plateaued, and given the absurd energy requirements I think they will prove to be far less impactful than people expect.
DiogenesKynikos 4 days ago | parent
The claim that Claude is just regurgitating answers from Stack Overflow is not tenable if you've spent time interacting with it. You can give Claude a complex, novel problem, and it will give you a reasonable solution, which it will be able to explain to you and discuss with you.

You're getting hung up on the fact that LLMs are trained on next-token prediction. I could equally dismiss human intelligence: "The human brain is just a biological neural network adapted to maximize the chance of producing successful offspring." Sure, but the way it solves that task is clearly intelligent.
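To be concrete about what "trained on next-token prediction" actually means: the loss only rewards guessing token i+1 given tokens 0..i, and everything else emerges from optimizing that. A minimal sketch in PyTorch, with illustrative stand-ins (an embedding plus a linear head instead of a real transformer, and random integers instead of tokenized text):

    import torch
    import torch.nn.functional as F

    vocab_size, dim = 256, 64
    embed = torch.nn.Embedding(vocab_size, dim)   # toy "model": just an embedding...
    head = torch.nn.Linear(dim, vocab_size)       # ...plus a projection back to the vocab

    tokens = torch.randint(0, vocab_size, (1, 32))  # stand-in for a tokenized text snippet
    hidden = embed(tokens[:, :-1])                  # inputs: every token except the last
    logits = head(hidden)                           # one predicted distribution per position
    loss = F.cross_entropy(logits.reshape(-1, vocab_size),
                           tokens[:, 1:].reshape(-1))  # targets: every token except the first
    loss.backward()                                  # this one scalar is the entire training signal

The objective is that simple; the argument is over what kind of machinery the model has to build internally to drive that loss down on all of human text.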