DiogenesKynikos | 4 days ago
The claim that Claude is just regurgitating answers from Stack Overflow is not tenable if you've spent time interacting with it. You can give Claude a complex, novel problem, and it will give you a reasonable solution, which it will be able to explain to you and discuss with you. You're getting hung up on the fact that LLMs are trained on next-token prediction. I could equally dismiss human intelligence: "The human brain is just a biological neural network adapted to maximize the chance of producing successful offspring." Sure, but the way it solves that task is clearly intelligent.
griomnib | 4 days ago | parent
I've literally spent hundreds of hours with it. I'm mystified why so many people reach for the "you're holding it wrong" explanation when somebody points out real limitations.