UltraSane | 6 days ago
"It sounds like you're mostly just talking to yourself" No, Claude does know a LOT more than I do about most things and does push back on a lot of things. Sometimes I am able to improve my reasoning and other times I realize I was wrong. Trust me, I am aware of the linear algebra behind the curtain! But even when you mostly understand how they work the best LLMs today are very impressive. And latent spaces fundamentally new way to index data. | |||||||||||||||||||||||||||||||||||||||||||||||||||||||||||
furyofantares | 6 days ago
You can talk to yourself while reading books and searching the web for information. I don't think the fact that you're learning from information the LLM is pulling in means you're really conversing with it. I do find LLMs very useful and am extremely impressed by them, and I'm not saying you can't learn things this way at all. But there's nobody else on the line with you.

And while they will emit text that contradicts what you say if it's wrong enough, they've been heavily trained to match where you're steering things, even if you're trying to avoid doing any steering. You can mostly understand how these models work and still end up in a feedback loop that you don't realize is a feedback loop.

I think this might even be more likely the more the thing has to offer you in terms of learning: the less qualified you are on the subject, the less you can tell when it's subtly yes-and'ing you.
ceejayoz | 6 days ago
> No, Claude does know a LOT more than I do about most things…

Plenty of people can confidently act like they know a lot without really having that knowledge.