wrs · 5 hours ago
You're missing a very important distinction here: they don't know things, they generate plausible things. The better the training, the more similar those become, but they never converge to identity. It's like if you asked me to explain the S3 API and I weren't allowed to say "I don't know": I'd get pretty close, but you wouldn't know what I got wrong until you read the docs. The ability of LLMs to search out the real docs on something and digest them is the fix for this, but don't start thinking you (and the LLM) no longer need the real docs. That said, it has always been a human engineer's superpower to know just enough about everything to know what you need to look up, and LLMs are already pretty darn good at that, which I think is your real point.
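
To make that concrete, here's a minimal sketch of the failure mode, assuming boto3 (the AWS SDK for Python); the bucket and key names are made up. The commented-out call is the kind of plausible-sounding method a model might generate; the call below it is what the docs actually specify.

    import boto3

    s3 = boto3.client("s3")

    # Plausible-sounding guess -- boto3 clients have no method named
    # "download", so this would raise AttributeError at runtime:
    # s3.download("my-bucket", "report.csv")

    # What the real docs specify: download_file(Bucket, Key, Filename).
    s3.download_file("my-bucket", "report.csv", "/tmp/report.csv")

You only catch the difference by checking the reference, which is exactly the point.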