lukev 4 days ago:
If we are going to create a binary of "understand LLMs" vs "do not understand LLMs", then one way to do it is as you describe: fully comprehending the latent space of the model so you know "why" it's giving a specific output. This is likely (certainly?) impossible, so it's not a useful definition. Meanwhile, I have observed a very clear binary among people I know who use LLMs: those who treat it like a magic AI oracle, vs those who understand the autoregressive model, the need for context engineering, the fact that outputs are somewhat random (hallucinations exist), and the need to set the temperature correctly...
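To make the temperature point concrete, here is a minimal, vendor-agnostic sketch of how temperature reshapes the next-token distribution during autoregressive sampling; the token names and logit values below are made up purely for illustration.

    import math
    import random

    def sample_next_token(logits: dict[str, float], temperature: float = 1.0) -> str:
        """Sample one token from a logit distribution, scaled by temperature."""
        if temperature <= 0:
            # Greedy decoding: always pick the highest-logit token.
            return max(logits, key=logits.get)
        # Softmax over temperature-scaled logits.
        scaled = {tok: l / temperature for tok, l in logits.items()}
        m = max(scaled.values())
        exps = {tok: math.exp(v - m) for tok, v in scaled.items()}
        total = sum(exps.values())
        probs = {tok: e / total for tok, e in exps.items()}
        return random.choices(list(probs), weights=list(probs.values()))[0]

    # Hypothetical logits for the next token. Low temperature concentrates
    # probability on the top token (more deterministic output); high temperature
    # flattens the distribution (more varied, and more prone to odd completions).
    logits = {"Paris": 5.2, "Lyon": 3.1, "Berlin": 1.0}
    print(sample_next_token(logits, temperature=0.2))
    print(sample_next_token(logits, temperature=1.5))

The point is that the "randomness" people attribute to a magic oracle is just this sampling step, and temperature is one of the knobs a user can actually control.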
kiitos 4 days ago (in reply):
> If we are going to create a binary of "understand LLMs" vs "do not understand LLMs"

"We" are not; the comment I quoted and replied to did! I'm not inventing strawmen to yell at, I'm responding to claims made by others!