datsci_est_2015 4 hours ago
There’s a large difference between understanding precisely what some code does and understanding what code is intended to do. It’s why “what happens when you begin typing into your web browser’s address bar?” is such a powerful question for weeding out low-quality interview candidates. I’ve never worked at Google, but I can talk about how they probably handle the incoming requests. I’ve never worked on Windows OS-level software, but I can start talking about input buffers. Kind of reminds me of WIRED’s “5 Levels” series… Anyway, my point is that prompts are non-deterministic, and there’s no way of inferring what code output by an LLM is intended to do, because that’s not how LLMs work.
jcgrillo 4 hours ago | parent
> because that’s not how LLMs work

It's almost impossible to have a rational discussion about the effects of this technology because this point is so easily lost. Even super smart, credentialed, expert people easily (and often!) fall into the trap of anthropomorphizing the bot because it makes human noises.

It's really important to remember the mechanical principles underlying its function. In that respect it's no different from any other computer program; the difference is the psychological hold it gets on the user. There is no intention behind its actions, but it's very easy to hallucinate one, because with every other thing that speaks human language there is some intention behind the words and actions.
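The "mechanical principles" point can be made concrete with a toy sketch. The core generation loop of an LLM is just repeated sampling of a next token from a probability distribution; a real model computes that distribution with a neural network, but the loop itself contains nothing that represents intent. All names, tokens, and probabilities below are invented purely for illustration:

```python
import random

# Toy "language model": a lookup table from context to a probability
# distribution over next tokens. A real LLM computes this distribution
# with a trained network, but the sampling step works the same way.
TOY_MODEL = {
    ("the",): {"cat": 0.5, "dog": 0.3, "program": 0.2},
    ("the", "cat"): {"sat": 0.6, "ran": 0.4},
    ("the", "dog"): {"sat": 0.3, "ran": 0.7},
    ("the", "program"): {"crashed": 0.8, "ran": 0.2},
}

def sample_next(context, rng):
    """Draw one next token from the model's distribution for this context."""
    dist = TOY_MODEL[tuple(context)]
    tokens, weights = zip(*dist.items())
    return rng.choices(tokens, weights=weights, k=1)[0]

def generate(prompt, steps, seed):
    """Autoregressive loop: append one sampled token per step."""
    rng = random.Random(seed)
    tokens = list(prompt)
    for _ in range(steps):
        tokens.append(sample_next(tokens, rng))
    return tokens

# The same prompt may yield different continuations under different
# seeds -- there is no "intent" anywhere, only a sampled distribution.
print(generate(["the"], 2, seed=1))
print(generate(["the"], 2, seed=2))
```

Nothing in this loop knows or cares what the output "means"; the only thing that changes between runs is which branch of the distribution the random draw happens to land on.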