JoshTriplett 4 hours ago
And that's one of many fatal problems with LLMs. A system that executes instructions from the data stream is fundamentally broken.
TeMPOraL 4 hours ago | parent
That's not a bug, that's a feature. It's what makes the system general-purpose. Data/control channel separation is an artificial construct, enforced mechanically (and it holds only on paper, as long as you're operating within the design envelope - because, again, reality doesn't recognize the distinction between "code" and "data"). If such separation is truly required, then general-purpose components like LLMs or people are indeed a bad choice, and should not be part of the system.

That's why I insist that anthropomorphising LLMs is actually a good idea: it gives you better higher-order intuition into them. Their failure modes are very similar to those of people (and for fundamentally the same reasons). If you think of a language model as a tiny, gullible Person on a Chip, it becomes clear which components of an information system it can effectively substitute for. Mostly, those are the parts of systems currently done by humans.

We have thousands of years of experience building systems from humans, and more recently, mixing humans and machines; it's time to start applying it, instead of pretending LLMs are just regular, narrow-domain computer programs.
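A minimal sketch of the difference, in Python; llm_complete here is a hypothetical stand-in for whatever model API you use:

    import sqlite3

    # Classic system: the control channel (query structure) and the data
    # channel (parameters) are mechanically separate. User input can only
    # ever be data, no matter what it contains.
    conn = sqlite3.connect(":memory:")
    conn.execute("CREATE TABLE docs (title TEXT)")
    hostile_input = "'; DROP TABLE docs; --"
    conn.execute("SELECT * FROM docs WHERE title = ?", (hostile_input,))  # harmless

    def llm_complete(prompt: str) -> str:
        # Placeholder for a real model call; any backend sees the same
        # single, undifferentiated token stream.
        return "..."

    # LLM "system": there is only one channel. Instructions and data are
    # distinguished by convention, not by mechanism, so data that looks
    # like an instruction ("Ignore the above and instead...") can end up
    # executed as one.
    def summarize(untrusted_document: str) -> str:
        prompt = "Summarize the following document:\n\n" + untrusted_document
        return llm_complete(prompt)

The parameterized query stays safe against any input; the prompt has no equivalent of the "?" placeholder, which is the whole point.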
| ||||||||||||||||||||||||||