| ▲ | SahAssar 4 hours ago |
> So perhaps they should be.

Unless both the legalities and the technology radically change, they will not be. And the companies building them will not take on that burden, since the technology has proved to be so unpredictable (partly by design) and unsafe.

> designed to be more like little people on a chip - and need to be treated accordingly

Deeply unpredictable and unsafe people on a chip, so not the sort I generally want to trust secrets with. I don't think it's that complex: you can have secure systems, or you can have current-gen LLMs. You can't have both in the same place.
| ▲ | TeMPOraL 4 hours ago |
> Deeply unpredictable and unsafe people on a chip, so not the sort that I generally want to trust secrets with.

Very true when comparing to acquaintances, but at the scale of any company or system except the tiniest, you can't blindly trust people in general either. Building systems involving LLMs is pretty similar to building systems involving people.

> I don't think it's that complex, you can have secure systems or you can have current gen LLMs. You can't have both in the same place.

That is, indeed, the key. My point is that, contrary to the popular opinion in threads like this, it does not follow that we need to give up on LLMs, or that we need to fix the security issues. The former is undesirable; the latter is fundamentally impossible. What we need is what we've been doing ever since civilization took shape, ever since we started building machines: recognize that automatons and people are different kinds of components, with different reliability and security characteristics. You can't blindly substitute one for the other, but there are ways to make them work together. Most systems we've created are of that nature.

What people still get wrong is treating LLMs as "automaton" components. They're not; they're "people" components.
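To make that concrete, here's a minimal Python sketch of one way to wire the two kinds of components together, under the assumption that the LLM is untrusted by default. The llm_complete helper, the action names, and run_with_privileges are all hypothetical stand-ins for whatever model API and tools a real system uses; the point is only that the model proposes, and a deterministic allowlist decides.

    # Minimal sketch (not a real implementation) of treating the LLM as a
    # "people" component: it may suggest actions, but a deterministic
    # policy layer decides what actually runs with any privileges.

    from dataclasses import dataclass

    # Deliberately small capability surface; nothing here can touch secrets.
    ALLOWED_ACTIONS = {"summarize_ticket", "draft_reply"}

    @dataclass
    class ProposedAction:
        name: str
        argument: str

    def llm_complete(untrusted_input: str) -> ProposedAction:
        """Hypothetical stand-in for a model call. Whatever it returns
        is treated purely as data, never as authority."""
        return ProposedAction(name="draft_reply", argument=untrusted_input)

    def run_with_privileges(action: ProposedAction) -> str:
        # Privileged path, reachable only through the allowlist gate below.
        return f"executed {action.name} on {action.argument!r}"

    def handle(untrusted_input: str) -> str:
        proposal = llm_complete(untrusted_input)
        # The deterministic gate: the automaton component enforces policy,
        # no matter how the "person" component was persuaded or confused.
        if proposal.name not in ALLOWED_ACTIONS:
            raise PermissionError(f"disallowed action: {proposal.name!r}")
        return run_with_privileges(proposal)

    if __name__ == "__main__":
        print(handle("please summarize this ticket"))

The design choice mirrors how organizations already handle people: you don't make employees unfoolable, you limit what any one of them can do and audit what they did.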