▲ jraph 5 days ago
Early internet stuff was designed within a network of trusted organizations (universities, labs...). Security wasn't much of a concern, but that was reasonable given the setting in which it was designed.

This AI stuff? No excuse. It should have been designed with security and privacy in mind given the setting in which it was born. The conditions have changed. The threat model is not the same. And this is well known. Security is hard, so there's some excuse, but it is reasonable to expect basic levels.
▲ brookst 5 days ago | parent [-]
It’s really not. AI, like every other tech advance, was largely created by enthusiasts carried away with what could be done, not by top-down design that included all best practices. It’s frustrating to security people, but the reality is that security doesn’t become a design consideration until the tech has proven its utility, which means there are always insecure implementations of early tech.

Does it make any sense that payphones would give free calls to anyone blowing a whistle into them? It was an obvious design flaw to treat sounds from the microphone the same as the network's own control tones; it would have been trivial to design more secure signaling. But nobody saw the need until the tech was deployed at scale.

It should be different, sure. But that’s just saying human nature “should” be different.