ArcHound 3 days ago
I'm sorry, but the rule of two is just not enough, not even as a rule of thumb. We know how to work with security risks; the issue is that they depend on both the business and the technical details. This can actually do a lot of harm: security teams now have to dispel this "great approach" to ignoring security, backed by "a research paper they read". Please don't try to reinvent the wheel, and if you do, please learn about the current state of the field first (Chesterton's fence and all that).
jFriedensreich 3 days ago | parent
Can you explain what you mean? How is applying Chesterton's fence to AI security helpful here? Are you just talking about not removing the non-AI security architecture of the software itself? I don't think anyone ever proposed that.
| ||||||||||||||||||||||||||