nostrademons | 5 hours ago
The thing is that real security isn't something a checklist can guarantee. You have to build it into the product architecture and into the mindset of every engineer who works on the project. At every single stage, you have to be thinking "How do I minimize this attack surface? What inputs might come in that I don't expect? What are the ways this code might be exploited that I haven't thought about? What privileges does it have that it doesn't need?"

I can almost guarantee you that your ordinary feature developer working on a deadline is not thinking about that. They're thinking about how they can ship on time with the features the salesguy has promised the client. Inverting that - thinking about what "features" you're shipping that you haven't promised the client - costs a lot of money that isn't necessary for making the sale.

So when the reinsurance company mandates a checklist, they get a checklist, with all the boxes dutifully checked off. Any suitably diligent attacker will still be able to get in, but now there's a very strong incentive not to report data breaches and have your insurance premiums go up or government regulation come down. The ecosystem settles into an equilibrium of parasites (hackers, who have silently pwned a wide variety of computer systems and can set up systems for their advantage) and blowhards (executives who claim their software has security guarantees that it doesn't really have).
bootsmann | 4 hours ago
> but now there's a very strong incentive to not report data breaches and have your insurance premiums go up or government regulation come down

I would argue the opposite is true. Insurance doesn't pay out if you don't self-report in time. Big data breaches usually get discovered when the hacker tries to peddle the data on a darknet marketplace, so not reporting is gambling that this won't happen.
RGamma | 4 hours ago
There need to be much more powerful automated tools, and they need to meet critical systems where they are. Not very long ago, actual security existed basically nowhere (except air-gapping, most of the time ;)). And today it still mostly doesn't, because we can't properly isolate software and system resources (and we're very far away from routinely proving actual security). Mobile is much better by default, but limited in other ways. Heck, I could be infected with something nasty and never know about it: the surface to surveil is far too large and constantly changing.

I gave up configuring SELinux years ago because it was too time-consuming. I'll admit that much has changed since then, and I want to give it a go again, maybe with a simpler solution to start with (e.g. never grant full filesystem access and network for anything). We must gain sufficiently powerful (and comfortable...) tools for this. The script in question should never have had the kind of access it did.
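The "never grant full filesystem access and network for anything" idea doesn't actually require SELinux policy writing; systemd's per-service sandboxing directives get much of the way there. A minimal sketch (the service name, script path, and writable directory are placeholders, not from the thread):

```ini
# Hypothetical unit: run a batch script with no network and an
# (almost entirely) read-only view of the filesystem, using
# standard systemd sandboxing directives.
[Unit]
Description=Nightly report script, sandboxed

[Service]
Type=oneshot
ExecStart=/usr/local/bin/nightly-report.sh
PrivateNetwork=yes              # private network namespace: no network at all
ProtectSystem=strict            # entire filesystem read-only for this service
ProtectHome=yes                 # /home, /root, /run/user appear empty
ReadWritePaths=/var/lib/reports # the one directory it may write to
NoNewPrivileges=yes             # cannot escalate via setuid binaries
```

The trade-off versus SELinux is coarser granularity, but each directive is a one-liner with documented semantics, which makes the "comfortable tools" bar much easier to clear.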
w10-1 | 4 hours ago
You are asserting that security has to be hand-crafted. That is a very strong claim, if you think about it. Is it not possible to have secure software components that only work when assembled in secure ways? Why not? Conversely, what security claims about a component can one rely upon, without verifying it oneself? How would a non-professional verify claims of security professionals, who have a strong interest in people depending upon their work and not challenging its utility? | ||||||||
baxtr | 3 hours ago
You're making many assumptions that fit your worldview. I can assure you that insurers don't work like that. If underwriting were as sloppy as you think it is, insurance as a business model wouldn't work.