Retr0id 3 days ago

Protecting secrets via hardware is always "decorative" in some sense (outside of things like QKD); the question is just how much time and work it takes to extract them, and the probability of destroying the secrets/device in the process.

But for software systems under a software threat model, bug-free implementations are possible, in theory at least.

rossjudson 2 days ago | parent | next [-]

This is a reasonable take.

Perfect security isn't a thing. Hardware and software engineers are in the business of making compromise harder, but their eyes are wide open about "perfection".

Confidential Computing is evolving, and it's steadily gotten much more difficult to bypass the security properties.

pjc50 3 days ago | parent | prev [-]

I don't follow this: the software must necessarily run on some hardware, so while the software may be provably secure, that doesn't help if an attacker can just pull key material off the bus?

formerly_proven 3 days ago | parent [-]

Soldering wires to the LPC bus is not a software threat model.

immibis 2 days ago | parent [-]

But it is a threat model. "This system is unhackable, as long as the user doesn't do the thing that hacks it" is not very useful.

bccdee 2 days ago | parent [-]

Okay, nothing is secure against every threat model. The only way to secure against rubber-hose cryptanalysis is by hiring a team of bodyguards, and even that won't protect you from LEOs or nation-state actors. Your threat model should be broad enough to provide some safety, but it also needs to be narrow enough that you can actually do something about it. At the software level, there's only so much you can do about hardware integrity problems. The rest, you delegate to the security team at your data centre.

> "This system is unhackable, if the user doesn't do the thing that hacks it" is not very useful.

It's the best you're gonna get, bud. Nothing's "unhackable"—you just gotta make "the thing that hacks it" hard to do.