| ▲ | jll29 3 days ago |
| Given yesterday's article on here about the issues of PGP, it looks like all software encryption short of a one-time pad is decorative. I like the idea of the key being part of the CPU (comment below); does anyone know why Intel/ARM/AMD have not picked up this IBM feature? |
|
| ▲ | tptacek 2 days ago | parent | next [-] |
| The logic you're using here is: if PGP is unsafe, all cryptography must be unsafe too? No, that doesn't hold, at all. |
|
| ▲ | Retr0id 3 days ago | parent | prev | next [-] |
| Protecting secrets via hardware is always "decorative" in some sense (outside of things like QKD); the question is just how much time and work it takes to extract them, and the probability of destroying the secrets/device in the process. But for software systems under a software threat model, bug-free implementations are possible, in theory at least. |
| |
| ▲ | rossjudson 2 days ago | parent | next [-] | | This is a reasonable take. Perfect security isn't a thing. Hardware/Software engineers are in the business of making compromise harder, but eyes are wide open about "perfection". Confidential Computing is evolving, and it's steadily gotten much more difficult to bypass the security properties. | |
| ▲ | pjc50 3 days ago | parent | prev [-] | | I don't follow this - the software must necessarily run on some hardware, so while the software may be provably secure that doesn't help if an attacker can just pull key material off the bus? | | |
| ▲ | formerly_proven 3 days ago | parent [-] | | Soldering wires to LPC is not a software threat model | | |
| ▲ | immibis 2 days ago | parent [-] | | but it is a threat model. "This system is unhackable, if the user doesn't do the thing that hacks it" is not very useful. | | |
| ▲ | bccdee 2 days ago | parent [-] | | Okay, nothing is secure against every threat model. The only way to secure against rubber hose cryptanalysis is by hiring a team of bodyguards, and even that won't protect you from LEOs or nation-state actors. Your threat model should be broad enough to provide some safety, but it also needs to be narrow enough that you can do something about it. At a software level, there's only so much you can do to deal with hardware integrity problems. The rest, you delegate to the security team at your data centre. > "This system is unhackable, if the user doesn't do the thing that hacks it" is not very useful. It's the best you're gonna get, bud. Nothing's "unhackable"—you just gotta make "the thing that hacks it" hard to do. |
|
|
|
|
|
| ▲ | dist-epoch 3 days ago | parent | prev | next [-] |
| What do you mean exactly? Both AMD/Intel have signed firmware, and both support hardware attestation, where they sign what they see with an AMD/Intel key and you can later check that signature. This is the basis of confidential VMs, where not even the machine physical owner can tamper with the VM in an undetectable way. |
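To make the attestation flow concrete, here is a toy sketch. Real hardware uses an asymmetric vendor key fused in at manufacture; a symmetric HMAC key stands in for it below, and all names (`VENDOR_KEY`, `measure`, `attest`) are hypothetical, not any vendor's actual API.

```python
import hashlib
import hmac

# Stand-in for the asymmetric device key burned in at manufacture,
# known only to the chip and (via certificates) to remote verifiers.
VENDOR_KEY = b"fused-in device key"  # hypothetical

def measure(firmware: bytes, vm_image: bytes) -> bytes:
    # The hardware hashes exactly what it loaded.
    return hashlib.sha256(firmware + vm_image).digest()

def attest(firmware: bytes, vm_image: bytes) -> tuple[bytes, bytes]:
    # A "quote": the measurement plus a signature over it.
    m = measure(firmware, vm_image)
    return m, hmac.digest(VENDOR_KEY, m, "sha256")

def verify(expected_measurement: bytes, quote: tuple[bytes, bytes]) -> bool:
    m, sig = quote
    sig_ok = hmac.compare_digest(hmac.digest(VENDOR_KEY, m, "sha256"), sig)
    return sig_ok and m == expected_measurement

fw, img = b"fw-v1", b"vm-image"
expected = measure(fw, img)
assert verify(expected, attest(fw, img))
# A tampered image produces a validly signed but *different* measurement,
# so the remote verifier rejects it -- that's the "undetectable" part.
assert not verify(expected, attest(fw, b"tampered-image"))
```

The point is that the physical owner can still tamper with the VM, but cannot forge a quote claiming they didn't.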
| |
| ▲ | evan_a_a 3 days ago | parent [-] | | I have bad news on that front. https://tee.fail/ | | |
| ▲ | fc417fc802 2 days ago | parent | next [-] | | > While the data itself is encrypted, notice how the values written by the first and third operation are the same. The fact that Intel and AMD both went with ECB leaves me with mild disbelief. I realize wrangling IVs in that scenario is difficult but that's hardly an excuse to release a product that they knew full well was fundamentally broken. The insecurity of ECB for this sort of task has been common knowledge for at least 2 decades. | | |
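The leak being described is ECB's defining property: identical plaintext blocks encrypt to identical ciphertext blocks. A toy sketch (a keyed hash stands in for a real block cipher, so this is illustrative only, not AES):

```python
import hmac

KEY = b"secret key material"  # hypothetical

def toy_ecb_encrypt(plaintext: bytes, block_size: int = 16) -> list[bytes]:
    # ECB: each block is encrypted independently under the same key,
    # with no IV or position-dependent input.
    blocks = [plaintext[i:i + block_size]
              for i in range(0, len(plaintext), block_size)]
    return [hmac.digest(KEY, b, "sha256")[:block_size] for b in blocks]

ct = toy_ecb_encrypt(b"A" * 16 + b"B" * 16 + b"A" * 16)

# Blocks 0 and 2 hold the same plaintext, so their ciphertexts match:
# an observer on the memory bus learns equality without decrypting anything.
assert ct[0] == ct[2]
assert ct[0] != ct[1]
```

This equality oracle is exactly what the quoted passage from the writeup is pointing at.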
| ▲ | rossjudson 2 days ago | parent | next [-] | | Google "intel sgx memory encryption engine". Intel's designers were fully aware of replay attacks, and early versions of SGX supported a hardware-based memory encryption engine with Merkle tree support. Remember that everything in security (and computation) is a tradeoff. The MEE turned out to be a performance bottleneck, and support got dropped. There are legitimate choices to be made here between threat models, and the resulting implications on the designs. There's not much new under the sun when it comes to security/cryptography/whatever (tm), and I recommend approaching the choices designers make with an open mind. | | |
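The replay protection the MEE provided can be sketched with a minimal Merkle tree over memory blocks (illustrative only; SGX's actual integrity tree structure differs):

```python
import hashlib

def h(data: bytes) -> bytes:
    return hashlib.sha256(data).digest()

def merkle_root(blocks: list[bytes]) -> bytes:
    # Hash each memory block, then combine pairwise up to a single root.
    level = [h(b) for b in blocks]
    while len(level) > 1:
        if len(level) % 2:
            level.append(level[-1])  # duplicate last node on odd levels
        level = [h(level[i] + level[i + 1])
                 for i in range(0, len(level), 2)]
    return level[0]

blocks = [b"block-%d" % i for i in range(8)]
# The root lives in on-die storage the attacker can't rewrite.
root = merkle_root(blocks)

# Replaying a stale-but-once-valid ciphertext for block 3 changes the
# recomputed root, so the rollback is detected before the data is used.
stale = list(blocks)
stale[3] = b"block-3-old-snapshot"
assert merkle_root(stale) != root
```

The performance cost is also visible here: every write must rehash a path to the root, which is the bottleneck the comment above alludes to.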
| ▲ | fc417fc802 2 days ago | parent [-] | | I agree with the sentiment but I'm struggling to see how this qualifies as a legitimate tradeoff to make. I thought the entire point of this feature was to provide assurances to customers that cloud providers weren't snooping on their VMs. In which case physically interdicting RAM in this manner is probably the first approach a realistic adversary would attempt. I can see where it prevents inadvertent data leaks but the feature was billed as protecting against motivated adversaries. (Or at least so I thought.) |
| |
▲ | dist-epoch 2 days ago | parent | prev [-] | | I don't think that's the issue. It seems it's the same memory address, so an address-based IV would have the same problem. You need a sequence number to solve this, but they have nowhere to store it. | |
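The distinction can be shown with a toy tweakable encryption (a keyed hash stands in for a real cipher; `enc` and its parameters are hypothetical):

```python
import hmac

KEY = b"memory encryption key"  # hypothetical

def enc(addr: int, counter: int, data: bytes) -> bytes:
    # Mix the physical address (and optionally a per-write counter)
    # into the keystream, XTS-tweak style.
    tweak = addr.to_bytes(8, "big") + counter.to_bytes(8, "big")
    keystream = hmac.digest(KEY, tweak, "sha256")[:len(data)]
    return bytes(a ^ b for a, b in zip(data, keystream))

# Address-only tweak (counter pinned to 0): writing the same value to
# the same address twice repeats on the bus -- the observed problem.
assert enc(0x1000, 0, b"secret") == enc(0x1000, 0, b"secret")

# A per-write counter breaks the repetition -- but the counter now has
# to live somewhere the attacker can't roll back, which is the catch.
assert enc(0x1000, 1, b"secret") != enc(0x1000, 2, b"secret")
```

So an address-derived IV defeats cross-address comparisons but not replays or repeats at a single address, matching the parent's point.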
| ▲ | fc417fc802 2 days ago | parent [-] | | Fair point, my ECB remark was misguided. But I think the broader point stands? I did acknowledge the difficulty of dealing with IVs here. It's the same issue that XTS faces but that operates under the fairly modest assumption that an adversary won't have long term continuous block level access to the running system. Whereas in this case interdicting the bus is one of the primary attack vectors so failing to defend against that seems inexcusable. |
|
| |
| ▲ | lxgr 3 days ago | parent | prev [-] | | Yes, trusted computing is empirically hard, but I haven't heard solid arguments either way on whether it's actually infeasible. |
|
|
|
| ▲ | lxgr 3 days ago | parent | prev | next [-] |
| What article? In any case, I'm curious to hear your argument for how "PGP has some implementation problems" (unsurprising to most people that have dared to look at its internals even briefly) implies "all non-information-theoretic cryptography is futile". |
|
| ▲ | maqp 2 days ago | parent | prev [-] |
Except 99% of one-time pad implementations fail on at least one criterion:

* Using CSPRNGs instead of HWRNGs to generate the pads,
* Trying to make it usable by sharing short entropy, thus reinventing stream ciphers,
* Sharing that short entropy over Diffie-Hellman or RSA,
* Failing to use unconditionally secure message authentication,
* Re-using pads,
* Forgetting to overwrite pads,
* Failing to distribute pads out-of-band via sneakernet, dead drops, or QKD.

OTP is also usually the first time someone dabbles in writing cryptographic code, so the implementations are full of footguns.
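A minimal sketch of pad handling that avoids two of those footguns (reuse and failure to overwrite). This is illustrative only: `os.urandom` stands in for a true HWRNG, pad distribution is assumed to have happened out-of-band, and unconditionally secure authentication is omitted entirely.

```python
import os

class OneTimePad:
    """Toy OTP endpoint. Both sides hold identical pad copies,
    exchanged out-of-band; each side tracks its own offset."""

    def __init__(self, pad: bytearray):
        self.pad = pad  # mutable, so consumed bytes can be destroyed
        self.offset = 0

    def xor_with_pad(self, data: bytes) -> bytes:
        # Encryption and decryption are the same XOR operation.
        if self.offset + len(data) > len(self.pad):
            raise RuntimeError("pad exhausted -- never reuse pad bytes")
        chunk = self.pad[self.offset:self.offset + len(data)]
        out = bytes(a ^ b for a, b in zip(data, chunk))
        # Overwrite consumed pad bytes so they can't be reused or leaked.
        for i in range(self.offset, self.offset + len(data)):
            self.pad[i] = 0
        self.offset += len(data)
        return out

pad = bytearray(os.urandom(64))          # HWRNG output in a real system
alice = OneTimePad(bytearray(pad))
bob = OneTimePad(bytearray(pad))
ct = alice.xor_with_pad(b"attack at dawn")
assert bob.xor_with_pad(ct) == b"attack at dawn"
```

Even this small sketch shows why OTP is hard to deploy: the pad must be as long as all traffic ever sent, and both sides must stay in lockstep on the offset.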