Decorative Cryptography (dlp.rip)
165 points by todsacerdoti 2 days ago | 48 comments
Muromec a day ago | parent | next [-]

I was dealing with a good old anti-tampering userspace library last week. They did everything right.

The process detects that it's traced (by asking the kernel nicely) and shuts down. So I patched the kernel, and now I can connect and poke around with gdb.

I can't put a software breakpoint because the process computes a checksum of its memory and jumps through a table index computed from a hash, so I had to put a hardware read watchpoint on the modified memory location, record who reads it, and patch the jump index to the right one.
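A minimal sketch of that watchpoint step using gdb's Python scripting, assuming a hypothetical address for the patched byte; the logging-only stop handler is how you record readers without halting the process:

    # Sketch (gdb Python API): log whoever reads a patched byte, without halting.
    # The address below is hypothetical; substitute the location you modified.
    import gdb

    class ReadLogger(gdb.Breakpoint):
        def stop(self):
            frame = gdb.selected_frame()
            # Record the reader's PC and enclosing function, then keep running.
            print("read at pc=0x%x in %s" % (frame.pc(), frame.name() or "?"))
            return False  # returning False means: do not stop the inferior

    # A read watchpoint on one byte of the modified memory (gdb uses a
    # hardware watchpoint where the target supports them).
    ReadLogger("*(unsigned char *) 0x555555558000",
               type=gdb.BP_WATCHPOINT, wp_class=gdb.WP_READ)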

Of course, there is another function that checksums the memory and runs the process into SIGSEGV. It has tons of obfuscated, confusing stuff, so I had to patch it with 'lol, return 0'.

And then I can finally use Frida to disable SSL pinning so I can mitmproxy it. It all took a week to bypass all the levels of obfuscation, find the actual thing I was looking for, and extract it. Can't imagine how much time the people at $securitycompanyname spent on adding all those levels of obfuscation and anti-debug. More than a week for sure. What was it doing? A custom HOTP.
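For context on the payoff: standard HOTP per RFC 4226 is only a few lines, and the custom variant was presumably a tweak on something roughly this size. A minimal sketch:

    # Standard HOTP (RFC 4226) for reference -- not the custom variant above.
    import hashlib
    import hmac
    import struct

    def hotp(secret: bytes, counter: int, digits: int = 6) -> str:
        mac = hmac.new(secret, struct.pack(">Q", counter), hashlib.sha1).digest()
        offset = mac[-1] & 0x0F                              # dynamic truncation
        code = struct.unpack(">I", mac[offset:offset + 4])[0] & 0x7FFFFFFF
        return str(code % 10 ** digits).zfill(digits)

    print(hotp(b"12345678901234567890", 0))  # RFC 4226 test vector: 755224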

It wasn't any better with actual secure boot 20 years ago, where the bootloader checksummed the whole firmware before transferring control. The bootloader itself was in ROM, and of course it had subtle logic bugs; you only need to find one, and the bootloader sits there in ROM, bugged forever.

nine_k a day ago | parent | next [-]

How many more amateur attempts did these layers thwart? Did its creators collect enough revenue before the crack was produced?

I suppose uncrackable software, in the sense of e.g. license protection, cannot exist. Software is completely beholden to hardware, and known hardware can be arbitrarily emulated, and there's nowhere to hide any tamper-resistant secret bits. Only in a combination with locked-down, uncrackable hardware can properly designed software without critical bugs remain uncrackable; see stuff like yubikeys. Similarly, communication can remain uncrackable as long as the secret bits (like a private key) remain secret.

Muromec a day ago | parent [-]

I'm not cracking anything; the software is free to use. I just wanted to mitmproxy it to see the requests and figure out some custom crypto inside it.

DenisM a day ago | parent | prev [-]

How was your experience with Xbox? I heard it was rather watertight?

Muromec a day ago | parent [-]

Why would I ever pay for anything microsoft made?

mmoustafa 2 days ago | parent | prev | next [-]

> All encryption is end-to-end, if you’re not picky about the ends.

This reminds me of how Apple iMessage is E2E encrypted, but Apple runs on-device content detection that pings their servers, which you can't even think of disabling. [1][2]

[1] https://sneak.berlin/20230115/macos-scans-your-local-files-n...
[2] Investigation in Beeper/PyPush discord for iMessage spam blocking

saagarjha a day ago | parent | next [-]

What’s the concern here? The blog post you linked does not really support its claims with evidence.

perching_aix a day ago | parent [-]

They're actually two separate claims, one of which the blog post does support. The other one is seemingly supposed to be supported by some conversations on a Discord server.

The concern is obvious though, not sure what's unclear about that: it's a bit pointless to have E2EE, if the adversary has full access to one of the ends anyways.

xvector a day ago | parent | prev [-]

[1] is supposedly debunked: https://pawisoon.medium.com/debunked-the-truth-about-mediaan...

> the network traffic sent and received by mediaanalysisd was found to be empty and appears to be a bug.

I say "supposedly debunked" because empty traffic doesn't mean there's nothing going on. It could just be a file deemed safe. But then the author said:

> The network call that raised concerns is a bug. Apple has since released macOS 13.2, which has fixed this issue, and the process no longer makes calls to Apple servers

bjackman 2 days ago | parent | prev | next [-]

The phrase "threat model gerrymandering" is fantastic; I will be using that a lot, I think.

ahoka 10 hours ago | parent [-]

Definitely the word of the day for me.

cryptonector a day ago | parent | prev | next [-]

> You need an integrated root-of-trust in your CPU in order to solve these.

Yes, quite. The BIOS/UEFI absolutely needs to store a public key of a primary key on the TPM, probably the EKpub itself for simplicity. Without that you will be vulnerable to an MITM attack, at least early in boot, and since the MITM could then fool you about the root of trust later on, as long as the MITM can commit to always being there, you cannot detect the attack.
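As a rough illustration of the pinning idea (not any particular firmware's implementation): the firmware bakes in a digest of the genuine TPM's EKpub and refuses to trust a TPM whose EK public area doesn't match. Reading the EK public area from the TPM (e.g. via TPM2_ReadPublic) is left out of this sketch:

    # Conceptual sketch of EKpub pinning in firmware. Only the comparison
    # against the digest baked in at manufacture is shown here.
    import hashlib
    import hmac

    def tpm_matches_pinned_ek(ek_public_area: bytes, pinned_sha256: bytes) -> bool:
        digest = hashlib.sha256(ek_public_area).digest()
        # An interposer that swaps in its own TPM cannot present a matching EKpub.
        return hmac.compare_digest(digest, pinned_sha256)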

ragebol 2 days ago | parent | prev | next [-]

I expected something about cryptography keys hidden in a decoration somewhere (kinda like the LotR Gate of Moria); the article was not quite what I expected. Although it is, in a sense.

nine_k a day ago | parent [-]

The Gate of Moria inscription was plaintext. The first person to not try to interpret it as a riddle solved it.

maccard a day ago | parent | prev | next [-]

> All encryption is end-to-end, if you’re not picky about the ends.

This is a great quote.

badcryptobitch a day ago | parent | prev | next [-]

> Unexplainable security features are just marketing materials.

I feel this way about a lot of hardware-based security solutions like TPMs and TEEs. These are actually useful solutions that can help solve problems that we have (as evidenced by this article), but unfortunately, these solutions tend to be poorly documented publicly. As a result, we rely on academics to do the work for us in order to learn how to better contextualize these solutions.

tucnak 2 days ago | parent | prev | next [-]

I find it surprising that IBM POWER9 had key imprints in 2017 (sic!!) and it's still nowhere to be found on contemporary CPUs...

fc417fc802 a day ago | parent [-]

POWER9 had quite a few neat things going on. I think it's unfortunate that it never became mainstream. The switch to closed source firmware in Power10 is also a downer.

dist-epoch a day ago | parent | prev | next [-]

> Active physical interposer adversaries are a very real part of legitimate threat models. You need an integrated root-of-trust in your CPU in order to solve these.

It's been almost 10 years since Microsoft, based on their Xbox experience, started saying "stop using discrete TPMs over the bus, they are impossible to secure, we need the TPM embedded in the CPU itself"

Tharre a day ago | parent | next [-]

The TPM itself can actually be discrete, as long as you have a root of trust inside the CPU with a unique secret. Derive a secret from the unique secret and the hash of the initial boot code the CPU is running, like HMAC(UDS, hash(program)), and derive a public/private key pair from that. Now you can just do normal Diffie-Hellman to negotiate encryption keys with the TPM, and you're safe from any future interposers.

This matters because for some functionality you really want tamper-resistant persistent storage, for example "delete the disk encryption keys if I enter the wrong password 10 times". Fairly easy to do on a TPM that can be made on a process node that supports flash vs a general CPU where that just isn't an option.
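A rough sketch of the derivation described above, assuming the Python "cryptography" package and using X25519 purely for illustration (UDS = unique device secret, CDI = the derived secret):

    # Sketch: CDI = HMAC(UDS, hash(bootcode)), seed a key pair from it, then
    # Diffie-Hellman with the discrete TPM. X25519 is illustrative only.
    import hashlib
    import hmac
    from cryptography.hazmat.primitives.asymmetric.x25519 import (
        X25519PrivateKey,
        X25519PublicKey,
    )

    def boot_identity_key(uds: bytes, bootcode: bytes) -> X25519PrivateKey:
        cdi = hmac.new(uds, hashlib.sha256(bootcode).digest(), hashlib.sha256).digest()
        return X25519PrivateKey.from_private_bytes(cdi)   # 32-byte seed -> key pair

    def tpm_session_secret(our_key: X25519PrivateKey, tpm_pub: X25519PublicKey) -> bytes:
        # Negotiate a shared secret with the TPM; a bus interposer that does not
        # know the CPU-derived private key cannot read or forge this channel.
        return our_key.exchange(tpm_pub)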

RobotToaster a day ago | parent | prev [-]

That's assuming you trust the CPU vendor not to have their own interposer.

dist-epoch a day ago | parent [-]

If you don't trust the CPU vendor in your machine you have bigger problems.

RobotToaster a day ago | parent | next [-]

Given that the Intel ME and AMD PSP are both backdoors, we all have problems.

ahoka 10 hours ago | parent | next [-]

It’s only a backdoor if it’s undocumented.

commandersaki a day ago | parent | prev [-]

Who has the keys to this backdoor? [for the curious]

immibis a day ago | parent [-]

At a minimum, Intel and AMD.

commandersaki a day ago | parent [-]

What kind of keys are they? In that same regard, Apple holds the keys to sign software for secure enclaves on iDevices and Macs, does that make them backdoored, since they can control execution on the firmware that protects everyone's authentication data and secrets?

immibis a day ago | parent [-]

Yes, Apple products are backdoored - not just through esoteric keys, but also because they're uploading your pictures to the mothership "to check they're not child porn."

commandersaki a day ago | parent [-]

> because they're uploading your pictures to the mothership "to check they're not child porn."

Citation needed.

Also, virtually every piece of software is updateable by a vendor, so going by your argument, everything is a backdoor. Not a very useful term then.

LtWorf a day ago | parent | prev [-]

Yes we do have those big problems.

jll29 a day ago | parent | prev [-]

Given yesterday's article on here about the issues with PGP, it looks like all software encryption short of a one-time pad is decorative.

I like the idea of a key being part of the CPU (comment below); does anyone know why Intel/ARM/AMD have not picked up this IBM feature?

tptacek a day ago | parent | next [-]

The logic you're using here is: if PGP is unsafe, all cryptography must be unsafe too? No, that doesn't hold, at all.

Retr0id a day ago | parent | prev | next [-]

Protecting secrets via hardware is always "decorative" in some sense; the question is just how much time and work it takes to extract them (and the probability of destroying the secrets/device in the process). (Outside of things like QKD.)

But for software systems under a software threat model, bug-free implementations are possible, in theory at least.

rossjudson a day ago | parent | next [-]

This is a reasonable take.

Perfect security isn't a thing. Hardware/Software engineers are in the business of making compromise harder, but eyes are wide open about "perfection".

Confidential Computing is evolving, and it's steadily gotten much more difficult to bypass the security properties.

pjc50 a day ago | parent | prev [-]

I don't follow this - the software must necessarily run on some hardware, so while the software may be provably secure that doesn't help if an attacker can just pull key material off the bus?

formerly_proven a day ago | parent [-]

Soldering wires to LPC is not a software threat model

immibis a day ago | parent [-]

But it is a threat model. "This system is unhackable, if the user doesn't do the thing that hacks it" is not very useful.

bccdee a day ago | parent [-]

Okay, nothing is secure against every threat model. The only way to secure against rubber hose cryptanalysis is by hiring a team of bodyguards, and even that won't protect you from LEOs or nation-state actors. Your threat model should be broad enough to provide some safety, but it also needs to be narrow enough that you can do something about it. At a software level, there's only so much you can do to deal with hardware integrity problems. The rest, you delegate to the security team at your data centre.

> "This system is unhackable, if the user doesn't do the thing that hacks it" is not very useful.

It's the best you're gonna get, bud. Nothing's "unhackable"—you just gotta make "the thing that hacks it" hard to do.

lxgr a day ago | parent | prev | next [-]

What article?

In any case, I'm curious to hear your argument for how "PGP has some implementation problems" (unsurprising to most people that have dared to look at its internals even briefly) implies "all non-information-theoretic cryptography is futile".

maqp a day ago | parent | prev | next [-]

Except 99% of one-time pad implementations fail at least one criterion:

* Using CSPRNGs instead of HWRNGs to generate the pads,

* Trying to make it usable by sharing short entropy, thereby reinventing stream ciphers,

* Sharing that short entropy over Diffie-Hellman or RSA,

* Failing to use unconditionally secure message authentication,

* Re-using pads,

* Forgetting to overwrite pads,

* Failing to distribute pads out-of-band via sneakernet, dead drops, or QKD.

OTP is also usually the first time someone dabbles in writing cryptographic code, so the implementations are full of footguns.
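For what it's worth, the core XOR is the trivial part. Here is a toy sketch that already trips over several of the points above (CSPRNG pad, no authentication, no out-of-band distribution, no secure erasure):

    # Toy one-time pad: pad from the OS CSPRNG (already a compromise per the
    # list above), used once. Authentication, out-of-band pad distribution,
    # and secure erasure are all missing.
    import os

    def otp_encrypt(plaintext: bytes) -> tuple[bytes, bytes]:
        pad = os.urandom(len(plaintext))                 # never reuse this pad
        return bytes(p ^ k for p, k in zip(plaintext, pad)), pad

    def otp_decrypt(ciphertext: bytes, pad: bytes) -> bytes:
        return bytes(c ^ k for c, k in zip(ciphertext, pad))

    ct, pad = otp_encrypt(b"attack at dawn")
    assert otp_decrypt(ct, pad) == b"attack at dawn"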

dist-epoch a day ago | parent | prev [-]

What do you mean exactly? Both AMD and Intel have signed firmware, and both support hardware attestation, where they sign what they see with an AMD/Intel key and you can later check that signature. This is the basis of confidential VMs, where not even the machine's physical owner can tamper with the VM in an undetectable way.

evan_a_a a day ago | parent [-]

I have bad news on that front.

https://tee.fail/

fc417fc802 a day ago | parent | next [-]

> While the data itself is encrypted, notice how the values written by the first and third operation are the same.

The fact that Intel and AMD both went with ECB leaves me in mild disbelief. I realize wrangling IVs in that scenario is difficult, but that's hardly an excuse to release a product that they knew full well was fundamentally broken. The insecurity of ECB for this sort of task has been common knowledge for at least two decades.
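A toy demonstration of the equality leak the quoted sentence points at, assuming the Python "cryptography" package (the key and plaintext here are made up):

    # Toy demo: deterministic encryption with no freshness (ECB here) maps
    # identical plaintext blocks to identical ciphertext blocks, so a bus
    # observer can tell when the same value is written to the same location.
    import os
    from cryptography.hazmat.primitives.ciphers import Cipher, algorithms, modes

    key = os.urandom(32)
    block = b"SECRET VALUE 001"            # one 16-byte AES block, written twice

    def ecb_encrypt(data: bytes) -> bytes:
        enc = Cipher(algorithms.AES(key), modes.ECB()).encryptor()
        return enc.update(data) + enc.finalize()

    print(ecb_encrypt(block) == ecb_encrypt(block))   # True: equality leaks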

rossjudson a day ago | parent | next [-]

Google "intel sgx memory encryption engine". Intel's designers were fully aware of replay attacks, and early versions of SGX supported a hardware-based memory encryption engine with Merkle tree support.

Remember that everything in security (and computation) is a tradeoff. The MEE turned out to be a performance bottleneck, and support got dropped.

There are legitimate choices to be made here between threat models, and the resulting implications on the designs.

There's not much new under the sun when it comes to security/cryptography/whatever (tm), and I recommend approaching the choices designers make with an open mind.

fc417fc802 a day ago | parent [-]

I agree with the sentiment but I'm struggling to see how this qualifies as a legitimate tradeoff to make. I thought the entire point of this feature was to provide assurances to customers that cloud providers weren't snooping on their VMs. In which case physically interdicting RAM in this manner is probably the first approach a realistic adversary would attempt.

I can see where it prevents inadvertent data leaks but the feature was billed as protecting against motivated adversaries. (Or at least so I thought.)

dist-epoch a day ago | parent | prev [-]

I don't think that's the issue. It seems it's the same memory address/location, so an address- or location-based IV would have the same problem.

You need a sequence number to solve this, but they have nowhere to store it.

fc417fc802 a day ago | parent [-]

Fair point, my ECB remark was misguided. But I think the broader point stands? I did acknowledge the difficulty of dealing with IVs here.

It's the same issue that XTS faces, but XTS operates under the fairly modest assumption that an adversary won't have long-term, continuous block-level access to the running system. Whereas in this case, interdicting the bus is one of the primary attack vectors, so failing to defend against that seems inexcusable.

lxgr a day ago | parent | prev [-]

Yes, trusted computing is empirically hard, but I haven't heard solid arguments either way on whether it's actually infeasible.