| ▲ | zahllos 3 hours ago |
| In context, this particular issue is that DJB disagrees with the IETF publishing an ML-KEM-only standard for key exchange. Here's the thing: the existence of a standard does not mean most of the internet needs to use it. There will also be hybrid standards, and most of the rest of us can simply ignore the existence of ML-KEM-only. However, NSA's CNSA 2.0 (the commercial cryptography you can sell to the US Federal Government) does not envisage using hybrid schemes, so there's some sense in having a standard for that purpose. Better developed through the IETF than forced on browser vendors directly by the US, I think. There was rough consensus to do this. Should we have a single-cipher key-exchange standard for HQC too? I'd argue yes, and no, the NSA doesn't propose to use it (unless they updated CNSA). A requirement of the NIST competition is that all standardized algorithms are resistant to both classical and quantum attacks. Some have said in this thread that lattice crypto is relatively new, but it actually has quite some history, going back to Ajtai in '96. If you want paranoia, there are always code-theory-based schemes, going back to McEliece in '78. We don't know what we don't know, which is why there's HQC (code-based) waiting on standardisation and an additional on-ramp for signatures, plus the hash-based options with their expense (size, and sometimes statefulness). So there's some argument that single-cipher is fine, and we have a whole set of alternative options. This particular overreaction appears to be yet another in a long-running series of... disagreements with the entire NIST process, including "claims" around the security level of what we then called Kyber, and insults to the NIST team's security-level estimation in the form of suggesting they can't do basic arithmetic (given that we can't factor anything bigger than 15 on a real quantum computer, and we simply don't have hardware anywhere near breaking RSA, estimates are exactly what these are). |
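To make the hybrid-vs-single-cipher distinction concrete: a hybrid key exchange runs both a classical exchange (e.g. X25519) and a post-quantum KEM (e.g. ML-KEM), then feeds both shared secrets through a KDF, so the session key stays secure if either component survives. The sketch below shows only that combiner idea; the byte strings and context label are placeholders, and real deployments (such as the X25519MLKEM768 group in TLS) define the exact concatenation and KDF themselves.

```python
import hashlib

def combine_shared_secrets(ss_pq: bytes, ss_classical: bytes, context: bytes) -> bytes:
    """Derive one session key from both shared secrets.

    If either input remains secret, the derived key does too
    (modelling SHA-256 as a random oracle).
    """
    return hashlib.sha256(ss_pq + ss_classical + context).digest()

# Placeholder secrets; real ones would come from ML-KEM decapsulation
# and an X25519 Diffie-Hellman exchange.
ss_mlkem = b"\x01" * 32
ss_x25519 = b"\x02" * 32
key = combine_shared_secrets(ss_mlkem, ss_x25519, b"hybrid kex demo")
```

An ML-KEM-only exchange is the same picture with the classical component simply absent, which is exactly the trade-off being debated.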
|
| ▲ | HelloNurse 3 hours ago | parent | next [-] |
| The metaphor near the beginning of the article is a good summary: standardizing cars with seatbelts, but also cars without seatbelts. Since ML-KEM is supported by the NSA, it should be assumed to have an NSA-known backdoor that they want used as widely as possible: IETF standardization is a great opportunity for a long-term social engineering operation, much like DES, Clipper, the more recent funny elliptic curve, etc. |
| |
| ▲ | blintz an hour ago | parent | next [-] | | > Since ML-KEM is supported by the NSA, it should be assumed to have a NSA-known backdoor that they want to be used as much as possible AES and RSA are also supported by the NSA, but that doesn’t mean they were backdoored. | | |
| ▲ | HelloNurse an hour ago | parent | next [-] | | AES and RSA had enough public scrutiny to make backdoors imprudent. The standardization of an option obviously weaker than more established ones is difficult to explain with security reasons, so the default assumption should be that there are insecurity reasons. | |
| ▲ | zahllos an hour ago | parent | prev [-] | | SHA-2 was designed by the NSA. Nobody is saying there is a backdoor. | | |
| ▲ | basilgohar 10 minutes ago | parent [-] | | I think it's established that the NSA backdoors things. That doesn't mean they backdoor everything, but scrutiny is merited for each new thing the NSA endorses. We have to wonder and ask why, and when we can't explain why something is one way and not another, we should be cautious and call it out. This is how they've operated for decades. |
|
| |
| ▲ | zahllos 2 hours ago | parent | prev | next [-] | | I will reply directly re: the analogy itself here. It is a poor one at best, because it assumes ML-KEM is akin to "internetting without cryptography". It isn't. If you want a better analogy: we have a seatbelt for cars right now. It turns out that when you steal plutonium and hot-rod your DeLorean into a time machine, these seatbelts don't quite cut the mustard, so we need a new kind of seatbelt. We design one that should be as good for the school run as it is for time travel to 1955. We think we've done it, but even after extensive testing we're not quite sure. So the debate is whether to wear two seatbelts (a traditional one we know works for traditional driving, and a new one that should be good for both) or whether we can just use the new one, both on the school run and for going to 1955. We are nowhere near DeLoreans that can travel to 1955, either. | |
| ▲ | an hour ago | parent | prev | next [-] | | [deleted] | |
| ▲ | MYEUHD 2 hours ago | parent | prev [-] | | > the more recent funny elliptic curve Can you elaborate please? | | |
| ▲ | zahllos 2 hours ago | parent | next [-] | | The commenter means Dual_EC, a random number generator. The backdoor was patented in the form of "escrow" here: https://patents.google.com/patent/US8396213B2/en?oq=USOO83.9... - replace "escrow" with "backdoor" everywhere in the text and what was done falls out. ML-KEM/ML-DSA were adapted into standards by NIST, but I don't think a single American was involved in the actual initial design. There might be some weakness the NSA knows about that the rest of us don't, but the fact that they're going ahead and recommending these for US government systems suggests they're fine with them. Otherwise they would risk that vulnerability also being discovered by China or Russia and used to read large portions of USG internet traffic. In their position I would not be confident that a vulnerability I knew about would remain secret, although I am not a US citizen or even resident, and never have been. | |
| ▲ | johncolanduoni 33 minutes ago | parent [-] | | Not that I think this is the case for this algorithm, but backdoors like the one in Dual_EC cannot be used by a third party without what is effectively reversing an asymmetric key pair. Their public parameters are the product of private parameters that the NSA potentially has, but if China or whoever can calculate the private parameters from the public ones it’s broken regardless. | | |
| ▲ | zahllos 16 minutes ago | parent [-] | | Indeed. Dual_EC was a NOBUS backdoor relying on the ECDLP. That's fair. My point was more that it looked suspicious at the time (why use a trapdoor function in a CSPRNG?), and at least the possibility of "escrow" was known, as evidenced by the fact that Vanstone (one of the inventors of elliptic curve cryptography) patented said backdoor around 2006. This suspicion simply doesn't apply to ML-KEM, if one ignores one very specific cryptographer. |
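The NOBUS structure being discussed can be illustrated with a toy analogue. Real Dual_EC works over an elliptic curve with public points P and Q where the designer secretly knows d with P = dQ; the sketch below swaps the curve for modular exponentiation (same trapdoor shape, much simpler code). All parameters here are invented for the demo and nothing about it is cryptographically sized.

```python
# Toy analogue of the Dual_EC NOBUS trapdoor, using modular exponentiation
# in place of elliptic-curve scalar multiplication. Parameters are made up.
p = 2**127 - 1            # a Mersenne prime; fine for a toy demonstration
Q = 3                     # public generator ("point Q")
d = 0xB4DC0DE             # the designer's secret trapdoor value
P = pow(Q, d, p)          # published parameter; the relation P = Q^d is hidden

def step(state):
    """One iteration of the toy generator."""
    r = pow(P, state, p)          # internal value
    output = pow(Q, r, p)         # what the RNG emits publicly
    next_state = pow(P, r, p)     # what it keeps secret
    return output, next_state

# An honest user produces two outputs:
s0 = 123456789
out1, s1 = step(s0)
out2, _ = step(s1)

# The trapdoor holder recovers the hidden state from one public output:
# out1^d = Q^(r*d) = (Q^d)^r = P^r = s1, so every later output is predictable.
recovered_s1 = pow(out1, d, p)
assert recovered_s1 == s1
predicted_out2, _ = step(recovered_s1)
assert predicted_out2 == out2
```

Anyone without d faces a discrete-log problem to get from out1 back to s1, which is the "nobody but us" property; whoever chose the parameters skips it entirely.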
|
| |
| ▲ | rdtsc 2 hours ago | parent | prev [-] | | Not op, but they probably meant https://en.wikipedia.org/wiki/Dual_EC_DRBG |
|
|
|
| ▲ | adgjlsfhk1 2 hours ago | parent | prev | next [-] |
| The problem with standardizing bad crypto options is that you are then exposed to all sorts of downgrade attack possibilities. There's a reason TLS1.3 removed all of the bad crypto algorithms that it had supported. |
| |
| ▲ | ekr____ an hour ago | parent | next [-] | | There were a number of things going on with TLS 1.3 and paring down the algorithm list. First, we both wanted to get rid of static RSA and standardize on a DH-style exchange. This also allowed us to move the first encrypted message in 1-RTT mode to the first flight from the server. You'll note that while TLS 1.3 supports KEMs for PQ, they are run in the opposite direction from TLS 1.2, with the client supplying the public key and the server signing the transcript, just as with DH. Second, TLS 1.3 made a number of changes to the negotiation which necessitated defining new code points, such as separating symmetric algorithm negotiation from asymmetric algorithm negotiation. When those new code points were defined, we just didn't register a lot of the older algorithms. In the specific case of symmetric algorithms, we also only use AEAD-compatible encryption, which restricted the space further. Much of the motivation here was security, but it was also about implementation convenience, because implementers didn't want to support a lot of algorithms for TLS 1.3. It's worth noting that at roughly the same time, TLS relaxed the rules for registering new code points, so that you can register them without an RFC. This allows people to reserve code points for their own usage, but doesn't require the IETF to get involved and (hopefully) reduces pressure on other implementers to actually support those code points. | |
| ▲ | blintz 2 hours ago | parent | prev [-] | | TLS 1.3 did do that, but it also fixed the ciphersuite negotiation mechanism (and got formally verified). So downgrade attacks are a moot point now. |
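One concrete piece of the TLS 1.3 downgrade fix worth spelling out: RFC 8446 reserves sentinel values in the last 8 bytes of the ServerHello random. A TLS 1.3-capable server that is talked down to an older version embeds the sentinel, and a client that offered 1.3 aborts when it sees one. The sketch below is a simplification of that one mechanism, not a real TLS implementation; the function names are illustrative, but the sentinel byte values are the ones from the RFC.

```python
import os

# RFC 8446 downgrade sentinels (last 8 bytes of ServerHello.random).
DOWNGRADE_TLS12 = bytes.fromhex("444f574e47524401")  # b"DOWNGRD\x01"
DOWNGRADE_TLS11 = bytes.fromhex("444f574e47524400")  # b"DOWNGRD\x00"

def server_random(negotiated_version: str) -> bytes:
    """A TLS 1.3-capable server marks any lower-version handshake."""
    rand = os.urandom(24)
    if negotiated_version == "1.2":
        return rand + DOWNGRADE_TLS12
    if negotiated_version in ("1.0", "1.1"):
        return rand + DOWNGRADE_TLS11
    return rand + os.urandom(8)   # genuine TLS 1.3: fully random

def client_detects_downgrade(rand: bytes, negotiated_version: str) -> bool:
    """A client that offered TLS 1.3 aborts on a sentinel in a
    handshake that settled on a lower version."""
    return negotiated_version != "1.3" and rand[-8:] in (
        DOWNGRADE_TLS12, DOWNGRADE_TLS11)
```

Because the ServerHello random is also signed into the handshake transcript, an attacker stripping the 1.3 offer can't scrub the sentinel without breaking the signature.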
|
|
| ▲ | crote an hour ago | parent | prev | next [-] |
| > In context, this particular issue is that DJB disagrees with the IETF publishing an ML-KEM only standard for key exchange. No, that's background dressing by now. The bigger issue is how the IETF is trying to railroad a standard by violating its own procedures, ignoring all objections, and banning people who oppose it. They are literally doing the kind of thing we always accuse China of doing. ML-KEM-only is obviously being pushed for political reasons. If you're not willing to let a standard be discussed on its technical merits, why even pretend to have a technology-first industry working group? Seeing standards being corrupted like this is sickening. At least have the gall to openly claim it should be standardized because it makes things easier for the NSA, and by extension (arguably) increases national security! |
|
| ▲ | vorpalhex 2 hours ago | parent | prev | next [-] |
| The standard will be used, as it was the previous time the IETF allowed the NSA to standardize a known-weak algorithm. Sorry that someone calling out a math error makes the NIST team feel stupid. Instead of dogpiling the person for not stroking their ego, maybe they should correct the error. Last I checked, a quantum computer isn't needed to handle exponents; a whiteboard will do. |
| |
| ▲ | zahllos 2 hours ago | parent [-] | | ML-KEM and ML-DSA are not "known weak". The justification for hybrid crypto is that there might be classical cryptanalytic results we aren't aware of. Lattice problems do come with worst-case to average-case hardness reductions (and exact SVP is NP-hard), which is more than can be said for factoring or discrete log, but that doesn't rule out unknown attacks. Hybrid is reasonable as a maximal-safety measure, but comes with additional cost. Obviously the standard will be used. As I said in a sibling comment, the US Government fully intends to do this whether the IETF makes a standard or not. |
|
|
| ▲ | aaomidi 3 hours ago | parent | prev [-] |
| Except when the government then starts mandating a specific algorithm. And yes, this has happened. There's a reason there are only the NIST P-curves in the WebPKI world. |
| |
| ▲ | zahllos 3 hours ago | parent [-] | | "The government" already have. That's what CNSA 2.0 means: this is the commercial crypto the NSA recommends for the US Government and what will be in FIPS/CAVP/CMVP, with ML-KEM-only for most key exchange. In this context, it is largely irrelevant whether or not the IETF chooses to have a single-algorithm draft. There's already an IANA code point to do this in TLS, and it will happen for US Government systems. I'd also add that I personally consider the NIST P-curves to be absolutely fine crypto. Complete formulas exist, so failure-free operations are possible, although point-on-curve still needs to be checked. They don't come with the small-order subgroup problem of any Montgomery curve. ECDSA isn't great alone; the hedged variants from RFC 6979 and later drafts should be used. On the key-exchange side (where ML-KEM sits), X25519 is very widely used in TLS unless you need to turn it off for FIPS. For the certificate side, the actual WebPKI, I'm going to say RSA wins out (still) (I think). |
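The RFC 6979 hedging mentioned above replaces the random ECDSA nonce k with one derived deterministically from the private key and the message, so a weak or repeated RNG output can no longer leak the signing key. The sketch below shows only that idea via a single HMAC call; the real RFC 6979 runs a full HMAC-DRBG loop with retry logic, and the group order used here is P-256's. It is an illustration of the principle, not a drop-in implementation.

```python
import hashlib
import hmac

# Order of the NIST P-256 group (the n in ECDSA over P-256).
N = 0xFFFFFFFF00000000FFFFFFFFFFFFFFFFBCE6FAADA7179E84F3B9CAC2FC632551

def hedged_nonce(private_key: int, msg: bytes) -> int:
    """Simplified stand-in for RFC 6979 nonce derivation.

    The nonce is a deterministic function of the key and message,
    so the same message always yields the same k and an attacker
    who controls the system RNG learns nothing about the key.
    (The real spec uses an HMAC-DRBG with rejection sampling.)
    """
    digest = hmac.new(private_key.to_bytes(32, "big"),
                      hashlib.sha256(msg).digest(),
                      hashlib.sha256).digest()
    return (int.from_bytes(digest, "big") % (N - 1)) + 1  # force 1 <= k < N
```

The classic failure this prevents: signing two different messages with the same random k (as happened in some deployed systems) lets anyone solve a linear equation for the private key.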
|