thayne 5 days ago
It's about tradeoffs:

> It costs almost nothing to switch to PQC methods

It costs:

- development time to switch things over
- more computation, and thus more energy, because PQC algorithms aren't as efficient as classical ones
- more bandwidth, because PQC algorithms require larger keys
throw0101a 5 days ago
> It costs:

Not wrong, but given these algorithms are mostly used at setup, how much cost is actually being incurred compared to the entire session? Certainly if your sessions are short-lived then the 'overhead' of PQC/hybrid is higher, but I'd be curious to know the actual byte and energy costs over and above non-PQC/hybrid, i.e., how many bytes/joules for a non-PQC exchange and how many more by adding PQC. E.g.:

> Unfortunately, many of the proposed post-quantum cryptographic primitives have significant drawbacks compared to existing mechanisms, in particular producing outputs that are much larger. For signatures, a state of the art classical signature scheme is Ed25519, which produces 64-byte signatures and 32-byte public keys, while for widely-used RSA-2048 the values are around 256 bytes for both. Compare this to the lowest security strength ML-DSA post-quantum signature scheme, which has signatures of 2,420 bytes (i.e., over 2kB!) and public keys that are also over a kB in size (1,312 bytes). For encryption, the equivalent would be comparing X25519 as a KEM (32-byte public keys and ciphertexts) with ML-KEM-512 (800-byte PK, 768-byte ciphertext).

* https://neilmadden.blog/2025/06/20/are-we-overthinking-post-...

"The impact of data-heavy, post-quantum TLS 1.3 on the Time-To-Last-Byte of real-world connections" (PDF):

* https://csrc.nist.gov/csrc/media/Events/2024/fifth-pqc-stand...

(And development time is also generally one-time.)
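As a back-of-the-envelope sketch of the key-exchange bytes (using only the published primitive sizes for X25519 and ML-KEM-768, the parameter set in the commonly deployed X25519MLKEM768 hybrid; real handshakes add framing, certificates, etc.):

    # Rough per-handshake byte cost: X25519 alone vs. the X25519MLKEM768
    # hybrid, counting only the key-exchange payloads in each direction.
    X25519_SHARE = 32    # bytes, sent in each direction
    MLKEM768_EK  = 1184  # ML-KEM-768 encapsulation key, client -> server
    MLKEM768_CT  = 1088  # ML-KEM-768 ciphertext, server -> client

    classical = 2 * X25519_SHARE
    hybrid    = (X25519_SHARE + MLKEM768_EK) + (X25519_SHARE + MLKEM768_CT)
    print(classical, hybrid, hybrid - classical)  # 64 2336 2272

So on the order of 2.2 kB extra per handshake, paid once and then amortized over the life of the session.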
djmdjm 5 days ago
> - development time to switch things over

This is a one-time cost, and generally the implementations we're switching to are better quality than the classical algorithms they replace. For instance, the implementation of ML-KEM we use in OpenSSH comes from Cryspen's libcrux [1], which is formally verified and quite fast.

[1] https://github.com/cryspen/libcrux

> - more computation, and thus more energy, because PQC algorithms aren't as efficient as classical ones

ML-KEM is very fast. In OpenSSH it's much faster than classic DH at the same security level and only slightly slower than ECDH/X25519.

> - more bandwidth, because PQC algorithms require larger keys

For key agreement, it's barely noticeable. ML-KEM public keys are slightly over 1 kB. Again, this is larger than ECDH but comparable to classic DH. PQ signatures are larger, e.g. an ML-DSA signature is about 3 kB, but that only happens once or twice per SSH connection and is totally lost in the noise.
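Conceptually the hybrid exchange is simple: run both key agreements, then hash the two shared secrets together, so the session key stays safe as long as either primitive holds. A minimal sketch of that combine step (the random byte strings are hypothetical stand-ins for the real ML-KEM and X25519 outputs, and plain SHA-256 over the concatenation is a simplification, not OpenSSH's exact KDF or wire format):

    import hashlib, os

    # Placeholders: in a real exchange these come from ML-KEM decapsulation
    # and X25519 scalar multiplication, respectively.
    mlkem_shared  = os.urandom(32)  # hypothetical stand-in
    x25519_shared = os.urandom(32)  # hypothetical stand-in

    # Hybrid combine: both secrets feed the hash, so an attacker must
    # break BOTH ML-KEM and X25519 to recover the session secret.
    session_secret = hashlib.sha256(mlkem_shared + x25519_shared).digest()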
fxwin 5 days ago
All of which are costs that pale in comparison to having your data compromised, depending on what that data is.