AlotOfReading 5 hours ago

I agree with the article, but I think it could go further. Instead of having primitives for every 32/48/64/128-bit block, we need good format-preserving encryption (FPE). Then all of this advice boils down to "use as many bits as you need," and we can keep using the standard primitives with hardware support. If you need more security in the future, you only need to decrypt and re-encrypt at the new size.

Dylan16807 4 hours ago | parent | next [-]

Small sizes have to be used with extra care, so I wouldn't want to make a generic function for all sizes. For bigger sizes we already have nice functions that take care of everything.

AlotOfReading 2 hours ago | parent [-]

The article lays out exactly why you'd want small sizes, even with the risks. The "good" qualifier just means it would have to be no riskier than any other algorithm at the same length.

bflesch 4 hours ago | parent | prev [-]

Are you suggesting a very large custom blocksize? I don't think this would be feasible beyond a few megabytes.

AlotOfReading 2 hours ago | parent [-]

No, an FPE algorithm is a cryptographic construction that uses an existing block cipher (e.g. AES-256) to build a cryptographically secure permutation of the input without length extension. That is, input size = output size, for all sizes. Ideally, if input size >= block size of the underlying cipher, the resulting permutation is no weaker than using the cipher directly.
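To make the mechanism concrete, here's a minimal Python sketch of one standard way to build such a permutation: a balanced Feistel network with cycle walking to handle domains that aren't a power of two. This is a toy using HMAC-SHA256 as the round function purely for illustration; it is not a vetted construction (real deployments would use something like FF1 from NIST SP 800-38G, built on AES), and the key, round count, and function names are all assumptions.

```python
import hashlib
import hmac

ROUNDS = 10  # toy parameter; real constructions fix this per their security proof

def _round(key: bytes, rnd: int, value: int, width: int) -> int:
    """Pseudorandom round function: HMAC-SHA256, truncated to `width` bits."""
    msg = rnd.to_bytes(1, "big") + value.to_bytes((width + 7) // 8, "big")
    digest = hmac.new(key, msg, hashlib.sha256).digest()
    return int.from_bytes(digest, "big") % (1 << width)

def _enc(key: bytes, x: int, bits: int) -> int:
    # Balanced Feistel permutation over an even number of bits.
    half = bits // 2
    l, r = x >> half, x & ((1 << half) - 1)
    for i in range(ROUNDS):
        l, r = r, l ^ _round(key, i, r, half)
    return (l << half) | r

def _dec(key: bytes, y: int, bits: int) -> int:
    # Inverse: undo the rounds in reverse order.
    half = bits // 2
    l, r = y >> half, y & ((1 << half) - 1)
    for i in reversed(range(ROUNDS)):
        l, r = r ^ _round(key, i, l, half), l
    return (l << half) | r

def _domain_bits(n: int) -> int:
    # Smallest even bit-width whose power-of-two domain covers [0, n).
    bits = max(2, (n - 1).bit_length())
    return bits + (bits % 2)

def fpe_encrypt(key: bytes, x: int, n: int) -> int:
    """Permute x within [0, n): re-encrypt until the result lands in range."""
    bits = _domain_bits(n)
    y = _enc(key, x, bits)
    while y >= n:  # cycle walking
        y = _enc(key, y, bits)
    return y

def fpe_decrypt(key: bytes, y: int, n: int) -> int:
    bits = _domain_bits(n)
    x = _dec(key, y, bits)
    while x >= n:  # walk the same cycle backwards
        x = _dec(key, x, bits)
    return x
```

Because cycle walking only skips over out-of-range elements of the permutation's cycles, the result is a true bijection on [0, n) for any n: output size equals input size, exactly as described above.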

You could use FPE for multi-megabyte permutations, but I don't know why you would.