tgsovlerkhgsel 2 days ago

Shortening the certificate lifespan to e.g. 24h would have a number of downsides:

Certificate volume in Certificate Transparency would increase a lot, adding load to the logs and making it even harder to follow CT.

Issues with domain validation would turn into an outage within 24 hours rather than only when the current cert expires. That cuts both ways: it invalidates old certs quickly if a domain changes owner or is recovered after a compromise/hijack, but it makes the system far less forgiving of validation hiccups.

OCSP is simpler and has fewer dependencies than issuance (no multi-perspective domain validation, no interaction with CT), so keeping it highly available should be easier than keeping issuance highly available, which is exactly what short-lived certs would lean on.

That said, with stapling (which would have been required for privacy) often poorly implemented and rarely deployed, and with browsers not requiring OCSP anyway, this was a sensible decision.

tptacek 2 days ago | parent | next [-]

Well, OCSP is dead, so the real argument is over how short certificate lifetimes will get, not whether we might make a go of OCSP.

charcircuit 2 days ago | parent | prev [-]

>would increase a lot

You can delete old logs, or come up with a way to store and transfer the same data more compactly. Even if the current architecture does not scale, we can always change it.

>even harder to follow CT.

It should be no harder to follow than before.

tgsovlerkhgsel 2 days ago | parent | next [-]

Following CT (without relying on a third party service) right now is a scale problem, and increasing scale by at least another order of magnitude will make it worse.

I was trying to process CT logs locally. I gave up when I realized I'd be looking at over a week of processing time even if I optimized my software to the point that it could handle the data at 1 Gbps (and the logs were serving it at that rate), and that was a while ago.

With the current issuance rate, it's barely feasible to locally scan the CT logs with a lot of patience if you have a 1 Gbps line.

https://letsencrypt.org/2025/08/14/rfc-6962-logs-eol states "The current storage size of a CT log shard is between 7 and 10 terabytes". So that's roughly a day at 1 Gbps for a single log shard of a single operator, ignoring overhead.
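
Back-of-the-envelope, with the 10 TB figure and a saturated 1 Gbps link (the 15-shard total at the end is just a round illustrative number, not an exact count):

    # Rough check of the numbers quoted above.
    shard_bytes = 10e12                        # 10 TB, top of the quoted range
    link_bps = 1e9                             # 1 Gbps
    hours = shard_bytes * 8 / link_bps / 3600
    print(f"one shard: {hours:.0f} hours")     # ~22 hours
    # Several operators each run multiple yearly shards, so the whole
    # ecosystem is an order of magnitude more, e.g. 15 shards:
    print(f"15 shards: {15 * hours / 24:.0f} days")   # ~2 weeks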

integralid 2 days ago | parent [-]

> even if I optimized my software to the point that it could process the data at 1 Gbps

Are you sure you did the math correctly? We're scanning CT at my work, and we do have scale problems, but the bottleneck is database inserts. From your link, it looks like a shard is 10 TB, and that's a year of data.

Still an insane amount of data and a scale problem, of course.

tgsovlerkhgsel a day ago | parent [-]

Well, 10 TB divided by 1 Gbps is ~22 hours, and there are multiple log providers with many shards each (my scan also included data from certificates that had already expired at the time).

It would still be feasible to build a local database and keep it updated (with way less than 1 Gbps), but initial ingestion would be weeks at 1 Gbps, and I'd need the storage for it.

For most hobbyists not looking to spend a fortune on rented servers/cloud, it's out of reach already.
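
To be fair, the ongoing part is cheap once you have the data: following a log just means polling its signed tree head and fetching the entries you don't have yet. Rough sketch against a single RFC 6962 log (the base URL and batch size are placeholders, and real logs cap get-entries responses, so you have to page):

    import json, time, urllib.request

    LOG_URL = "https://ct.example.org/log"   # placeholder: a real RFC 6962 log base URL
    BATCH = 256                              # most logs cap get-entries batches

    def get_json(path):
        with urllib.request.urlopen(f"{LOG_URL}/ct/v1/{path}") as r:
            return json.load(r)

    def follow(start_index, handle_entry, poll_seconds=60):
        next_index = start_index
        while True:
            tree_size = get_json("get-sth")["tree_size"]   # current size of the log
            while next_index < tree_size:
                end = min(next_index + BATCH, tree_size) - 1
                batch = get_json(f"get-entries?start={next_index}&end={end}")
                for entry in batch["entries"]:             # leaf_input/extra_data, base64
                    handle_entry(next_index, entry)
                    next_index += 1
                # persist next_index here so a restart resumes where it left off
            time.sleep(poll_seconds)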

charcircuit a day ago | parent [-]

Not all use cases need every single log. For example, you may just want to keep a log of certificates issued for the domains that your company owns.
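
For that narrower case, you don't even need to parse everything up front. A crude sketch of such a filter over entries pulled from a log (this just substring-matches the base64-decoded bytes; a real implementation would parse the MerkleTreeLeaf and the X.509 SANs, and the domain list is obviously a placeholder):

    import base64

    WATCHED = [b"example.com", b"example.net"]   # placeholder: your company's domains

    def mentions_watched_domain(entry):
        # Domain names appear as ASCII inside the DER, so a byte search over
        # leaf_input/extra_data catches most hits. Crude: no X.509 parsing,
        # so it can false-positive on unrelated fields or miss odd encodings.
        blob = base64.b64decode(entry["leaf_input"]) + base64.b64decode(entry["extra_data"])
        return any(domain in blob for domain in WATCHED)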

lokar 2 days ago | parent | prev [-]

You could extend the format to account for the repetition of otherwise-identical short-TTL certs.
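
As a toy illustration of how much that repetition is worth, here is dictionary compression of a synthetic "certificate" against yesterday's nearly identical copy (zlib's preset-dictionary support stands in for a real format extension; the bytes are made up, not real DER):

    import os, zlib

    template = b"MOCK-DER-CERT:" + os.urandom(1500)            # yesterday's cert
    todays = template[:200] + os.urandom(24) + template[224:]  # only ~24 bytes differ

    plain = zlib.compress(todays)           # no shared context: barely shrinks
    c = zlib.compressobj(zdict=template)    # compress against yesterday's cert
    delta = c.compress(todays) + c.flush()

    print(len(todays), len(plain), len(delta))   # delta is a tiny fraction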