charcircuit 2 days ago
> would increase a lot

You can delete old logs or come up with a way to download the same thing with less disk space. Even if the current architecture does not scale, we can always change it.

> even harder to follow CT

It should be no harder to follow than before.
tgsovlerkhgsel 2 days ago | parent | next
Following CT (without relying on a third-party service) is already a scale problem, and increasing scale by at least another order of magnitude will make it worse.

I was trying to process CT logs locally. I gave up when I realized that I'd be looking at over a week of transfer even if I optimized my software to the point that it could process the data at 1 Gbps (and the logs could serve it at that rate), and that was a while ago. With the current issuance rate, locally scanning the CT logs is barely feasible even with a 1 Gbps line and a lot of patience.

https://letsencrypt.org/2025/08/14/rfc-6962-logs-eol states "The current storage size of a CT log shard is between 7 and 10 terabytes". So that's about a day at 1 Gbps for one single log shard of one operator, ignoring overhead.
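A quick sanity check of the "a day per shard" figure, using the upper end of the quoted shard size (the sizes are from the Let's Encrypt post above; everything else is just unit conversion):

```python
# Back-of-the-envelope: time to pull one 10 TB CT log shard over a 1 Gbps line.
shard_bytes = 10e12      # upper end of "between 7 and 10 terabytes" per shard
line_bps = 1e9           # 1 Gbps link, ignoring protocol overhead
seconds = shard_bytes * 8 / line_bps
print(seconds / 3600)    # ~22 hours, i.e. roughly a day per shard
```

And that is one shard of one operator; a full local sweep multiplies this by the number of shards and operators you follow.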
lokar 2 days ago | parent | prev
You could extend the format to account for repetition of otherwise identical short-TTL certs.
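One way that idea could look, as a minimal sketch: store the first certificate sharing a "template" (everything except validity dates, serial, signature) in full, and encode each later renewal as a reference to that template. All names here are invented; no CT format actually defines this.

```python
# Hypothetical dedup sketch for repeated short-TTL certs (not a real CT format).
import hashlib

def dedup(entries):
    """entries: list of (cert_der, template_bytes) pairs, where template_bytes
    is the cert with per-renewal fields (validity, serial, signature) stripped.
    Returns full entries for first occurrences, back-references for repeats."""
    seen = {}   # template fingerprint -> first full cert seen
    out = []
    for cert_der, template in entries:
        key = hashlib.sha256(template).hexdigest()
        if key in seen:
            out.append(("ref", key))             # renewal: reference only
        else:
            seen[key] = cert_der
            out.append(("full", key, cert_der))  # first sighting: full blob
    return out
```

For a 6-day cert renewed ~60 times a year, most entries would collapse to small references, which is where the bulk of the hypothetical savings would come from.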