tgsovlerkhgsel 2 days ago
Following CT (without relying on a third-party service) is a scale problem right now, and increasing scale by at least another order of magnitude will make it worse. I tried to process CT logs locally and gave up when I realized I'd be looking at over a week even if I optimized my software to the point that it could process the data at 1 Gbps (and the logs were serving data at that rate), and that was a while ago. At the current issuance rate, locally scanning the CT logs is barely feasible even with a 1 Gbps line and a lot of patience. https://letsencrypt.org/2025/08/14/rfc-6962-logs-eol states: "The current storage size of a CT log shard is between 7 and 10 terabytes". So that's about a day at 1 Gbps for a single log shard of a single operator, ignoring overhead.
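The "about a day" figure checks out; a minimal sketch of the arithmetic, assuming a 10 TB (decimal) shard and a fully saturated 1 Gbps link with no protocol overhead:

```python
# Back-of-envelope check: time to pull one CT log shard over a 1 Gbps link.
# Assumptions: 10 TB shard (decimal terabytes), link fully saturated,
# no protocol overhead.

shard_bytes = 10 * 10**12      # 10 TB
link_bps = 1 * 10**9           # 1 Gbps

seconds = shard_bytes * 8 / link_bps   # 80,000 s
hours = seconds / 3600

print(f"{hours:.1f} hours")    # → 22.2 hours, i.e. roughly a day per shard
```

Multiply by the number of shards and operators you want to follow, and the week-plus estimate above follows directly.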
integralid 2 days ago | parent
> even if I optimized my software to the point that it could process the data at 1 Gbps

Are you sure you did the math correctly? We scan CT at my work, and we do have scale problems, but the bottleneck is database inserts. From your link, it looks like a shard is 10 TB, and that's for a year of data. Still an insane amount and a scale problem, of course.