pjc50 2 hours ago
Several things going on here:

- Concurrency is very hard...
- .. but object storage "solves" most of that for you, handing you a set of semantics which work reliably.
- Single-file throughput sucks hilariously badly...
- .. because 1 GB is ridiculously large for an atomic unit.
- (This whole thing resembles a project I did a decade ago for transactional consistency on TFAT on Flash, except that somehow managed faster commit times despite running on a 400 MHz MIPS CPU. Edit: maybe I should try to remember how that worked and write it up for HN.)
- Therefore, all of the actual work is shifted to the broker. The broker is just periodically committing its state in case it crashes.
- It's not clear whether the broker ACKs requests before they're in durable storage. Is it possible to lose requests in flight anyway?
- There's a great design for a message queue system between multiple nodes that aims for at-least-once delivery, has existed for decades, and maintains high throughput: SMTP. Actually, there's a whole bunch of message queue systems?
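(For context on the at-least-once point: the usual meaning is that the sender retries until it sees an ack, and an ack is only sent after the durable write, so duplicates are possible when an ack is lost. A minimal sketch, with a toy in-memory "broker" that is purely illustrative and not from the linked post:)

```python
# Sketch of at-least-once delivery semantics (illustrative only): the
# broker writes to its log *before* acking, the producer retries until
# acked, so a lost ack produces a duplicate entry in the log.

class FlakyBroker:
    def __init__(self):
        self.log = []            # "durable" log; append happens before the ack
        self._acks_dropped = set()

    def publish(self, msg_id, payload):
        self.log.append((msg_id, payload))   # commit first...
        if msg_id not in self._acks_dropped:
            self._acks_dropped.add(msg_id)
            return False                     # ...but this ack is lost in flight
        return True                          # ack only after the write

def send_at_least_once(broker, msg_id, payload, max_retries=5):
    """Retry until acked -- the sender cannot tell a lost ack from a lost write."""
    for _ in range(max_retries):
        if broker.publish(msg_id, payload):
            return True
    return False

broker = FlakyBroker()
send_at_least_once(broker, 1, b"hello")
# the message was written twice; consumers must dedupe or be idempotent
ids = [m for m, _ in broker.log]   # [1, 1]
```

If the broker instead acked before the durable write (the question raised above), a crash between ack and write would silently lose the message.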
candiddevmike an hour ago
AFAIK you can kinda "seek" reads in S3 using a Range header, WCGW? =D
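(That part is real: S3's GetObject accepts the standard HTTP `Range: bytes=start-end` header, with inclusive byte positions, and answers with a `Content-Range`. A minimal sketch of the mechanics, simulated against an in-memory byte string rather than a live bucket:)

```python
# Sketch of an HTTP/S3-style ranged read (header syntax per RFC 9110).
# No network call here -- the "object" is just an in-memory byte string.

def range_header(start, end):
    """Build a Range header value; byte positions are inclusive."""
    return f"bytes={start}-{end}"

def ranged_get(obj, start, end):
    """Return what a ranged GET would: the slice plus a Content-Range."""
    body = obj[start:end + 1]                        # end is inclusive
    content_range = f"bytes {start}-{end}/{len(obj)}"
    return body, content_range

obj = b"0123456789" * 3                              # a 30-byte "object"
headers = {"Range": range_header(10, 19)}            # what the client sends
body, cr = ranged_get(obj, 10, 19)
# body == b"0123456789", cr == "bytes 10-19/30"
```

With real S3 you would pass the same string via the `Range` parameter of GetObject; the catch the parent comment alludes to is that a range read is random I/O against a huge immutable object, not a cheap local seek.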