Building blobd: single-machine object store with sub-ms reads and 15 GB/s upload (blog.wilsonl.in)
59 points by charlieirish 2 days ago | 15 comments
rockwotj a day ago | parent | next [-]

> Direct I/O means no more fsync: no more complexity via background flushes and optimal scheduling of syncs. There's no kernel overhead from copying and coalescing. It essentially provides the performance, control, and simplicity of issuing raw 1:1 I/O requests.

Not true: you still need fsync with direct I/O to ensure durability in power-loss situations. Some drives have write caches, which means acknowledged writes live in non-volatile memory. So maybe the perf is wildly better because you’re sacrificing durability?
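
A minimal sketch of the distinction (plain Rust on Linux, assuming the libc crate; the path and elided writes are illustrative, not blobd's code):

  // Illustrative only: O_DIRECT bypasses the kernel page cache, but
  // the drive's own volatile write cache can still hold acknowledged
  // writes, so an explicit fsync is still required for durability.
  use std::fs::OpenOptions;
  use std::os::unix::fs::OpenOptionsExt;

  fn write_durably(path: &str) -> std::io::Result<()> {
    let file = OpenOptions::new()
      .write(true)
      .create(true)
      .custom_flags(libc::O_DIRECT) // direct I/O: aligned buffers required
      .open(path)?;
    // ... submit sector-aligned writes here ...
    file.sync_all()?; // fsync: tells the device to flush its own cache
    Ok(())
  }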

rrauch a day ago | parent | next [-]

Looks like the author is well aware:

  /// Even when using direct I/O, `fsync` is still necessary, as it ensures the device itself has flushed any internal caches.
  async fn sync(&self) {
    let (fut, fut_ctl) = SignalFuture::new();
    self.sender.send(Request::Sync { res: fut_ctl }).unwrap();
    fut.await
  }
Full code here:

https://github.com/wilsonzlin/blobd/blob/master/libblobd-dir...

actionfromafar a day ago | parent | prev [-]

You mean in volatile memory?

rockwotj a day ago | parent [-]

yes thanks

amluto a day ago | parent | prev | next [-]

That’s a lot of work creating a whole system that stores data on a raw block device. It would be nice to see this compared to… a filesystem. XFS, ZFS and btrfs are pretty popular.

bionsystem a day ago | parent [-]

I don't quite understand the point. Why would anybody use S3 then?

Scaevolus a day ago | parent | prev | next [-]

Similar systems include Facebook's Haystack and its open source equivalent, SeaweedFS.

tuhgdetzhh 18 hours ago | parent | prev | next [-]

When you have a service and really care about shaving S3 latencies down to the millisecond range, then you probably have enough users that all the tiny images are cached at the edge anyway.
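
A hedged sketch of what that usually looks like in practice (assumes the aws-sdk-s3 and tokio crates; the bucket and key names are made up): upload small images with a long-lived Cache-Control so the CDN keeps serving them from the edge.

  // Illustrative only: a long-lived Cache-Control lets a CDN in front
  // of S3 serve these tiny images from edge, hiding S3's latency.
  use aws_sdk_s3::{primitives::ByteStream, Client};

  async fn upload_thumb(client: &Client, bytes: Vec<u8>) -> Result<(), aws_sdk_s3::Error> {
    client
      .put_object()
      .bucket("my-images")    // made-up bucket
      .key("thumbs/123.webp") // made-up key
      .cache_control("public, max-age=31536000, immutable")
      .body(ByteStream::from(bytes))
      .send()
      .await?;
    Ok(())
  }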

bob1029 a day ago | parent | prev | next [-]

> Despite serving from same-region datacenters 2 ms from the user, S3 would take 30-200 ms to respond to each request.

200ms seems fairly reasonable to me once we factor in all of the other aspects of S3. A lot of machines would have to die at Amazon for your data to be at risk.

grenran a day ago | parent | prev | next [-]

S3's whole selling point is 11 9s of durability across the whole region, which is probably why it's slow to begin with. (At 11 nines, AWS's own example is that if you store 10,000,000 objects, you can expect to lose a single object about once every 10,000 years.)

stackskipton a day ago | parent | prev [-]

Interesting project, but the lack of S3 protocol compatibility and the fact that it seems to YOLO your data mean it's not acceptable for many.

moi2388 a day ago | parent [-]

And that means it is acceptable for many others. There is a whole world outside of S3, you know.

Unroasted6154 a day ago | parent [-]

It's a bit weird to present it as an alternative to S3 when it looks like a persistent cache or k/v store. A benchmark against Redis, for example, would have been nice. The RocksDB benchmark is also questionable, since performance depends a lot on how it's configured, and the article's claim that RocksDB doesn't support range reads doesn't give me confidence in the results.
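
For what it's worth, RocksDB does expose range scans over keys through iterators; a minimal sketch with the rust-rocksdb crate (the crate choice and key scheme are my assumptions, and "range read" in the article may instead mean byte ranges within a value):

  // Illustrative only: scan all keys under a prefix, which is
  // RocksDB's native way to do range reads over the key space.
  use rocksdb::{Direction, IteratorMode, DB};

  fn range_read(path: &str) -> Result<(), rocksdb::Error> {
    let db = DB::open_default(path)?;
    // Iterate forward from "blob/0042/", stopping once past the prefix.
    let iter = db.iterator(IteratorMode::From(b"blob/0042/", Direction::Forward));
    for item in iter {
      let (key, value) = item?;
      if !key.starts_with(b"blob/0042/") {
        break;
      }
      println!("{} -> {} bytes", String::from_utf8_lossy(&key), value.len());
    }
    Ok(())
  }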

Also, for the described issue of small images for a frontend, nobody would serve directly from S3 without a caching layer on top.

It's an interesting read for fun, but I am not sure what it solves in the end.

supriyo-biswas a day ago | parent | next [-]

I'd have to assume it's a blob store for their search engine (or similar) project: https://blog.wilsonl.in/search-engine/

moi2388 21 hours ago | parent | prev [-]

Yes, those are fair points.