MontyCarloHall 13 hours ago

This is essentially S3FS using EFS (AWS's managed NFS service) as a cache layer for active data and small random accesses. Unfortunately, this also means that it comes with some of EFS's eye-watering pricing:

— All writes cost $0.06/GB, since everything is first written to the EFS cache. For write-heavy applications, this could be a dealbreaker.

— Reads hitting the cache get billed at $0.03/GB. Large reads (>128kB) get directly streamed from the underlying S3 bucket, which is free.

— Cache is charged at $0.30/GB/month. Even though everything is written to the cache (for consistency purposes), it seems like it's only used for persistent storage of small files (<128kB), so this shouldn't cost too much.
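To make the pricing concrete, here is a hypothetical back-of-envelope cost model using the three rates quoted above ($0.06/GB write, $0.03/GB cached read, $0.30/GB-month cache storage). The workload numbers in the example are made-up assumptions, not measurements:

```python
# Rough cost sketch for the EFS-cached setup described above.
# Rates are the ones quoted in the comment; workload is illustrative.

WRITE_PER_GB = 0.06              # every write lands in the EFS cache first
CACHED_READ_PER_GB = 0.03        # reads served from the cache
CACHE_STORAGE_PER_GB_MONTH = 0.30

def monthly_cost(writes_gb, cached_reads_gb, cache_size_gb):
    """Estimate one month's cache-related bill. Large streaming reads
    (>128kB by default) go straight to S3 and are treated as free here."""
    return (writes_gb * WRITE_PER_GB
            + cached_reads_gb * CACHED_READ_PER_GB
            + cache_size_gb * CACHE_STORAGE_PER_GB_MONTH)

# Example: 1 TB written, 500 GB of small cached reads, 100 GB resident cache
print(round(monthly_cost(1000, 500, 100), 2))  # 60 + 15 + 30 = 105.0
```

Note how the write charge dominates for write-heavy workloads, which is the dealbreaker mentioned above.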

thomas_fa 6 hours ago | parent | next [-]

Thanks for the analysis. Interestingly, when we first released our low-latency S3-compatible storage (1M IOPS, p99 ~5ms)[1], a lot of people asked the same question: why bring file system semantics (atomic object/folder rename) to S3? We also got feedback from people who really need FS semantics, and then added POSIX FS support.

AWS S3FS uses the normal FUSE interface, which is quite heavy due to the inherent overhead of copying data back and forth between user space and kernel space; that was our initial concern when we tried to add POSIX support to our original object storage design. Fortunately, we found and open-sourced a solution [2]: using FUSE_OVER_IO_URING + FUSE_PASSTHROUGH, we can keep the same high-performance architecture of our original object storage design. We'd like to put out a new blog post explaining the details and our performance numbers if anyone is interested.

[1] https://fractalbits.com/blog/why-we-built-another-object-sto...

[2] https://crates.io/crates/fractal-fuse

deepsun 6 hours ago | parent | prev | next [-]

> directly streamed from the underlying S3 bucket, which is free.

No reads from S3 are free. All outgoing traffic from AWS is charged no matter what.

simtel20 4 hours ago | parent [-]

Reads from S3 via an S3 endpoint inside a VPC to an interface inside that VPC are not billed.

ktimespi 8 hours ago | parent | prev | next [-]

This was my concern too. The whole point of using S3 as a file system instead of EBS / EFS (for me at least) is to minimize cost and I don't really see why I would use this instead of s3fs.

avereveard 4 hours ago | parent [-]

Probably there's some tradeoff at high client counts, or if you seek into files to read partial data.

the8472 12 hours ago | parent | prev [-]

> Large reads (>128kB) get directly streamed from the underlying S3 bucket, which is free.

Always uncached? S3 has pretty bad latency.

MontyCarloHall 11 hours ago | parent [-]

The threshold at which the cache gets used is configurable, with 128kB the default. The assumption is that any read larger than the threshold will be a long sustained read, for which latency doesn't matter too much. My question is, do reads <128kB (or whatever the threshold is) from files >128kB get saved to the cache, or is it only used for files whose overall size is under the threshold? Frequent random access to large files is a textbook use case for a caching layer like this, but its cost will be substantial in this system.

the8472 10 hours ago | parent [-]

NVMe read latency is in the 10-100µs range for 128kB blocks. S3 is about 100ms. That's 3-4 OOMs. The threshold where the total read duration starts to dominate latency would be somewhere in the dozens to hundreds of megabytes, not kilobytes.
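The crossover point can be sketched with a quick calculation: the total read duration starts to dominate once transfer time exceeds S3's ~100ms first-byte latency. The 100ms figure is from the comment above; the sustained-throughput numbers are assumptions for illustration:

```python
# Where does transfer time overtake S3's first-byte latency?
# 100ms latency is from the comment; throughput figures are assumed.

LATENCY_S = 0.100  # ~100ms S3 time-to-first-byte

for throughput_mbps in (100, 1000):  # assumed sustained MB/s per stream
    # Transfer time equals latency when size = latency * throughput
    crossover_mb = LATENCY_S * throughput_mbps
    print(f"{throughput_mbps} MB/s -> latency dominates below {crossover_mb:.0f} MB")
```

At an assumed 100 MB/s to 1 GB/s per stream this puts the crossover in the 10-100 MB range, consistent with "dozens to hundreds of megabytes" rather than kilobytes.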

MontyCarloHall 10 hours ago | parent | prev | next [-]

I agree, it's an oddly low threshold. The latency differential of NFS vs. S3 is a couple OOMs, so a threshold of ~10MB seems more appropriate to me. Perhaps it's set intentionally low to avoid racking up immense EFS bills? Setting it higher would effectively mean getting billed $0.03/GB for a huge fraction of reads, which is untenable for most people's applications.

antonvs 10 hours ago | parent | prev [-]

> NVMe read latency is in the 10-100µs range for 128kB blocks. S3 is about 100ms. That's 3-4 OOMs.

Aren't you comparing local in-process latency to network latency? That's multiple OOM right there.

the8472 10 hours ago | parent [-]

No, within the same DC network latency does not add that much. After all EFS also manages 600µs average latency. It's really just S3 that's slow. I assume some large fraction of S3 is spread over HDDs, not SSDs.