MontyCarloHall 13 hours ago
This is essentially S3FS using EFS (AWS's managed NFS service) as a cache layer for active data and small random accesses. Unfortunately, that also means it inherits some of EFS's eye-watering pricing:

- All writes cost $0.06/GB, since everything is first written to the EFS cache. For write-heavy applications, this could be a dealbreaker.
- Reads that hit the cache are billed at $0.03/GB. Large reads (>128 kB) are streamed directly from the underlying S3 bucket, which is free.
- Cache storage is charged at $0.30/GB/month. Even though everything is written to the cache (for consistency purposes), it seems to be used for persistent storage only of small files (<128 kB), so this shouldn't cost too much.
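To get a rough sense of scale, the per-GB rates above can be plugged into a quick back-of-the-envelope model. The rates are the ones quoted in the comment; the workload numbers below are purely illustrative assumptions:

```python
# Back-of-the-envelope cost model for the EFS-cache charges described above.
# Rates come from the comment; workload sizes are made-up example numbers.
WRITE_PER_GB = 0.06                 # every write lands in the EFS cache first
CACHED_READ_PER_GB = 0.03           # reads served from the cache
CACHE_STORAGE_PER_GB_MONTH = 0.30   # cache capacity, per month

def monthly_cost(gb_written, gb_cached_reads, gb_cache_resident):
    """Estimate one month's cache-related cost. Large (>128 kB) reads
    streamed straight from S3 are treated as free here, per the comment."""
    return (gb_written * WRITE_PER_GB
            + gb_cached_reads * CACHED_READ_PER_GB
            + gb_cache_resident * CACHE_STORAGE_PER_GB_MONTH)

# Example: 500 GB written, 200 GB of cached reads, 50 GB of small files resident
print(monthly_cost(500, 200, 50))  # 30 + 6 + 15 = 51.0
```

Note that writes dominate quickly: under these assumptions, a workload that rewrites 1 TB per month pays ~$60 in write charges alone before any storage or read costs.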
thomas_fa 6 hours ago
Thanks for the analysis. Interestingly, when we first released our low-latency S3-compatible storage (1M IOPS, p99 ~5 ms) [1], a lot of people asked the same question: why bring file system semantics (atomic object/folder rename) to S3? We also got feedback from people who really need FS semantics, so we then added POSIX FS support.

AWS S3FS uses the normal FUSE interface, which is quite heavy due to the inherent overhead of copying data back and forth between user space and kernel space; that was our initial concern when we set out to add POSIX support to the original object storage design. Fortunately, we found and open-sourced a good solution [2]: using FUSE_OVER_IO_URING + FUSE_PASSTHROUGH, we can keep the same high-performance architecture as our original object storage design. We'd like to put out a new blog post explaining more details and revealing our performance numbers if anyone is interested.

[1] https://fractalbits.com/blog/why-we-built-another-object-sto...
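The atomic-rename gap mentioned above is easy to see in miniature: a plain object store has no rename primitive, so clients emulate it with copy-then-delete, and a concurrent reader can observe the intermediate state. A minimal sketch with an in-memory dict standing in for a bucket (this is an illustration only, not FractalBits' or AWS's actual API):

```python
# Toy in-memory "bucket". Plain S3 has no rename, so clients emulate it
# with copy + delete -- two separate operations, not one atomic step.
bucket = {"logs/a.txt": b"hello"}

def rename_non_atomic(bucket, src, dst):
    bucket[dst] = bucket[src]   # step 1: copy the object to the new key
    # <-- a reader listing the bucket here sees BOTH keys:
    #     the "rename" is observable mid-flight
    del bucket[src]             # step 2: delete the original key

rename_non_atomic(bucket, "logs/a.txt", "logs/b.txt")
print(sorted(bucket))  # → ['logs/b.txt']
```

A filesystem-semantics layer closes this window by making the rename a single atomic metadata operation, which is what POSIX `rename(2)` guarantees.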
deepsun 6 hours ago
> directly streamed from the underlying S3 bucket, which is free.

No reads from S3 are free. All outgoing traffic from AWS is charged, no matter what.
ktimespi 8 hours ago
This was my concern too. The whole point of using S3 as a file system instead of EBS/EFS (for me, at least) is to minimize cost, and I don't really see why I would use this instead of s3fs.
| |||||||||||||||||||||||||||||||||||||||||||||||
the8472 12 hours ago
> Large reads (>128kB) get directly streamed from the underlying S3 bucket, which is free.

Always uncached? S3 has pretty bad latency.