MontyCarloHall 11 hours ago

Since EFS is just an NFS mount, I wonder if you could do this yourself by attaching an NVMe volume to your instance and setting up something like cachefilesd on the NFS mount, pointed to the NVMe.

Would

   mkfs.ext4 /dev/nvme0n1 && \
   mount /dev/nvme0n1 /var/cache/fscache && \
   systemctl start cachefilesd && \
   mount -t s3files -o fsc fs-0aa860d05df9afdfe:/ /home/ec2-user/s3files
work out of the box? It does for EFS. It hardly seems worth it to offer a managed service that's effectively four shell commands, but this is AWS we're talking about.
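(For the fsc flag to actually cache anything, the cachefilesd daemon has to be running with its cache directory on that NVMe volume. A minimal /etc/cachefilesd.conf, assuming the mount point above; the culling thresholds shown are the package defaults:)

```
# cache lives on the NVMe volume mounted at /var/cache/fscache
dir /var/cache/fscache
tag mycache
# begin/accelerate/stop culling old cache entries as free block space shrinks
brun 10%
bcull 7%
bstop 3%
```

Start it with `systemctl enable --now cachefilesd` before mounting with -o fsc.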
jitl 9 hours ago

AWS's [docs on EFS performance](https://docs.aws.amazon.com/efs/latest/ug/performance-tips.h...) say:

> Don't use the following mount options:

> - fsc – This option enables local file caching, but does not change NFS cache coherency, and does not reduce latencies.

If the S3 Files sync logic ran client-side, we could almost entirely avoid file-access latency for cached files, and avoid paying for expensive new EFS storage. I already pay for a lot of NVMe disks; let me just use those!

MontyCarloHall 8 hours ago

>This option enables local file caching, but does not change NFS cache coherency, and does not reduce latencies.

That's true for any NFS setup, not just EFS. The benefit of local NFS caching is speeding up repeated reads of large, immutable files, where per-request latency is negligible relative to transfer time. I'm not sure why AWS specifically dissuades users from enabling caching, since it's not like bandwidth to an EFS volume is even in the ballpark of EBS/NVMe bandwidth.
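A quick way to tell whether FS-Cache is actually absorbing reads is the kernel's own counters (assuming stats support is compiled in; the field names vary by kernel version):

```shell
# cumulative FS-Cache hit/miss counters; absent if CONFIG_FSCACHE_STATS is off
cat /proc/fs/fscache/stats 2>/dev/null || echo "fscache stats not enabled on this kernel"
```

Read a large file off the mount twice; on the second pass the hit counters should climb while traffic to the server stays flat.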