jamesblonde 6 days ago

Maybe AWS could start by making fast NVMes available - without requiring multi TB disks just to get 1 GB/s. S3FS experiments were run on 14 GB/s NVMe disks - an order of magnitude higher throughput than anything available in AWS today.

SSDs Have Become Ridiculously Fast, Except in the Cloud: https://news.ycombinator.com/item?id=39443679

kridsdale1 6 days ago | parent | next [-]

On my home LAN, with a 10 Gbps fiber link between my MacBook Pro and a server 10 feet away, I get about 1.5 Gbps, versus the ~50 Gbps the disks manage locally. (Bits, not bytes.)

I worked this out to the macOS SMB implementation really sucking. I set up an NFS driver and it got about twice as fast, but it's annoying to mount and use, and still far from the disks' capabilities.

I’ve mostly resorted to abandoning the network (after large expense) and using Thunderbolt and physical transport of the drives.
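For context (my own arithmetic, not from the thread), the bit-rates quoted above convert to bytes like this:

```shell
# Figures quoted above, in Gbit/s (decimal units assumed).
link_gbps=1.5    # observed SMB throughput on the 10 Gbps LAN
disk_gbps=50     # local read speed of the disks
# Gbit/s -> MB/s: multiply by 1000 (Gb -> Mb), divide by 8 (bits -> bytes)
awk -v l="$link_gbps" -v d="$disk_gbps" \
  'BEGIN { printf "SMB: %.1f MB/s, local: %.1f MB/s\n", l*1000/8, d*1000/8 }'
```

So SMB is delivering roughly 187.5 MB/s against disks capable of about 6250 MB/s.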

dundarious 6 days ago | parent | next [-]

SMB/CIFS is an incredibly chatty, synchronous protocol. There are/were entire products built around mitigating this when trying to use it over high-latency satellite links (the US military did/does this).

greenavocado 6 days ago | parent | prev [-]

Is NFS out of the question?

kridsdale1 6 days ago | parent [-]

I have set it up but it’s not easy to get drivers working on a Mac.

insaneirish 5 days ago | parent [-]

What particular drivers are you referring to? NFS is natively supported in macOS...

olavgg 5 days ago | parent [-]

That is true, though the implementation is weird.

I mount my nfs shares like this: sudo mount -t nfs -o nolocks -o resvport 192.168.1.1:/tank/data /mnt/data

-o nolocks: Disables file locking on the mounted share. Useful if the NFS server or client does not support locking, or if there are issues with lock daemons. On macOS this is often necessary because lockd can be flaky.

-o resvport: Tells the NFS client to use a reserved port (<1024) for the connection. Some NFS servers (like some Linux configurations or *BSDs with stricter security) only accept requests from clients using reserved ports (for authentication purposes).
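For what it's worth, the same options can go in /etc/fstab so the share mounts without the manual command; macOS's automounter picks up NFS entries from fstab. A sketch reusing the server path above (the mount point is the same hypothetical one from the mount command):

```
# /etc/fstab (macOS automountd reads NFS entries from here)
192.168.1.1:/tank/data /mnt/data nfs nolocks,resvport 0 0
```

After editing, `sudo automount -cv` reloads the automounter maps.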

__turbobrew__ 6 days ago | parent | prev [-]

There are i4i instances in AWS which can get you a lot of IOPS with a smaller disk.

jamesblonde 5 days ago | parent | next [-]

Had a look - Baseline disk throughput is 78.12 MB/s. Max throughput (30 mins/day) is 1250 MB/s.

The NVMe I bought for 150 dollars with 4 TB capacity gives me 6000 MB/s sustained.

https://docs.aws.amazon.com/ec2/latest/instancetypes/so.html
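Rough arithmetic on what those figures mean in practice (my own back-of-envelope, using the numbers quoted above):

```shell
# 78.12 MB/s baseline all day, 1250 MB/s burst for 30 min/day,
# vs 6000 MB/s sustained on a local NVMe.
awk 'BEGIN {
  printf "baseline, 24h:  %.2f TB\n", 78.12 * 86400 / 1e6
  printf "burst, 30 min:  %.2f TB\n", 1250  * 1800  / 1e6
  printf "local NVMe, 1h: %.2f TB\n", 6000  * 3600  / 1e6
}'
```

A full day at baseline moves less data than the local NVMe moves in twenty minutes.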

sgarland 5 days ago | parent | next [-]

That’s on the smallest instance. I’m sure there’s a reason they offer it, but I can’t think of why. On the largest instance (which IME is what people use with these), it’s 5000 MBps. The newer i7ie max out at 7500 MBps.

__turbobrew__ 5 days ago | parent | prev [-]

You are incorrect; the numbers you quoted are EBS volume performance. iX instances have directly attached NVMe volumes, which are separate from EBS.

> NVMe i bought for 150 dollars

Sure, now cost out the rest of the server, the racks, the colocation space for racks, power, multiple AZ redundancy, a clos network fabric, network peering, the spare hardware for failures, off site backups, supply chain management, a team of engineers to design the system, a team of staff to physically rack new hardware and unrack it, a team of engineers to manage the network, on call rotations for all those teams.

Sure, the NVMe is just $150 bro.

jamesblonde 3 days ago | parent [-]

You claim I am incorrect, but you don't provide a reference or numbers, and I couldn't find any.

__turbobrew__ 3 days ago | parent [-]

AWS doesn't provide throughput numbers for the NVMe on iX instances. You have to look at benchmarks or test it out yourself. Similar to packets-per-second limits, which are not published either and can only be inferred through benchmarks.
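One common way to run that measurement yourself is fio. A sketch of a job file for a sequential-read throughput test; the device path is an assumption (find the instance-store device on your instance with `lsblk` or `nvme list`), and note that a read test against the raw device is safe but a write test would destroy its contents:

```
; sketch: sequential-read throughput on an instance-store NVMe
; device path below is an assumption -- check lsblk / nvme list first
[seqread]
filename=/dev/nvme1n1
ioengine=libaio
direct=1
rw=read
bs=1M
iodepth=32
runtime=60
time_based=1
```

Run it with `fio seqread.fio`; the reported bandwidth is the sustained throughput of the drive.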

ashu1461 5 days ago | parent | prev [-]

Are these attached directly to your server or hosted separately?

huntaub 5 days ago | parent [-]

i-series instances have direct-attached drives