mattrobenolt | 3 days ago
https://docs.aws.amazon.com/AWSEC2/latest/UserGuide/ssd-inst...

Not all AWS instance types support local NVMe drives, and it's not the same as normal attached storage. I'm not really sure your arguments are in good faith here, though. This is just not a configuration you can trivially build yourself while maintaining durability and HA.

There's a lot of hype going the exact opposite direction, toward more separation of storage and compute. This is our counter to that. We think even EBS is bad. This isn't a setup that's naturally going to beat a "real server" that also has local NVMe drives, or whatever you'd build yourself. It's just not what things like RDS or Aurora do: most of those rely on EBS, which is significantly worse than local storage. We aren't really claiming we've invented something new here. It's just unique in the managed database space.
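To make the first point concrete, here's a rough boto3 sketch that lists which instance types actually ship with local NVMe instance storage. It's just illustrative: it assumes configured AWS credentials, and the region choice is arbitrary.

    import boto3

    # DescribeInstanceTypes lets you filter for instance types that
    # include local instance storage at all.
    ec2 = boto3.client("ec2", region_name="us-east-1")
    pages = ec2.get_paginator("describe_instance_types").paginate(
        Filters=[{"Name": "instance-storage-supported", "Values": ["true"]}]
    )

    for page in pages:
        for it in page["InstanceTypes"]:
            info = it.get("InstanceStorageInfo", {})
            # NvmeSupport is "required", "supported", or "unsupported";
            # anything else is EBS-only from the OS's point of view.
            if info.get("NvmeSupport") in ("required", "supported"):
                print(f'{it["InstanceType"]}: {info.get("TotalSizeInGB")} GB local NVMe')

Run that and you'll see it's a specific subset of families (i-series, some m/r/c "d" variants, etc.), not something you get everywhere.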
ndriscoll | 3 days ago
Right, but as far as I know, the only instances that give you the full bandwidth of their NVMe drives are metal; everything else gives you a proportional fraction based on VM size. So for most developers, yes, it is hard to saturate an NVMe drive with, e.g., 8-16 cores. Now how about the 100+ cores you'd actually need to rent to get that full bandwidth?

I agree that EBS and the defaults RDS tries to push you into are awful for a database in any case: 3k IOPS or something absurd like that. But that's kind of the point: AWS sells that as "SSD" storage. Sure, it's SSD, but it's also 100-1000x slower than the SSDs most devs would think of. Their local "NVMe" is, AFAIK, also way slower than what that name is meant to evoke unless you're renting the largest instances.

Actually, showing scaling behavior with large instances might make PlanetScale look even better than its competitors on AWS, if you can scale further vertically before needing to go horizontal.
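If you want to feel the EBS-vs-NVMe gap yourself, a toy test like this is enough (Linux-only, single-threaded; the file path is hypothetical and assumes a pre-created multi-GB test file on the device). One synchronous stream mostly measures latency, so you'd need something like fio with a deep queue to hit headline IOPS, but even this shows the order-of-magnitude difference:

    import mmap
    import os
    import random
    import time

    PATH = "/mnt/test/bigfile"  # hypothetical file on the disk under test
    BLOCK = 4096
    SECONDS = 5

    # O_DIRECT bypasses the page cache so we measure the device, not RAM.
    fd = os.open(PATH, os.O_RDONLY | os.O_DIRECT)
    size = os.fstat(fd).st_size

    # O_DIRECT needs an aligned buffer; anonymous mmap is page-aligned.
    buf = mmap.mmap(-1, BLOCK)

    ops = 0
    deadline = time.monotonic() + SECONDS
    while time.monotonic() < deadline:
        offset = random.randrange(size // BLOCK) * BLOCK
        os.preadv(fd, [buf], offset)
        ops += 1

    os.close(fd)
    print(f"~{ops / SECONDS:,.0f} random 4 KiB read IOPS (1 thread, queue depth 1)")

On gp3 at its baseline you'll bump into the ~1 ms network round trip per read; on a local NVMe the same loop comes back in tens of microseconds.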