cyberpunk 4 days ago

This isn’t any kind of answer; it’s a bunch of non-statements.

How is this any different than RDS on NVMe disks?

With a name like PlanetScale, I assumed it would be some multi-master setup?

benjiro 3 days ago | parent | next [-]

PlanetScale used to run Postgres on AWS with network-attached storage. So every time the DB hits the disk, the read goes over the network: read 4 KB, wait on the network; read another 4 KB, wait again. Your latency is milliseconds instead of the microseconds of local storage. Where a local NVMe can do around 100k 4 KB reads per second, network-attached storage might do 1k (just an example).
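Back-of-the-envelope, with illustrative latency numbers (assumptions, not benchmarks), serial reads are bounded by 1/latency:

```python
# Rough sketch: for serial (queue-depth-1) 4 KB reads, throughput is
# capped at 1 / latency. The latencies below are illustrative guesses.
local_nvme_latency_s = 100e-6     # ~100 microseconds per read (local NVMe)
network_storage_latency_s = 1e-3  # ~1 millisecond per read (network storage)

local_iops = 1 / local_nvme_latency_s        # ~10,000 serial reads/sec
network_iops = 1 / network_storage_latency_s # ~1,000 serial reads/sec

print(f"local: {local_iops:.0f} reads/s, network: {network_iops:.0f} reads/s")
```

Real drives get far higher numbers by queueing many reads in parallel, but a query that chases one page at a time (e.g. walking a B-tree) pays this serial latency on every hop.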

The problem is that there are not a lot of solutions for scaling Postgres beyond a single server. So if your DB grows to 100 TB, you have an issue, as AWS does not provide a 100 TB local NVMe solution, only network storage.

Here comes Neki, or whatever they named it: their own alternative to Vitess (see MySQL), which is a solution that allows MySQL to scale horizontally from 1 to 1000s of servers, each with its own local storage.

So PlanetScale made their own solution that lets them scale horizontally across dozens or hundreds of AWS VMs, each with its own local storage, to give you those 100, 200, 500 TB of storage space without the need for network-based storage.
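The core idea of that kind of horizontal scaling can be sketched as hash-based shard routing (a hypothetical sketch, not PlanetScale's or Vitess's actual code): each shard is an independent server with its own local disks, and a router maps every sharding key to exactly one of them.

```python
import hashlib

# Hypothetical shard pool: each name stands for an independent Postgres
# server with its own local NVMe storage.
SHARDS = ["pg-shard-0", "pg-shard-1", "pg-shard-2", "pg-shard-3"]

def shard_for(key: str) -> str:
    """Route a row to a shard by hashing its sharding key."""
    digest = hashlib.sha256(key.encode()).digest()
    index = int.from_bytes(digest[:8], "big") % len(SHARDS)
    return SHARDS[index]

# The same key always lands on the same shard, so reads find
# the data that writes put there.
assert shard_for("customer:42") == shard_for("customer:42")
print(shard_for("customer:42"))
```

Total capacity then scales with the number of shards; the hard parts (resharding, cross-shard queries and transactions) are what systems like Vitess exist to handle.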

There are other solutions like CockroachDB, YugabyteDB, and TiDB that also allow for horizontal scaling, but none are 100% Postgres compatible (especially with extensions).

Side note: the guy who wrote Vitess for MySQL is also working on Multigres (https://multigres.com/), a solution that does the same thing. Aka Vitess for Postgres.

So yeah, hope this helps explain it a bit. If you're not into dealing with DB scaling, the way they wrote it is really not helpful.

parthdesai 3 days ago | parent [-]

> Side note: the guy who wrote Vitess for MySQL is also working on Multigres (https://multigres.com/), a solution that does the same thing. Aka Vitess for Postgres.

And he was also a founder of PlanetScale.

whizzter 4 days ago | parent | prev | next [-]

It's a sharded setup with somewhat fast-and-loose foreign key management: very good for performance, but not a drop-in replacement if you rely on your foreign keys being constrained/checked by the database.

sgarland 3 days ago | parent [-]

So perfect for most web dev companies, then.

“We handle FKs in the app for flexibility.”

“And how many orphaned rows do you have?”

“…”
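The failure mode is easy to demonstrate. Here's a minimal sketch using SQLite (whose FK enforcement is off by default, standing in for "the app handles FKs"): deleting a parent row silently leaves its children orphaned.

```python
import sqlite3

# With foreign key enforcement disabled (SQLite's default), the database
# happily deletes a parent row that child rows still reference.
conn = sqlite3.connect(":memory:")
conn.executescript("""
    CREATE TABLE users (id INTEGER PRIMARY KEY);
    CREATE TABLE orders (id INTEGER PRIMARY KEY,
                         user_id INTEGER REFERENCES users(id));
    INSERT INTO users VALUES (1);
    INSERT INTO orders VALUES (10, 1);
    DELETE FROM users WHERE id = 1;  -- no error without enforcement
""")

# Count child rows whose parent no longer exists.
orphans = conn.execute("""
    SELECT COUNT(*) FROM orders o
    LEFT JOIN users u ON u.id = o.user_id
    WHERE u.id IS NULL
""").fetchone()[0]
print(orphans)  # 1 orphaned order
```

With `PRAGMA foreign_keys = ON` (or a real FK constraint in Postgres), that DELETE would fail instead, and the count stays zero.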

rcrowley 3 days ago | parent [-]

The question isn't how many orphaned rows you have; it's whether it matters. Databases are wonderful, but they cannot maintain every invariant and they cannot express a whole application. They're one tool in the belt.

sgarland 3 days ago | parent | next [-]

> cannot express a whole application

Not with that attitude: https://docs.postgrest.org/en/v13/index.html

Orphaned rows can very much matter for data privacy concerns, which is also where I most frequently see this approach failing.

jashmatthews 3 days ago | parent | prev [-]

Most companies can afford not to give a shit until they hit SOC 2 or GDPR compliance, and then suddenly orphaned data is a giant liability.

rcrowley 3 days ago | parent | prev [-]

The short answer is that RDS doesn't run on local NVMe disks; it runs on EBS.