ritzaco 4 days ago

This seems to be mainly aimed at existing PlanetScale customers.

> To create a Postgres database, sign up or log in to your PlanetScale account, create a new database, and select Postgres.

It does mention the sign up option but doesn't really give me much context about pricing or what it is. I know a bit, but I get confused by different database offerings, so it seems like a missed opportunity to give me two more sentences of context and some basic pricing - what's the easiest way for me to try this if I'm curious?

On the pricing page I can start selecting regions and moving sliders to create a plan from $39/month and up, but I couldn't easily find an answer to whether there's a free trial or a cheaper way to 'give it a spin' without committing.

intelekshual 4 days ago | parent | next [-]

PlanetScale (famously?) deprecated their free "Hobby" tier (plus fired their sales & marketing teams) back in 2024 to achieve profitability

https://planetscale.com/blog/planetscale-forever

rimprobablyly 4 days ago | parent [-]

> famously?

Notoriously

diordiderot 3 days ago | parent [-]

Could you explain?

rimprobablyly 3 days ago | parent [-]

Look up the definition of famous and notorious. What needs explaining?

dangoodmanUT 4 days ago | parent | prev [-]

PlanetScale isn't really designed for the "I'll give it a go" casual customer that might use Supabase

It's designed for businesses that need to haul ass

game_the0ry 4 days ago | parent | next [-]

I am not experienced enough to know the performance differences between planetscale and supabase, but...

> It's designed for businesses that need to haul ass

Could you elaborate on what you meant by this, for my education?

samlambert 4 days ago | parent | next [-]

Performance differences between PlanetScale and Supabase: https://planetscale.com/benchmarks/supabase

ndriscoll 4 days ago | parent [-]

> Businesses that need to haul ass

> Benchmarks are done on a dual-core VM with "unlimited" IOPS

I'd be interested in a comparison with a pair of Beelink SER5 Pros ($300 each) in master-slave config.

parthdesai 3 days ago | parent [-]

> > Benchmarks are done on a dual-core VM with "unlimited" IOPS

Unlimited is a feature here, no need to be snarky. They famously went against the accepted practice of separating storage from compute, and as a result latency drops by an order of magnitude and you get unlimited IOPS.

ndriscoll 3 days ago | parent [-]

You do not get unlimited IOPS with any technology, but you especially do not get it in AWS, where the machines seem to be? Writing "unlimited" is completely unserious. If it's 67k read/33k write at 4k qd32 or something just say so. Or if you're actually getting full bandwidth to a disk with a 2 core VM (doubt), say 1.5M or whatever.
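
For reference, numbers like that fall straight out of a quick fio run. A rough sketch in Python, assuming fio is installed and TARGET points at a device or scratch file you're allowed to read from; the 4k/qd32 random-read job parameters mirror the figures above:

    # Rough sketch: measure 4k random-read IOPS at queue depth 32 with fio.
    # TARGET is a hypothetical path; reading a raw device is safe, writing is not.
    import json
    import subprocess

    TARGET = "/dev/nvme0n1"  # adjust for your machine

    result = subprocess.run(
        [
            "fio",
            "--name=randread-4k-qd32",
            "--rw=randread",        # random reads
            "--bs=4k",              # 4 KiB blocks
            "--iodepth=32",         # queue depth 32
            "--ioengine=libaio",
            "--direct=1",           # bypass the page cache
            "--runtime=30",
            "--time_based",
            f"--filename={TARGET}",
            "--output-format=json",
        ],
        capture_output=True, text=True, check=True,
    )

    report = json.loads(result.stdout)
    print(f"read IOPS: {report['jobs'][0]['read']['iops']:,.0f}")

Swap --rw=randread for --rw=randwrite to get the write-side figure.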

mattrobenolt 3 days ago | parent [-]

Unlimited in this context just means you're going to be CPU-limited before you hit limits on IOPS. It's effectively not possible to be bottlenecked on IOPS.

That might not be 100% true, but I've never seen an RDBMS able to saturate IOPS on a local NVMe. It takes quite specialized software to leverage every ounce of IOPS without being CPU-bottlenecked first. Postgres and MySQL are not it.

ndriscoll 3 days ago | parent [-]

What does "local NVMe" mean for you? AFAIK in AWS, if you have a 2-core VM you're getting ~3% of a single disk's worth of IOPS for their attached storage. Technically NVMe, but not generally what people think of, when a laptop can do 50x more IO. The mini PC I mentioned has 4x the core count and... well, who knows how much more IO capacity, but it seems like it should be able to trounce both. Obviously an even more interesting comparison would be... a real server. Why is a database company running benchmarks on something comparable to my low-end phone?

Anyway, saying unlimited is absurd. If you think it's more than you need, say how much it is and say that's more than you need. If you have infinite IOPS why not do the benchmark on a dataset that fits in CPU cache?
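
To put that "~3% of a disk" figure above in concrete terms, the arithmetic is just proportional slicing (the host size and IOPS numbers below are illustrative assumptions, not AWS specs):

    # Illustrative only: instance-store performance is carved up roughly in
    # proportion to instance size, so a small VM sees a small slice of the drive.
    host_vcpus = 64             # assumed size of the full host
    host_read_iops = 1_000_000  # assumed full-drive random-read IOPS
    vm_vcpus = 2

    share = vm_vcpus / host_vcpus
    print(f"{share:.1%} of the host -> ~{host_read_iops * share:,.0f} read IOPS")
    # 3.1% of the host -> ~31,250 read IOPS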

mattrobenolt 3 days ago | parent | next [-]

https://docs.aws.amazon.com/AWSEC2/latest/UserGuide/ssd-inst...

Not all AWS instance types support NVMe drives. It's not the same as normal attached storage.

I'm not really sure your arguments are in good faith here tho.

This is just not a configuration you can trivially do while maintaining durability and HA.

There's a lot of hype going in the exact opposite direction, toward more separation of storage and compute. This is our counter to that. We think even EBS is bad.

This isn't a setup that is naturally just going to beat a "real server" that also has local NVMe, or whatever you'd do yourself. It's just not what things like RDS or Aurora do; most of those rely on EBS, which is significantly worse than local storage. We aren't really claiming we've invented something new here. It's just unique in the managed database space.

ndriscoll 3 days ago | parent [-]

Right, but as far as I know, the only instances that give you full bandwidth with NVMe drives are metal. Everything else gives you a proportional fraction based on VM size. So for most developers, yes it is hard to saturate an NVMe drive with e.g. 8-16 cores. Now how about the 100+ cores you actually need to rent to get that full bandwidth?

I agree that EBS and the defaults RDS tries to push you into are awful for a database in any case. 3k IOPS or something absurd like that. But that's kind of the point: AWS sells that as "SSD" storage. Sure it's SSD, but it's also 100-1000x slower than the SSDs most devs would think of. Their local "NVMe" is AFAIK also way slower than what it's meant to evoke in your mind unless you're getting the largest instances.

Actually, showing scaling behavior with large instances might make PlanetScale look even better than competitors in AWS if you can scale further vertically before needing to go horizontal.

mattrobenolt 3 days ago | parent [-]

Right, but I think you're kinda missing a lot of the tangible benefits here. This IMO is just reinforcing the idea of "unlimited" IOPS. You can't physically use the totality of IOPS available on the drives.

Even if you can't saturate them, even with a low CPU core count, latency is drastically better, which is highly important for database performance.

Having low latency is tangibly more important than throughput or number of IOPS once your dataset is larger than RAM no matter how many CPU cores you have.

Chasing down p95s and above is where NVMe really shines, purely from having an order of magnitude (or more) less latency.

Lower latency also means less iowait time. All of this just leads to better CPU time utilization on your database.
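
Back-of-the-envelope, the effect is easy to see: once a query starts missing the buffer pool, per-read latency multiplies straight into query time (all numbers below are made up for illustration):

    # Illustrative arithmetic: per-read latency vs. time a query spends in iowait.
    pages_touched = 200   # pages a hypothetical query reads
    miss_rate = 0.05      # fraction that miss the buffer pool
    reads = pages_touched * miss_rate

    for storage, latency_ms in [("network-attached storage", 1.0),
                                ("local NVMe", 0.1)]:
        print(f"{storage}: ~{reads * latency_ms:.1f} ms waiting on I/O")
    # network-attached storage: ~10.0 ms waiting on I/O
    # local NVMe: ~1.0 ms waiting on I/O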

ndriscoll 3 days ago | parent [-]

How does "AWS limits IOPS. 'NVMe' drives are not as fast as the drives you're used to unless you rent the biggest possible servers" reinforce "unlimited" IOPS?

Yes there are benefits like lower latency, which is often measured in terms of qd1 IOPS.
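
The two are roughly two views of the same number: at queue depth 1 there's only ever one request in flight, so IOPS is just the reciprocal of average completion latency (the latencies below are illustrative):

    # qd1 IOPS ~= 1 / average latency, since requests are serialized at qd1.
    for latency_us in (1000, 100, 20):   # illustrative average latencies
        print(f"{latency_us:>5} us -> ~{1_000_000 / latency_us:,.0f} IOPS at qd1")
    #  1000 us -> ~1,000 IOPS at qd1
    #   100 us -> ~10,000 IOPS at qd1
    #    20 us -> ~50,000 IOPS at qd1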

parthdesai 3 days ago | parent | prev [-]

They literally state this on their Metal offering:

> Unlimited I/O — Metal's local NVMe drives offer higher I/O bandwidth than network-attached storage. You will run out of CPU long before you use all your I/O bandwidth.

https://planetscale.com/metal#benefits-of-metal

maxenglander 4 days ago | parent | prev [-]

In addition to the point Sam made about performance, PlanetScale's Vitess (MySQL) offers out-of-the-box horizontal scalability, which means we can maintain extremely good performance as your dataset and QPS grow to massive scale: https://planetscale.com/case-studies/cash-app. We will be bringing the same capability to Postgres later on.

Our uptime and reliability are also higher than what you might find elsewhere. It's not uncommon for companies paying lots of money to operate elsewhere to migrate to PlanetScale for that reason.

We're a serious database for serious businesses. If a business can't afford to spend $39/mo to try PlanetScale, they may be happier operating elsewhere until their business grows to a point where they are running into scaling and performance limits and can afford (or badly need, depending on the severity of those limits) to try us out.

ritzaco 4 days ago | parent | prev [-]

Businesses that 'need to haul ass' usually still want to try something out before buying it. That doesn't need to be a free plan, but it's common to offer some trial period to new users.

It's also totally OK if PlanetScale doesn't do this and $39/month _is_ the best way to try them out; I just think it would be good for them to make explicit in the article what I should do if I think I might want it but want to try it first.

rcrowley 4 days ago | parent | next [-]

All our list prices are monthly and our bills are actually even finer-grained - there's no commitment to pay for a database longer than you run it.

If you do decide to operate on PlanetScale long-term, check out <https://planetscale.com/pricing> for consumption commitment discounting and other options that might make sense for your company.

dangoodmanUT 4 days ago | parent | prev | next [-]

They try it by contacting sales and setting up a pilot, not a self-service free trial

stronglikedan 4 days ago | parent | prev [-]

> but it's common to offer some trial period to new users

That is rather uncommon for B2B.