Does Postgres Scale? (dbos.dev)
63 points by KraftyOne 5 hours ago | 30 comments
daneel_w 3 hours ago | parent | next [-]

"Overall, we find a Postgres server can handle up to 144K of these writes per second. That’s a lot, equivalent to 12 billion writes per day."

Based on a problem I'm facing with Postgres today, I wonder whether this really scales as linearly as the article makes it out to.

We're in the middle of evaluating Postgres as a replacement for MySQL, and we see a notable slow-down in plain multi-row inserts, caused by index growth, as soon as the table reaches just a couple of dozen million rows. It's an uncomplicated, flat table (no constraints, foreign keys, etc.) of medium width, about 10-15 columns, with a handful of non-composite btree indices - and/or hash indices; we've tried mixing and matching just to see what happens. Ingestion drops to less than half before the table even reaches 50m rows. At 100m rows insertion performance is down to a fraction, and from there it only gets worse as the table and its indices grow - as if there's some specific cut-off point where everything goes awry.

However, if we simply remove all indices from the table, Postgres will happily insert hundreds of millions of rows at a steady, near-identical pace from start to end. The exact same table and indices on MySQL, matched as closely as we can between the two systems and running on the same OS and hardware, maintain more or less linear insertion performance well beyond 500m rows.

Now, there's a lot to say about the whys and why-nots of keeping tables of this size in an RDBMS and designing an application that relies on it working out, and probably a fair amount more about tuning Postgres' config, but we're stumped as to why Postgres' indexing performance falters this early when contrasted with InnoDB/MySQL. 50-100m rows really isn't much. We'd greatly appreciate it if anyone with insight could shed some light on this and maybe offer a few ideas to test out.

(add.: during these stress tests the hardware is nowhere close to overloaded; there's consistent headroom across memory, CPU, and disk I/O)

bijowo1676 27 minutes ago | parent | next [-]

The problem is table design and write amplification. Every row insert triggers an update to every index, so you get the classic amplification problem.

Separate your table into Cold (with all the indexes and bells and whistles) and Hot (a heap table with no indexes except the PK).

Insert as many rows as you want into the Hot heap, then move them in the background into Cold in batches, so that index maintenance is amortized across many rows instead of paid per row.

Another poster suggested partitioning; that's the same idea: separate hot and cold data into partitions and keep the hot partition as a heap.
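
A minimal sketch of that hot/cold split, using a hypothetical events table (names and columns are mine, not the poster's):

    -- Cold table: carries all the secondary indexes.
    CREATE TABLE events_cold (
        id         bigint PRIMARY KEY,
        created_at timestamptz NOT NULL,
        payload    text
    );
    CREATE INDEX ON events_cold (created_at);

    -- Hot table: PK only, so each insert maintains a single index.
    CREATE TABLE events_hot (LIKE events_cold INCLUDING DEFAULTS);
    ALTER TABLE events_hot ADD PRIMARY KEY (id);

    -- Background job: drain the hot table in batches so the cold
    -- table's index maintenance is amortized across many rows.
    WITH moved AS (
        DELETE FROM events_hot
        WHERE id IN (SELECT id FROM events_hot ORDER BY id LIMIT 10000)
        RETURNING *
    )
    INSERT INTO events_cold SELECT * FROM moved;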

giovannibonetti 3 hours ago | parent | prev | next [-]

With some extra admin work you can greatly increase your insert throughput, as long as the table load is comprised mostly of inserts (a sketch follows the references below):

1. Partition your table by range of a monotonic ID or timestamp. Note that the primary key will have to contain this column. A BIGINT id column should work fine.

2. Remove all the other indexes from the partitioned table. Add them to all the partitions except the latest one. This way the latest partition can endure a tough write load while the other ones work fine for reads.

3. Create an admin routine (perhaps with pg_cron) to create a new partition whenever the newest one is getting close to its limit. When the load moves on to the newer partition, add indexes concurrently to the old one.

4. You'll notice the newest partition will be optimized for writes but not reads. You can offset some of that by replacing BTREE secondary indexes with BRIN [1], particularly with the bloom operator class (not to be confused with Postgres' regular Bloom indexes [2]). BRIN is a family of indexes more optimized for writes than reads. If the partition is not too large, reading from it shouldn't be too bad.

5. Later you can merge partitions to avoid having too many of them. Postgres has commands for that, but I think they lock the whole table, so a safer bet is to copy small partitions into a new, larger one and swap them manually.

[1] https://www.postgresql.org/docs/current/brin.html [2] https://www.postgresql.org/docs/current/bloom.html
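
A rough sketch of steps 1-4, assuming a hypothetical metrics table partitioned by a monotonic BIGINT id (all names and range bounds are illustrative):

    -- The partition key must be part of the primary key.
    CREATE TABLE metrics (
        id  bigserial,
        ts  timestamptz NOT NULL,
        val double precision,
        PRIMARY KEY (id)
    ) PARTITION BY RANGE (id);

    CREATE TABLE metrics_p0 PARTITION OF metrics
        FOR VALUES FROM (0) TO (50000000);
    CREATE TABLE metrics_p1 PARTITION OF metrics
        FOR VALUES FROM (50000000) TO (100000000);

    -- Older, read-mostly partition: normal btree index.
    CREATE INDEX ON metrics_p0 (ts);

    -- Newest, write-heavy partition: a cheap BRIN index instead.
    CREATE INDEX ON metrics_p1 USING brin (ts);

    -- Once writes move on to a newer partition, backfill a btree
    -- without blocking (CONCURRENTLY works on individual partitions).
    CREATE INDEX CONCURRENTLY metrics_p1_ts_btree ON metrics_p1 (ts);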

subhobroto 2 hours ago | parent [-]

These are good suggestions, but I'm apprehensive they might come back and say they have 64 GB (or less) of RAM, or that they're using PostgreSQL on AWS RDS or something.

I asked them for specifics.

keithnz an hour ago | parent [-]

I don't think that really matters for their question, though: MySQL on the same hardware doesn't have the problem and Postgres does. Quite clearly it has something to do with indexes, and the question is what wall Postgres is running into that causes the drop-off at such low row counts. If the answer is just to get more RAM, that kind of implies Postgres is not really that scalable - especially if the drop-off is proportional to the number of rows.

andersmurphy 2 hours ago | parent | prev | next [-]

The problem is row locks and contention when using interactive transactions over the network. That can absolutely kill your performance with Postgres, and there's not really anything you can do to get around it (other than avoiding interactive transactions). [1]

[1] - https://andersmurphy.com/2025/12/02/100000-tps-over-a-billio...
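
For anyone unfamiliar with the term, an "interactive" transaction holds row locks across client round-trips; a hypothetical illustration (table and values are mine):

    BEGIN;
    -- Row lock acquired here and held while the application thinks
    -- and network round-trips happen.
    SELECT balance FROM accounts WHERE id = 1 FOR UPDATE;
    -- Any other transaction touching this row now waits.
    UPDATE accounts SET balance = balance - 10 WHERE id = 1;
    COMMIT;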

rconti 38 minutes ago | parent [-]

We had an interesting architecture situation at work. Puppet Enterprise uses a single Postgres server. The company had moved from a recommendation of using a single PuppetDB API node (which fell over at high load) to running a PuppetDB API server on each compiler node.

That, however, came with its own set of problems. Of course you have to tune for concurrent connections as you scale wider, but there were much more serious contention issues than you'd expect, and the compilation times were terrible too. It turned out to be because those transactions locked the DB during their (synchronous) operations, and we had a globally distributed set of compilers in order to serve globally distributed traffic.

The solution ended up being to run a separate cluster of API servers in the same region as the DB. The expensive calls from the compilers to the API servers were largely async HTTPS, so the compilers didn't have to wait on the API nodes, and the API nodes could talk to the DB synchronously with low latency.

justinclift an hour ago | parent | prev | next [-]

What's the underlying filesystem(s) you're using for the data storage?

subhobroto 3 hours ago | parent | prev [-]

You've given us some idea of the volume of your data, but there's no mention of what's ingesting it or how.

> during these stress tests the hardware is nowhere close to over-encumbered, and there's consistent headroom on both memory, CPU and disk I/O

This assertion is likely wrong - you're probably skipping over some metrics that hold clues to what we need to know. Here are some questions to get the discussion moving.

- Is this PostgreSQL managed or self-hosted?

Your mention of "consistent headroom on both memory, CPU and disk I/O" gives me hope that you're self-hosting, but I've heard the same thing in the past from people attempting to use RDS and wondering the same thing you are, so no assumptions.

- Are you using COPY or multi-row INSERT statements?

- How much RAM does that server have?

- What are fillfactor, max_wal_size and checkpoint_timeout set to?

- Is the WAL on NVMe?

- What does iostat show (or the I/O wait, wa, in top) during the slowdown?

- Are random UUIDs (part of) the index?

Have you posted to https://dba.stackexchange.com/ yet?

If I were you, I would create a GitHub repo with scripts that synthesize the data and reproduce the issue you're seeing.
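
A repro could start as small as this sketch (schema and batch sizes are assumptions, not the poster's; run it in psql with \timing on and compare early batches against batches past ~50m rows):

    CREATE TABLE ingest_test (
        id bigint PRIMARY KEY,
        k1 int,
        k2 text,
        ts timestamptz
    );
    CREATE INDEX ON ingest_test (k1);
    CREATE INDEX ON ingest_test (k2);
    CREATE INDEX ON ingest_test (ts);

    -- One 1m-row batch; repeat with shifted ranges and record the
    -- per-batch wall time as the table (and its indexes) grow.
    INSERT INTO ingest_test
    SELECT g, g % 1000, md5(g::text),
           now() + make_interval(secs => g)
    FROM generate_series(1, 1000000) AS g;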

jghn 3 hours ago | parent | prev | next [-]

It scales beyond the needs that most people have in most situations.

The constant problem is that "big scale" always means "larger than I've seen", so on any project larger than they've encountered before, people assume they need to pull out the big guns. People also worry about things like what happens if they really *do* scale 10 years from now.

Neither is a practical concern for nearly anyone who will ever face this decision.

And then yes, of course, some people have problems that actually can't be solved by Postgres. But verify this first, don't assume.

switchbak 2 hours ago | parent [-]

What gets me is that some people seem to ignore the very real cliff of complexity that ramps up the moment you move to eventual consistency. If you need it, you need it, but you have to bake those assumptions in everywhere - and they commonly break the default assumptions of anyone who doesn't have a lot of experience with it or hasn't architected their approach to work around it.

And in many cases it's those architectures that force more complexity and make it appear as though they face much bigger challenges than they do. Great for resume-driven development, but often you can get away with far less.

CubsFan1060 2 hours ago | parent | prev | next [-]

I thought this was a fun article from a couple months back: https://openai.com/index/scaling-postgresql/

oa335 2 hours ago | parent | prev | next [-]

They can adjust their checkpoint settings to increase throughput further - https://www.postgresql.org/docs/current/wal-configuration.ht...

KraftyOne 2 hours ago | parent [-]

Yes, this benchmark deliberately uses RDS defaults to make the comparison fairer/more general.

One warning--the setting that would increase throughput the most (synchronous_commit = off) sacrifices durability to do so.
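
For anyone who does want to move off the defaults, the relevant knobs look something like this (values are illustrative, not recommendations):

    ALTER SYSTEM SET max_wal_size = '16GB';   -- fewer forced checkpoints
    ALTER SYSTEM SET checkpoint_timeout = '15min';
    ALTER SYSTEM SET checkpoint_completion_target = 0.9;
    -- The big throughput win, at the cost of losing recent commits
    -- on a crash:
    -- ALTER SYSTEM SET synchronous_commit = off;
    SELECT pg_reload_conf();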

q3k 3 hours ago | parent | prev | next [-]

Yes, you can scale it quite well vertically.

But how about horizontally? It would be nice to have high availability, or even to be able to upgrade the OS and postgres itself without downtime.

levkk 16 minutes ago | parent | next [-]

Shameless plug[0].

[0] https://pgdog.dev

tuvix 3 hours ago | parent | prev | next [-]

I've only played around with it, but you can use Patroni, etcd and HAProxy to achieve this. It's a pain, though I believe there was some kind of Coolify-style open source application that does this for you - I can't for the life of me remember its name.

jrnkntl 3 hours ago | parent | next [-]

autobase[1] is the one I can think of

[1] https://github.com/autobase-tech/autobase

subhobroto 3 hours ago | parent | prev [-]

You might be thinking of Pigsty?

At least I hope you are! Nothing else has been as well battle-tested. Unfortunately, perhaps because of its name, it gets no face time on HN. Its last few mentions here barely received the attention they deserved.

levl289 3 hours ago | parent | prev | next [-]

Yep, this is what I think about when “scaling” is mentioned. Maybe I'm too distributed-compute-brained, but throwing CPU at a db isn't what I was hoping the answer would be.

_3u10 3 hours ago | parent [-]

So the point of distributed compute is to reduce the compute needed? I've generally found that distributed compute requires more compute than vertical scaling while getting clobbered by network bandwidth/latency.

Theoretically it requires 2 to 10x the compute; in practice, more like 100 to 500x.

literalAardvark 3 hours ago | parent [-]

The point of distributed computing is to do computing that you can't do on a vertically scaled system or to increase availability.

If you're doing it for other reasons it's usually a mistake.

raddan 11 minutes ago | parent [-]

The advice I've gotten is that you want to move computation to data that is already distributed. The cost of moving large amounts of data usually (though not always) dwarfs compute costs, so the performance win comes from distributing the computation and then (depending on the problem) centralizing the aggregate results.

literalAardvark 3 hours ago | parent | prev [-]

Practically trivial to do in 2026, even by hand, and there are a couple of ready-to-use solutions that make it automated.

If you're running it in Kubernetes with CloudNativePG it's even easier.

The only thing it doesn't do well is master-master replication, which is why most of these "does it scale" posts mostly talk about how slow writes are. And they are pretty slow.

cachius an hour ago | parent | prev | next [-]

And does Postgres backup scale?

subhobroto 4 hours ago | parent | prev | next [-]

DBOS is amazing when it comes to Durable Workflows. There are others in the space - the most popular one being Temporal - but I'd argue Temporal is also the most complicated one. I often say Temporal is like Kubernetes while DBOS is like `docker compose`. (And for those taking me literally: you can use DBOS in Kubernetes!)

I don't understand why DBOS is not nearly as popular as Temporal, but it has made a world of difference building Durable Queues and long-running Durable Workflows in Python (it supports other languages too).

As they show in this article, Postgres scales impressively well (4 billion workflows per day on a db.m7i.24xlarge, enough for most applications). That's why, if you have your PostgreSQL backup/restore strategy knocked out and dialed in, you should really take a close look at DBOS to handle your cloud-agnostic or self-hosted Durable Queues and Durable Workflows. It's an amazing piece of software founded by the original author of Ingres (a precursor to Postgres) - the story of DBOS itself is captivating. I believe it started from being unable to scale Spark job scheduling.

lelandbatey 2 hours ago | parent [-]

The reason that DBOS isn't as popular is because it's younger. DBOS launched in the form we know it in 2024. Temporal is much older; Temporal is technically a fork of Cadence and Cadence released originally in 2017, with Temporal forking and releasing back in 2020. When all three are trying to be "the same sort of thing" and that thing is new, it's hard to show up 7-8 years after the trailblazers and say "oh yeah, we're clearly better" when the existing thing works and is used by tons of folks.

cyberpunk an hour ago | parent [-]

Temporal is a dumpster fire. They've gotten so much VC funding (recently a Series D: $300M at a $5bn valuation) with... nothing to build except ways to trap customers into their SaaS.

I give them about a year or two before the wheels fall off, then it's off to Broadcom and friends.

But I could be wrong, as now they're not in the 'durable execution' space at all - it's 'durable execution for AI', according to their latest conference.

Got to spend that VC dosh somewhere, I suppose; they're certainly not spending it on making a good product.

tomwheeler 12 minutes ago | parent [-]

Temporal employee here. I'm very surprised by your comment.

It's true that we recently had a Series D and that VC firms recognize the value of what we do. The Temporal Server software is 100% open source (MIT license: https://github.com/temporalio/temporal/blob/main/LICENSE). It's totally free, and you don't even need to fill out a registration form: just download precompiled binaries from GitHub or clone the repo and build it yourself. You can self-host it anywhere you like, with no restrictions on scale or commercial usage. We offer SaaS (Temporal Cloud), which customers can choose as an alternative to self-hosting, based on their needs. The migration path is bi-directional, so it's not a trap by any definition.

Regarding AI, Temporal is widely used in that space, but that does not negate the thousands of other companies that use Temporal for other things (e.g., order management systems, customer onboarding, loan origination, money movement, cloud infrastructure management, and so on). In fact, our growth in the AI market came about because companies who were already using Temporal for other use cases realized that it also solved the problems they encountered in their AI projects.

And to your last point, we've made dozens of enhancements to the product (here's a small sample: https://temporal.io/blog/categories/product-news). I'd encourage you to follow the news from next week's Replay conference (https://replay.temporal.io/) because we'll be announcing many more.

JasonHEIN 4 hours ago | parent | prev [-]

When discussing databases, things get really interesting - not because of the DB itself, but because of the people trying to ask infeasible questions.