andersmurphy 3 hours ago

The problem is row locks when using interactive transactions over the network, combined with contention. That can absolutely kill your performance with Postgres, and there's not really anything you can do to get around it (other than avoiding interactive transactions). [1]
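To make the contention point concrete, here's a minimal back-of-the-envelope sketch (my own illustrative numbers, not from the linked post): in an interactive transaction the row lock is held across client/server round trips, so the lock hold time is dominated by network latency rather than actual work, which caps throughput on any hot row.

```python
# Model: a transaction takes a row lock and holds it across
# client<->server round trips (an "interactive" transaction).
# Hypothetical numbers: 1 ms of real work, 50 ms RTT, 2 round trips
# made while the lock is held.
work_ms = 1.0
rtt_ms = 50.0
round_trips_while_locked = 2

# Lock hold time when the client drives each statement over the network:
interactive_hold_ms = work_ms + round_trips_while_locked * rtt_ms

# Lock hold time when the whole transaction runs server-side
# (e.g. one batched statement or a stored procedure):
batched_hold_ms = work_ms

# Max transactions/second touching a single contended row is roughly
# 1000 / hold_time_ms, since the lock serializes them.
print(1000 / interactive_hold_ms)  # ~9.9 TPS on the hot row
print(1000 / batched_hold_ms)      # ~1000 TPS on the hot row
```

Same work, two orders of magnitude difference on the contended row, purely from where the lock-holding round trips happen.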

[1] - https://andersmurphy.com/2025/12/02/100000-tps-over-a-billio...

rconti 2 hours ago | parent [-]

We had an interesting architecture situation at work. Puppet Enterprise uses a single Postgres server. The company had moved from recommending a single PuppetDB API node (which fell over at high load) to running a PuppetDB API server on each compiler node.

That, however, came with its own set of problems. Of course you have to tune for concurrent connections as you scale wider, but there were much more serious contention issues than you'd expect, and compilation times were terrible too. It turned out to be because those transactions locked the DB during their (synchronous) operations, and we had a globally distributed set of compilers serving globally distributed traffic.

The solution ended up being to run a separate cluster of API servers in the same region as the DB. The expensive calls from the compilers to the API servers were largely async HTTPS, so the compilers didn't have to wait on the API nodes, and the API nodes could talk to the DB synchronously with low latency.
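The pattern described above can be sketched in a few lines. This is a toy model, not Puppet Enterprise code: the "compiler" fires its expensive call at a regional API node and moves on without blocking, while the API node does its synchronous DB work over a low-latency local link. Names and timings are illustrative assumptions.

```python
import concurrent.futures
import time

def api_node_handles(report):
    # Stands in for the API server's synchronous, same-region DB write:
    # a short local round trip instead of a slow cross-region one.
    time.sleep(0.01)
    return f"stored {report}"

pool = concurrent.futures.ThreadPoolExecutor(max_workers=4)

def compiler_submit(report):
    # Fire-and-forget from the compiler's point of view: submitting
    # returns a future immediately instead of blocking on the call.
    return pool.submit(api_node_handles, report)

start = time.monotonic()
futures = [compiler_submit(f"report-{i}") for i in range(4)]
elapsed = time.monotonic() - start  # submission returns almost instantly

# The slow, lock-sensitive work completes in the background,
# close to the database.
results = [f.result() for f in futures]
```

The key property is that the latency-sensitive lock-holding work runs next to the DB, while the wide-area hop is the part that's allowed to be slow and asynchronous.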