jstrong 8 hours ago

Can't believe Postgres still uses a process-per-connection model that leads to endless problems like this one.

IsTom 4 hours ago | parent [-]

You can't process significantly more queries than you have CPU cores at the same time anyway.

hu3 8 minutes ago | parent | next [-]

I disagree. If that were the case, PgBouncer wouldn't need to exist.

The problem of resource usage for many connections is real.
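For what it's worth, this is exactly the scenario PgBouncer's transaction pooling targets: many mostly-idle client connections multiplexed over a small pool of server backends. A minimal sketch of a pgbouncer.ini (database name, pool sizes, and paths are placeholder values, not a recommendation):

```ini
[databases]
; "mydb" is a placeholder database name
mydb = host=127.0.0.1 port=5432 dbname=mydb

[pgbouncer]
listen_addr = 127.0.0.1
listen_port = 6432
; a server connection is held only for the duration of a transaction
pool_mode = transaction
; thousands of idle clients can share a handful of backend processes
max_client_conn = 2000
default_pool_size = 20
auth_type = md5
auth_file = /etc/pgbouncer/userlist.txt
```

With transaction pooling, the 2000 client connections above never translate into 2000 Postgres backend processes; only ~20 backends do real work at any moment.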

namibj 4 hours ago | parent | prev [-]

Much of the time in a transaction can reasonably be non-db-CPU time, be it I/O wait or client-side CPU processing between queries. Note I'm not talking about transactions that run >10 seconds, just ones whose individual queries are technically quite cheap. At 10% db CPU usage, you get a 1-second transaction from just 100 ms of CPU.
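A back-of-envelope version of that arithmetic (the 100 ms / 10% figures are the hypothetical numbers from the comment, not measurements):

```python
# A transaction that consumes 100 ms of database CPU but spends 90% of
# its wall time on I/O waits or client-side work between queries.
cpu_time_s = 0.100        # db CPU actually consumed per transaction
db_cpu_fraction = 0.10    # fraction of wall time spent on db CPU

# Wall-clock duration of the transaction: 100 ms of CPU stretched
# out to a full second.
wall_time_s = cpu_time_s / db_cpu_fraction
print(wall_time_s)        # 1.0

# While one such transaction waits, a single core can interleave the
# CPU slices of others, so many more connections than cores stay busy.
concurrent_per_core = wall_time_s / cpu_time_s
print(concurrent_per_core)  # 10.0
```

So even with "cheap" queries, you can usefully hold roughly ten times as many open transactions as you have cores, which is why connection counts outrun core counts in practice.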

vbezhenar 6 minutes ago | parent | next [-]

In a properly optimized database, the vast majority of queries will hit indices and most data will be in the memory cache, so the majority of transactions will be CPU- or RAM-bound. So increasing the number of concurrent transactions will reduce throughput. There will be a few transactions waiting for I/O, but if the majority of transactions are waiting for I/O, it's either a horrifically inefficient database or very non-standard usage.

IsTom 2 hours ago | parent | prev [-]

That many long-running transactions seems like a pretty unusual workload to me, and one potentially running into isolation issues. I can see running a few of these, but not a lot, especially at the same time.