k9294 7 hours ago

> For postgres, the bottleneck was the CPU on the postgres side. It consistently maxed out the 2 cores dedicated to it, while also using ~5000MiB of RAM.

Comparing throttled pg vs non-throttled redis is not a benchmark.

Of course when pg is throttled you will see bad results and high latencies.

A correct performance benchmark would give all components unlimited resources and measure throughput, latency, and resource usage below saturation. In this case, PG might use 3-4 CPUs and 8GB of RAM but have comparable latencies and throughput, which is the main idea behind the notion "pg for everything".
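One way to sketch that methodology (with hypothetical utilization samples, not numbers from the article): record CPU utilization at several load levels, then take the highest load each system sustains below a saturation threshold and compare latencies only at those points.

```python
# Hypothetical (load req/sec, CPU fraction) measurements; illustrative only.
pg_samples    = [(1000, 0.30), (2000, 0.55), (4000, 0.95), (5000, 1.00)]
redis_samples = [(1000, 0.05), (4000, 0.20), (10000, 0.50), (20000, 0.90)]

def max_unsaturated_load(samples, threshold=0.80):
    """Highest measured load whose CPU utilization stays below the threshold."""
    ok = [load for load, cpu in samples if cpu < threshold]
    return max(ok) if ok else 0

print(max_unsaturated_load(pg_samples))     # 2000  (PG headroom)
print(max_unsaturated_load(redis_samples))  # 10000 (Redis headroom)
```

Latency numbers taken above those loads describe a saturated system, not the datastore's intrinsic performance.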

In a real-world situation, when I see a problem with saturated CPU, I add one more CPU. For a service with 10k req/sec, it’s most likely a negligible price.

Timshel 7 hours ago | parent [-]

Since it's in the context of a homelab, where you usually don't change your hardware for one application, using the same resources in both tests seems logical (though one could argue the test should be pg vs. redis + pg).

And their point is that it's good enough as is.

m000 6 hours ago | parent | next [-]

It's a homelab. If it works, it works. And we already knew that it would work without reading TFA. No new insights whatsoever. So what's the point of sharing or discussing?

k9294 6 hours ago | parent | prev [-]

In a home lab you can go the other way around and compare the number of requests before saturation.

E.g. if 4k req/sec pushes PG's CPU to 95% while Redis sits at only 20% at that point, you can then compare latencies and throughput per $.
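The arithmetic behind that comparison might look like this (the 4k/sec, 95%, and 20% figures come from the example above; the instance price is a made-up placeholder, and linear CPU scaling is itself an assumption):

```python
# Figures from the example above; the price is a hypothetical placeholder.
load = 4000                    # req/sec applied to both systems
pg_cpu, redis_cpu = 0.95, 0.20 # observed CPU utilization at that load

# Extrapolate the load each could take before hitting 100% CPU,
# assuming roughly linear CPU scaling with request rate.
pg_capacity    = load / pg_cpu      # ~4211 req/sec
redis_capacity = load / redis_cpu   # 20000 req/sec

price_per_month = 20.0              # hypothetical: same instance for both
print(f"PG:    {pg_capacity / price_per_month:.0f} req/sec per $/month")
print(f"Redis: {redis_capacity / price_per_month:.0f} req/sec per $/month")
```

With both systems on identical hardware, the per-$ figures differ only by the capacity estimate, which is the point of measuring before saturation.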

In the article, the PG latencies are misleading because they were measured under saturation.