k9294 · 7 hours ago
> For postgres, the bottleneck was the CPU on the postgres side. It consistently maxed out the 2 cores dedicated to it, while also using ~5000MiB of RAM.

Comparing a throttled pg against an unthrottled redis is not a benchmark. Of course when pg is throttled you will see bad results and high latencies. A correct performance benchmark would give all components unlimited resources and measure both throughput and how much each uses before saturation. In this case, pg might use 3-4 CPUs and 8GB of RAM but deliver comparable latencies and throughput, which is the main idea behind the notion of "pg for everything". In a real-world situation, when I see a problem with a saturated CPU, I add one more CPU. For a service doing 10k req/sec, that is most likely a negligible price.
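For reference, the difference between the two setups is just a resource-limits stanza. A minimal Docker Compose sketch (service name, image tag, and limit values are illustrative, not taken from the original benchmark):

```yaml
services:
  postgres:
    image: postgres:16
    # Throttled run, roughly matching the setup described above:
    # cap at 2 CPUs and a fixed memory ceiling.
    deploy:
      resources:
        limits:
          cpus: "2"
          memory: 6g
    # Unthrottled run: delete the limits block entirely, re-run the load,
    # and read the actual peak usage from `docker stats` to size the host.
```

The point is to measure what pg actually consumes when it is not starved, rather than measuring how badly it behaves at an arbitrary cap.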
Timshel · 7 hours ago
Since this is in the context of a homelab, you usually don't change your hardware for one application, so giving both tests the same resources seems logical (one could argue the test should be pg vs redis + pg). And their point is that it's good enough as is.