vasco 9 hours ago

It's not a paper or a journal, but you could at least try to run a decent benchmark. As it is, this serves no purpose other than reinforcing whatever point you started with. You didn't even tweak the postgres buffers; literally, what's the point?
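
Even a first-pass tuning along these lines (illustrative values only, not a recommendation, and entirely dependent on the machine) would have made the comparison more meaningful:

    -- illustrative only: numbers depend on the box and the workload
    ALTER SYSTEM SET shared_buffers = '2GB';        -- needs a server restart to take effect
    ALTER SYSTEM SET effective_cache_size = '6GB';  -- planner hint, a reload is enough
    ALTER SYSTEM SET work_mem = '32MB';
    SELECT pg_reload_conf();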

dizzyVik 8 hours ago | parent [-]

I still end up recommending using postgres though, don't I?

pcthrowaway 8 hours ago | parent | next [-]

"I'll use postgres" was going to be your conclusion no matter what I guess?

I mean what if an actual benchmark showed Redis is 100X as fast as postgres for a certain use case? What are the constraints you might be operating with? What are the characteristics of your workload? What are your budgetary constraints?

Why not just write a blog post saying "Unoptimized postgres vs redis for the lazy, running virtualized with a bottleneck at the networking level"

I even think that blog post would be interesting, and might be useful to someone choosing a stack for a proof of concept. For someone who needs to scale to large production workloads (~10,000 requests/second or more), this isn't a very useful article, so the criticism is fair, and I'm not sure why you're dismissing it out of hand.

motorest 5 hours ago | parent | next [-]

> "I'll use postgres" was going to be your conclusion no matter what I guess?

Would it bother you as well if the conclusion was rephrased as "based on my observations, I see no point in rearchitecting the system to improve the performance by this much"?

I think you are so tied to a template solution that you don't stop to think about why you're using it, or whether it is justified at all. Then, when you are faced with observations that challenge your unfounded beliefs, you somehow opt to get defensive? That's not right.

dizzyVik 8 hours ago | parent | prev [-]

I completely agree that this is not relevant for anyone running such workloads; the article is not aimed at them at all.

Within the constraints of my setup, postgres came out slower but still fast enough. I don't think I can quantify what "fast enough" is, though. Is it 1000 req/s? Is it 200? It all depends on what you're doing with it. For many of my hobby projects, which see tens of requests per second, it definitely is fast enough.

You could argue that caching is indeed redundant in such cases, but some of those projects have quite a lot of data that takes a while to query.
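
For context, the "cache" in those projects is nothing fancier than something like this (simplified, names made up):

    -- UNLOGGED skips WAL, which is usually acceptable for cache data
    CREATE UNLOGGED TABLE cache (
        key        text PRIMARY KEY,
        value      jsonb NOT NULL,
        expires_at timestamptz NOT NULL
    );

    -- upsert an entry with a 5 minute TTL
    INSERT INTO cache (key, value, expires_at)
    VALUES ('user:42', '{"name": "x"}', now() + interval '5 minutes')
    ON CONFLICT (key) DO UPDATE
      SET value = EXCLUDED.value, expires_at = EXCLUDED.expires_at;

    -- read, treating expired rows as a miss
    SELECT value FROM cache WHERE key = 'user:42' AND expires_at > now();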

vasco 8 hours ago | parent | prev [-]

That's the point: you put in no effort and just did what you had already decided to do.

dizzyVik 8 hours ago | parent [-]

I don't think this is a fair assessment. Had my benchmarks shown, say, that postgres crumbled under heavy write load, then the conclusion would have been different. That's exactly why I decided to do this: to see what the difference was.

m000 7 hours ago | parent [-]

Of course you didn't see postgres crumble. This is still a toy example of a benchmark. Nobody spins up (much less pays for) a postgres instance to use exclusively as a cache. It is guaranteed that even in the simplest of deployments, some other app (if not many of them) will be the main postgres tenant.

Add an app that actually uses postgres as a database and you will probably see its performance crumble, as the app will contend with the cache workload for resources.
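
You don't even need a real app to see it; running a stock pgbench load against the same instance while the cache benchmark runs would probably be enough (parameters here are arbitrary):

    # seed a pgbench database to stand in for the "real" tenant
    pgbench -i -s 50 appdb
    # 16 clients hammering it for 5 minutes, alongside the cache benchmark
    pgbench -c 16 -j 4 -T 300 appdb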

Nobody asked for benchmarking as rigorous as you would find in a published paper. But toy examples are toy examples, whether in a publication or not.