IanCal 7 hours ago

I disagree. They found that Postgres, without tuning, was easily fast enough on low-end hardware and comes with the benefit of not deploying another service. Given that, tuning it isn't really relevant.

If the defaults are fine for a use case, then tuning is either a poor use of my fun time or a poor use of my client's funds, unless I want to do it out of personal interest.

perrygeo an hour ago | parent | next [-]

The default shared_buffers is 128MB, not even 1% of the RAM on a typical machine today. A benchmark run with these settings effectively cripples your hardware by making sure 99% of your available memory is ignored by postgres. It's an invalid benchmark, unless redis is similarly crippled.
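
For illustration, a sketch of the sort of settings a tuned run might change (the values are assumptions for a hypothetical 16 GB box, not figures from the benchmark in question):

    # postgresql.conf -- illustrative values for a hypothetical 16 GB machine
    shared_buffers = 4GB           # default is 128MB; a common guideline is ~25% of RAM
    effective_cache_size = 12GB    # planner hint about how much the OS page cache can hold
    work_mem = 64MB                # per sort/hash operation, so it multiplies under load
    # note: changing shared_buffers requires a server restart to take effect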

lemagedurage 6 hours ago | parent | prev [-]

"If we don't need performance, we don't need caches" feels like a great broader takeaway here.

indymike 2 hours ago | parent | next [-]

Sometimes a cache is all about reducing expense, e.g. a free cache query vs. an expensive (paid) API query.
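
A minimal Python sketch of that idea, with a made-up billed API standing in for any per-request-priced service (the function names are hypothetical):

    import functools

    # Hypothetical billed API call; stands in for any per-request-priced service.
    def call_paid_geocoding_api(address: str) -> tuple[float, float]:
        print(f"charged for lookup of {address!r}")
        return (0.0, 0.0)

    @functools.lru_cache(maxsize=10_000)
    def geocode(address: str) -> tuple[float, float]:
        # First call per address pays for the API; repeats are free in-memory hits.
        return call_paid_geocoding_api(address)

    geocode("1600 Pennsylvania Ave")  # billed
    geocode("1600 Pennsylvania Ave")  # free cache hit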

IanCal 3 hours ago | parent | prev | next [-]

A cache being fast enough doesn’t mean no caching is relevant - I’m not sure why you’d equate the two.

motorest 5 hours ago | parent | prev | next [-]

> "If we don't need performance, we don't need caches" feels like a great broader takeaway here.

I don't think this holds true. Caches are used for reasons other than performance. For example, caches are used in some scenarios for stampede protection to mitigate DoS attacks.
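
As a rough sketch of the stampede-protection idea (a per-key single-flight guard so only one caller recomputes an expired entry; the loader callback stands in for whatever expensive query is being protected):

    import threading, time

    _cache: dict[str, tuple[float, object]] = {}   # key -> (expiry, value)
    _locks: dict[str, threading.Lock] = {}
    _locks_guard = threading.Lock()

    def get(key: str, loader, ttl: float = 30.0):
        now = time.monotonic()
        entry = _cache.get(key)
        if entry and entry[0] > now:
            return entry[1]                        # fresh hit, nothing to recompute
        with _locks_guard:
            lock = _locks.setdefault(key, threading.Lock())
        with lock:                                 # only one caller rebuilds the entry
            entry = _cache.get(key)
            if entry and entry[0] > time.monotonic():
                return entry[1]                    # another caller already refreshed it
            value = loader()                       # e.g. the expensive backend query
            _cache[key] = (time.monotonic() + ttl, value)
            return value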

Also, the impact of caches on performance is sometimes negative. With distributed caching, each lookup and put requires a network request. Even when those calls don't leave the data center, they cost far more than just reading a variable from memory. I've already had the displeasure of stumbling upon a few scenarios where a cache was prescribed in a cargo-cult way, without any data backing up the assertion, and when we looked at the traces it was evident that the bottleneck was actually the cache itself.

ralegh 4 hours ago | parent [-]

DoS is a performance problem: if your server were infinitely fast with infinite storage, it wouldn't be an issue.

lomase 2 hours ago | parent [-]

If my grandma had wheels, she would be a car.

hobs 3 hours ago | parent | prev [-]

I see people downvoting this, but we have YAGNI for a reason. If someone told me performance was fine and they added caches anyway, I'd give them the big hairy eyeball: we already know cache invalidation is a PITA, correctness issues are easy to create, and now you have the performance of two different systems to manage.

Amazon actually moved away from caches for some parts of its systems because consistent behavior is a feature: what happens if your cache has problems and the interaction between it and your normal path is slow? What if your cache has bugs or edge-case behavior? If you don't need it, you're just doing a bunch of extra work to keep things in sync.