smacker 6 hours ago

I like using Postgres for everything; it lets me simplify infrastructure. But using it as a cache is a bit concerning in terms of reliability, in my opinion.

I have witnessed many incidents where the DB degraded considerably. However, thanks to the cache in Redis/memcache, a large share of requests could still be served with a minimal increase in latency. If I were serving the cache from the same DB instance, I guess any problem with the DB would degrade the cache too.

aiisthefiture 6 hours ago | parent | next [-]

Select by id is fast. If you're using it as a cache and not doing select by id, then it's not a cache.
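For concreteness, here's a rough sketch of that pattern in Python with psycopg (the DSN, table name, and TTL are made up): an unlogged table keyed by a primary key, so every cache read is a single-row index lookup.

    import psycopg
    from psycopg.types.json import Jsonb

    # Made-up DSN; autocommit just keeps the example short.
    conn = psycopg.connect("dbname=app user=app", autocommit=True)

    # UNLOGGED skips the WAL: no durability, which is fine for cache data.
    conn.execute("""
        CREATE UNLOGGED TABLE IF NOT EXISTS kv_cache (
            key        text PRIMARY KEY,
            value      jsonb NOT NULL,
            expires_at timestamptz NOT NULL
        )
    """)

    def cache_get(key):
        # The "select by id" case: a single-row primary-key lookup.
        row = conn.execute(
            "SELECT value FROM kv_cache WHERE key = %s AND expires_at > now()",
            (key,),
        ).fetchone()
        return row[0] if row else None

    def cache_set(key, value, ttl_seconds=300):
        conn.execute(
            """
            INSERT INTO kv_cache (key, value, expires_at)
            VALUES (%s, %s, now() + %s * interval '1 second')
            ON CONFLICT (key) DO UPDATE
                SET value = EXCLUDED.value, expires_at = EXCLUDED.expires_at
            """,
            (key, Jsonb(value), ttl_seconds),
        )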

smacker 5 hours ago | parent [-]

Absolutely. But when PG is running out of open connections or has already consumed all available CPU, even the simplest query will struggle.

IsTom 32 minutes ago | parent | next [-]

You can have a separate connection pool for 'cache' requests. You shouldn't have too many PG connections open anyway, on the order of the number of CPUs. A sketch of what I mean is below.
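Something like this with psycopg_pool (the DSN, pool sizes, and kv_cache table are placeholders): cache lookups get their own small pool so a spike in cache traffic can't exhaust the connections the main workload needs, and the combined size stays around the CPU count.

    from psycopg_pool import ConnectionPool

    PG_DSN = "dbname=app user=app"  # placeholder

    # Keep the total on the order of the CPU count; cache lookups get their
    # own small pool so a flood of them can't starve the main workload.
    main_pool = ConnectionPool(PG_DSN, min_size=4, max_size=12)
    cache_pool = ConnectionPool(PG_DSN, min_size=2, max_size=4)

    def cache_get(key):
        with cache_pool.connection() as conn:
            row = conn.execute(
                "SELECT value FROM kv_cache WHERE key = %s", (key,)
            ).fetchone()
            return row[0] if row else None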

motorest 5 hours ago | parent | prev [-]

> But when PG is running out of open connections or has already consumed all available CPU, even the simplest query will struggle.

I don't think it is reasonable to assume, or even believe, that connection exhaustion is an issue specific to Postgres. Spend a little time on the topic and you will quickly stumble upon Redis connection pool exhaustion issues as well.

motorest 5 hours ago | parent | prev [-]

> But using it as a cache is a bit concerning in terms of reliability, in my opinion.

This was the very first time I heard anyone even suggest that storing data in Postgres was a concern in terms of reliability, and I doubt you are the only person in the whole world who has access to critical insight into the matter.

Is it possible that your prior beliefs are unsound and unsubstantiated?

> I have witnessed many incidents where the DB degraded considerably.

This vague anecdote is meaningless. Do you actually have any concrete scenario in mind? Because anyone can make any system "degrade considerably", even Redis, if they make enough mistakes.

baobun 5 hours ago | parent | next [-]

No need to be so combative. Take a chill pill, zoom out, and look at the reliability of the entire system and its services rather than the DB in isolation. If Postgres has issues, it can affect the reliability of the service further if it's also running the cache.

Besides, having the cache on separate hardware can reduce the impact of spikes on the DB, which also factors into reliability.

Having more headroom for memory and CPU can mean you never reach the load where it turns into service degradation on the same hardware.

Obviously a purpose-built tool can perform better for a specific use case than the Swiss Army knife. Which is not to diss the latter.

motorest 5 hours ago | parent [-]

> No need to be so combative.

You're confusing being "combative" with asking you to substantiate your extraordinary claims. You opted to make some outlandish and very broad, sweeping statements, and when asked to provide any degree of substance, you resorted to talking about "chill pills"? What does that say about the substance of your claims?

> If Postgres has issues, it can affect the reliability of the service further if it's also running the cache.

That assertion is meaningless, isn't it? I mean, isn't that the basis of any distributed systems analysis? That if a component has issues, it can affect the reliability of the whole system? Whether the component in question is Redis or Postgres, doesn't that always hold true?

> Besides, having the cache on separate hardware can reduce the impact of spikes on the DB, which also factors into reliability.

Again, isn't this assertion pointless? I mean, it holds true whether it's Postgres or Redis, doesn't it?

> Having more headroom for memory and CPU can mean you never reach the load where it turns into service degradation on the same hardware.

Again, this claim is not specific to any particular service. It's meaningless to use this sort of claim to single out either Redis or Postgres.

> Obviously a purpose-built tool can perform better for a specific use case than the Swiss Army knife. Which is not to diss the latter.

Is it obvious, though? There is far more to life than synthetic benchmarks. In fact, the whole point of this sort of comparison is that for some scenarios a dedicated memory cache does not offer any tangible advantage over just using a vanilla RDBMS.

This reads like naive auto enthusiasts claiming that a Formula 1 car is obviously better than a Volkswagen Golf because they read somewhere that it goes way faster, when in reality all they use the car for is driving to the supermarket.

scns 3 hours ago | parent [-]

> You opted to make some outlandish and very broad, sweeping statements, and when asked to provide any degree of substance, you resorted to talking about "chill pills"?

You are not replying to the OP here. Maybe it's time for a little reflection?

didntcheck 2 hours ago | parent | prev | next [-]

> This was the very first time I heard anyone even suggest that storing data in Postgres was a concern in terms of reliability

You seem to be reading "reliability" as "durability", when I believe the parent post meant "availability" in this context.

> Do you actually have any concrete scenario in mind? Because anyone can make any system "degrade considerably", even Redis

And even Postgres. It can also happen due to seemingly random events like unusual load or network issues. What do you find outlandish about the scenario of a database server being unavailable or degraded while the cache service is not?

abtinf 5 hours ago | parent | prev [-]

Inferring one meaning for “reliability” when the original post is obviously using a different meaning suggests LLM use.

This is a class of error a human is extremely unlikely to make.