hvb2 8 hours ago

You would need to get to insane read counts pretty much 24/7 for this to work out.

For HA Redis spanning two regions you need at least 6 instances (2 regions × 3 AZs), and you're paying for all of them 24/7.

And if you truly have 24/7 use, then just 2 regions won't make sense either: the latency to reach those regions from the other side of the globe easily cancels out any caching benefit.

odie5533 6 hours ago | parent | next

It's $9/mo for 100 MB of ElastiCache Serverless, which is HA.

It's $15/mo for 2x cache.t4g.micro nodes for ElastiCache Valkey with multi-AZ HA and a 1-year commitment. That gives you about 400 MB.

It very much depends on your use case, though. If you need multiple regions, then I think DynamoDB might be better.

I prefer Redis over DynamoDB usually because it's a widely supported standard.
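To illustrate the DynamoDB option, here's a rough boto3 sketch of using a table as a TTL cache. The table name and attribute names are made up for the example, and the table would need DynamoDB TTL enabled on the expires_at attribute (and be a global table if you want multi-region):

    import time
    import boto3

    # "app-cache" and its key schema are assumptions for this sketch
    table = boto3.resource("dynamodb").Table("app-cache")

    def cache_set(key, value, ttl_seconds=300):
        table.put_item(Item={
            "cache_key": key,
            "value": value,
            # epoch-seconds attribute DynamoDB's TTL feature deletes on
            "expires_at": int(time.time()) + ttl_seconds,
        })

    def cache_get(key):
        item = table.get_item(Key={"cache_key": key}).get("Item")
        # DynamoDB deletes expired items lazily (it can lag by hours),
        # so check the expiry ourselves on every read
        if item and item["expires_at"] > int(time.time()):
            return item["value"]
        return None

With a global table, the same reads and writes work from any replicated region, which is the multi-region angle.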

motorest 4 hours ago | parent

> It's $9/mo for 100 MB of ElastiCache Serverless, which is HA.

You need to be more specific about your scenario. Needing to cache 100 MB of anything is hardly a scenario that calls for introducing a memory cache service such as Redis; that's well within the territory of just storing the data in a dictionary. Whatever is driving the requirement for Redis in your scenario, performance and memory clearly aren't it.
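Roughly what "just a dictionary" means in practice, as a minimal Python sketch (TTLCache and the names in it are illustrative, not any particular library):

    import time

    class TTLCache:
        # in-process cache: a dict of key -> (expires_at, value)
        def __init__(self, ttl_seconds=300):
            self.ttl = ttl_seconds
            self.store = {}

        def get(self, key):
            entry = self.store.get(key)
            if entry is None:
                return None
            expires_at, value = entry
            if time.monotonic() > expires_at:
                del self.store[key]  # lazily evict on read
                return None
            return value

        def set(self, key, value):
            self.store[key] = (time.monotonic() + self.ttl, value)

    # usage:
    # cache = TTLCache(ttl_seconds=300)
    # cache.set("user:42", {"name": "Ada"})
    # cache.get("user:42")  # -> {'name': 'Ada'} until the TTL lapses

At 100 MB the whole working set fits in process memory, so there's no network hop at all.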

ahoka 7 hours ago | parent | prev

A 6-node cache, and caching in DynamoDB? What the hell happened to the industry? Or do people just call every kind of non-business-object persistence a cache now?

hvb2 5 hours ago | parent

I don't understand your comment.

If you're given the requirement of high availability, how do you not end up with at least 3 nodes? You need a majority of nodes alive to agree on a failover, and 3 is the smallest count that survives losing one. I wouldn't consider a single region to be HA, though I can see how that stance could be called paranoid.

A cache is just a store for things that expire after a while and take load off your persistent store. It's inherently eventually consistent and is there to help you scale reads. Whatever you use for the storage is irrelevant to the concept of offloading reads.
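For illustration, that read-offloading pattern as a rough Python sketch, assuming the redis-py client; load_from_db is a made-up stand-in for whatever your persistent store's query actually is:

    import json
    import redis

    r = redis.Redis(host="localhost", port=6379)

    def get_product(product_id, ttl_seconds=300):
        key = f"product:{product_id}"
        cached = r.get(key)
        if cached is not None:
            return json.loads(cached)       # hit: persistent store untouched
        product = load_from_db(product_id)  # miss: read the persistent store
        # EX gives the entry a TTL, so it "expires after a while" and a
        # later read repopulates it; that is the eventual-consistency part
        r.set(key, json.dumps(product), ex=ttl_seconds)
        return product

Until the TTL lapses, readers can see a stale value; that's the trade you make for taking reads off the primary store.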