regecks 2 days ago

We’re looking for a distributed Go cache.

We don’t want to round trip to a network endpoint in the ideal path, but we run multiple instances of our monolith and we want a shared cache tier for efficiency.

Any architecture/library recommendations?

maypok86 2 days ago | parent | next [-]

To be honest, I'm not sure I can recommend anything specific here.

1. How much data do you have and how many entries? If you have lots of data with very small records, you might need an off-heap based cache solution. The only ready-made implementation I know is Olric [1].

2. If you can use an on-heap cache, you might want to look at groupcache [2]. It's not "blazingly-fast", but it's battle-tested. Potential drawbacks include LRU eviction and lack of generics (meaning extra GC pressure from using `interface{}` for keys/values). It's also barely maintained, though you can find active forks on GitHub.

3. You could implement your own solution, though I doubt you'd want to go that route. Architecturally, segcache [3] looks interesting.

[1]: https://github.com/olric-data/olric

[2]: https://github.com/golang/groupcache

[3]: https://www.usenix.org/conference/nsdi21/presentation/yang-j...

dpifke a day ago | parent [-]

Otter can be used as the backing store with groupcache-go, which is a fork of the original groupcache: https://github.com/groupcache/groupcache-go#pluggable-intern...

awenix 2 days ago | parent | prev | next [-]

groupcache (https://github.com/golang/groupcache) has been around for some time now.

HALtheWise a day ago | parent | next [-]

The original groupcache is basically unmaintained, but there are at least two forks that have carried on active development, support additional nice features (like eviction), and should probably be preferred for most projects.

https://github.com/groupcache/groupcache-go

mrweasel a day ago | parent | prev | next [-]

I'm insanely fascinated by Groupcache. It's such a cool idea.

pstuart 2 days ago | parent | prev [-]

It's very limited in scope, but if it solves your needs it would be the way to go.

sally_glance 2 days ago | parent | prev | next [-]

Hm, without more details on the use case, and assuming no "round trip to a network" means everything is running on a single host, I see a couple of options:

1) Shared memory - use a cache/key-value lib which allows you to swap the backend to some shmem implementation

2) File-system based - managing concurrent writes is the challenge here, maybe best to use something battle tested (sqlite was mentioned in a sibling)

3) Local sockets - not strictly "no network", but at least no inter-node communication. Start valkey/redis and talk to it via local socket?

I'd be interested in the actual use case, though. If the monolith is written in anything even slightly modern, the language/runtime should give you primitives to parallelize over cores without worrying about something like this at all. And when it comes to horizontal scaling across multiple nodes, there's no avoiding networking anyway.

nchmy 2 days ago | parent | prev | next [-]

Perhaps a NATS server colocated on each monolith server (or even embedded in your app, if it's written in Go, meaning all communication is in-process), and use NATS KV?

Or if you just want it all to be in-memory, perhaps use some other non-distributed caching library and do the replication via NATS? I'm sure there are lots of gotchas with something like that, but Marmot is an example of doing SQLite replication via NATS JetStream.

edit: actually, you can set JetStream/KV to use in-memory rather than file persistence. So it could do the job of Olric, or of rolling your own distributed KV via NATS. https://docs.nats.io/nats-concepts/jetstream/streams#storage...

stackskipton 2 days ago | parent | prev | next [-]

Since you mention no network endpoint, I assume it's on a single server. If so, have you considered SQLite? Assuming your cache is not massive, the file is likely to end up in the filesystem cache, so most reads will come from memory, and writes on a modern SSD will be fine as well.

It's an easy-to-understand system built on a well battle-tested library, and getting rid of the cache is easy: delete the file.

EDIT: I will say for most use cases, the database cache is probably plenty. Don't add power until you really need it.

mbreese 2 days ago | parent | prev | next [-]

Could you add a bit more to the “distributed cache” concept without a “network endpoint”? Would this mean running multiple processes of the same binary with a shared memory cache on a single system?

If so, that’s not how I’d normally think of a distributed cache. When I think of a distributed cache, I’m thinking of multiple instances, likely (but not necessarily) running on multiple nodes. So, I’m having a bit of a disconnect…

paulddraper 2 days ago | parent | prev | next [-]

LRU in memory backed by shared ElastiCache.

remram 2 days ago | parent | prev [-]

It can't be shared without networking so I am not sure what you mean. Are you sure you need it to be shared?