wejick 2 days ago
Enjoyed the article. How does this compare to ccache, which is my go-to cache library? My need is mostly on high-traffic endpoints, so LRU it is.
maypok86 a day ago
I benchmarked ccache for throughput [1], memory consumption [2], and hit rate [3]. For the hit rate simulations I used golang-lru's LRU implementation, though I doubt a correct LRU implementation would show meaningful hit rate differences. Note that otter's simulator results were repeatedly compared against both W-TinyLFU's (Caffeine) and S3-FIFO's (Libcachesim) simulators, showing nearly identical results, with differences within hundredths of a percent.

[1]: https://maypok86.github.io/otter/performance/throughput/
[2]: https://maypok86.github.io/otter/performance/memory-consumpt...
[3]: https://maypok86.github.io/otter/performance/hit-ratio/
latch a day ago
Author of ccache here. I've barely touched Go in over a decade, but if I did, I'd probably still use ccache when I didn't need the cutting edge (because I think the API is simple), but not if I needed something at huge scale.

When I wrote ccache, there were two specific features we wanted that weren't readily available:

- Having both a key and a subkey, so that you can delete either by key or by key+subkey (what ccache calls LayeredCache; see the first sketch below).

- Having items cached that other parts of the system also hold a long-living reference to, so there's not much point in evicting them (what ccache calls Tracking, which is just a separate ARC mechanism that overrides the eviction logic; see the second sketch below).

It also supports caching based on arbitrary item size (rather than just a count of items), but I don't remember if that was common back then. I've always thought that this, and a few other smaller features, make it a little bloated. Each cached item carries a lot of information (1). I'm surprised that, in the linked benchmark, the memory usage isn't embarrassing.

I'm not sure that having a single goroutine do a lot of the heavy lifting, to minimize locking, is a great idea. It has a lot of drawbacks, and if I were to start over, I'd really want to benchmark it to see whether it's worth it (I suspect that, under heavy write loads, it might perform worse).

The one feature that I do like, and that I think most LRUs should implement, is a [configurable] number of gets before an item is promoted. This not only reduces the need for locking, it also adds some frequency bias to evictions.

Fun fact: my go-to interview question was to implement a cache. It was always rewarding to see people make the leap from using a single data structure (a dictionary) to using two (dictionary + linked list) to achieve a goal. It's not a way most of us are trained to think about data structures, which I think is a shame. (The last sketch below combines both of these points.)

(1) https://github.com/karlseguin/ccache/blob/master/item.go#L22
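To make the LayeredCache idea concrete, here is a minimal from-scratch sketch of the key+subkey structure: a map of primary keys to inner maps of secondary keys, so you can invalidate one entry or everything under a primary key. This is illustrative only, not ccache's actual implementation (its real item type is linked above), and all names in it are my own:

```go
package main

import (
	"fmt"
	"sync"
)

// LayeredCache is a hypothetical sketch of the key+subkey idea:
// a primary key maps to an inner map of secondary keys, so an
// entry can be deleted individually or wholesale by primary key.
type LayeredCache[V any] struct {
	mu    sync.RWMutex
	items map[string]map[string]V
}

func NewLayeredCache[V any]() *LayeredCache[V] {
	return &LayeredCache[V]{items: make(map[string]map[string]V)}
}

func (c *LayeredCache[V]) Set(primary, secondary string, value V) {
	c.mu.Lock()
	defer c.mu.Unlock()
	inner, ok := c.items[primary]
	if !ok {
		inner = make(map[string]V)
		c.items[primary] = inner
	}
	inner[secondary] = value
}

func (c *LayeredCache[V]) Get(primary, secondary string) (V, bool) {
	c.mu.RLock()
	defer c.mu.RUnlock()
	v, ok := c.items[primary][secondary] // nil inner map reads as absent
	return v, ok
}

// Delete removes a single key+subkey entry.
func (c *LayeredCache[V]) Delete(primary, secondary string) {
	c.mu.Lock()
	defer c.mu.Unlock()
	delete(c.items[primary], secondary)
}

// DeleteAll removes every entry under a primary key, e.g. all
// cached representations of one resource.
func (c *LayeredCache[V]) DeleteAll(primary string) {
	c.mu.Lock()
	defer c.mu.Unlock()
	delete(c.items, primary)
}

func main() {
	c := NewLayeredCache[string]()
	c.Set("/users/1", "json", `{"id":1}`)
	c.Set("/users/1", "xml", `<user id="1"/>`)
	c.DeleteAll("/users/1") // invalidate every format at once
	_, ok := c.Get("/users/1", "json")
	fmt.Println(ok) // false
}
```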
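And a sketch of the Tracking idea, assuming nothing about ccache's internals: each item carries a reference count, callers holding a long-living reference pin it, and eviction walks past pinned items. The names (TrackingCache, TrackingGet, Release) are my own illustration, not ccache's code:

```go
package cache

import (
	"container/list"
	"sync"
	"sync/atomic"
)

// TrackedItem is pinned against eviction while any holder still
// has a reference to it.
type TrackedItem struct {
	key   string
	Value any
	refs  atomic.Int32
}

// Release is called by a holder once it no longer needs the item.
func (i *TrackedItem) Release() { i.refs.Add(-1) }

type TrackingCache struct {
	mu    sync.Mutex
	max   int
	ll    *list.List // recency order, front = most recent
	items map[string]*list.Element
}

func NewTrackingCache(max int) *TrackingCache {
	return &TrackingCache{max: max, ll: list.New(), items: make(map[string]*list.Element)}
}

// TrackingGet bumps the item's refcount, pinning it until every
// holder calls Release.
func (c *TrackingCache) TrackingGet(key string) (*TrackedItem, bool) {
	c.mu.Lock()
	defer c.mu.Unlock()
	el, ok := c.items[key]
	if !ok {
		return nil, false
	}
	item := el.Value.(*TrackedItem)
	item.refs.Add(1)
	c.ll.MoveToFront(el)
	return item, true
}

func (c *TrackingCache) Set(key string, value any) {
	c.mu.Lock()
	defer c.mu.Unlock()
	if el, ok := c.items[key]; ok {
		el.Value.(*TrackedItem).Value = value
		c.ll.MoveToFront(el)
		return
	}
	c.items[key] = c.ll.PushFront(&TrackedItem{key: key, Value: value})
	// Evict from the least recent end, skipping pinned items. If
	// everything is pinned, the cache temporarily exceeds max.
	for el := c.ll.Back(); el != nil && c.ll.Len() > c.max; {
		prev := el.Prev()
		if item := el.Value.(*TrackedItem); item.refs.Load() == 0 {
			c.ll.Remove(el)
			delete(c.items, item.key)
		}
		el = prev
	}
}
```

A holder would call TrackingGet, keep the returned item as long as it likes, and call Release when done, overriding the normal eviction order for that item.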
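Finally, the interview answer and the promotion feature fit in one sketch: the classic map + doubly-linked list LRU, with a hypothetical getsToPromote knob so an item only moves to the front of the recency list every Nth get. Note that ccache uses this partly to avoid locking on reads; this sketch keeps a single mutex for brevity, so it only shows the frequency-bias effect:

```go
package cache

import (
	"container/list"
	"sync"
)

// lruCache pairs a map (O(1) lookup) with a doubly-linked list
// (O(1) recency updates): the leap from one data structure to two.
type lruCache struct {
	mu            sync.Mutex
	max           int
	getsToPromote int // promote to the front only every Nth get
	ll            *list.List
	items         map[string]*list.Element
}

type lruEntry struct {
	key  string
	val  any
	gets int // gets since the last promotion
}

func newLRU(max, getsToPromote int) *lruCache {
	return &lruCache{
		max:           max,
		getsToPromote: getsToPromote,
		ll:            list.New(),
		items:         make(map[string]*list.Element),
	}
}

func (c *lruCache) Get(key string) (any, bool) {
	c.mu.Lock()
	defer c.mu.Unlock()
	el, ok := c.items[key]
	if !ok {
		return nil, false
	}
	e := el.Value.(*lruEntry)
	e.gets++
	// Only touch the list every Nth get: fewer list writes, and an
	// item must show some frequency before it earns a promotion.
	if e.gets >= c.getsToPromote {
		c.ll.MoveToFront(el)
		e.gets = 0
	}
	return e.val, true
}

func (c *lruCache) Set(key string, val any) {
	c.mu.Lock()
	defer c.mu.Unlock()
	if el, ok := c.items[key]; ok {
		el.Value.(*lruEntry).val = val
		c.ll.MoveToFront(el)
		return
	}
	c.items[key] = c.ll.PushFront(&lruEntry{key: key, val: val})
	if c.ll.Len() > c.max {
		oldest := c.ll.Back()
		c.ll.Remove(oldest)
		delete(c.items, oldest.Value.(*lruEntry).key)
	}
}
```

With getsToPromote set to 1 this degenerates to a plain LRU; higher values bias eviction toward items that are read repeatedly rather than merely recently.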