maypok86 2 days ago
No, what do you want to verify? Any network call makes Redis significantly slower than an on-heap cache. I'd even argue these are tools for different purposes that don't compare well directly. A common pattern, for example, is to use an on-heap cache together with an off-heap cache or a dedicated cache server (like Redis) in an L1/L2/Lx model. There, the on-heap cache serves as the service's first line of defense, protecting the slower, larger caches and databases behind it from excessive load.
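Roughly, the read path in such a layered setup looks like this (just a sketch: the map guarded by a mutex is only a stand-in for a real on-heap cache like otter, and the L2 calls assume the go-redis v9 client):

    // Minimal sketch of an L1/L2 read-through lookup: an in-process (on-heap)
    // tier fronts a shared Redis instance that is one network round trip away.
    package main

    import (
        "context"
        "sync"
        "time"

        "github.com/redis/go-redis/v9"
    )

    type layered struct {
        mu  sync.RWMutex
        l1  map[string]string // stand-in for an on-heap cache (e.g. otter)
        l2  *redis.Client     // shared cache behind a network hop
        ttl time.Duration
    }

    // Get checks the in-process tier first; only a miss pays for the Redis hop.
    func (c *layered) Get(ctx context.Context, key string) (string, error) {
        c.mu.RLock()
        v, ok := c.l1[key]
        c.mu.RUnlock()
        if ok {
            return v, nil // L1 hit: pure memory access, no syscall
        }

        v, err := c.l2.Get(ctx, key).Result() // L2: microseconds even on localhost
        if err != nil {
            return "", err // redis.Nil here means a miss in both tiers
        }

        c.mu.Lock()
        c.l1[key] = v // populate L1 so the next read stays in-process
        c.mu.Unlock()
        return v, nil
    }

    // Set writes through to both tiers.
    func (c *layered) Set(ctx context.Context, key, val string) error {
        c.mu.Lock()
        c.l1[key] = val
        c.mu.Unlock()
        return c.l2.Set(ctx, key, val, c.ttl).Err()
    }

    func main() {
        c := &layered{
            l1:  make(map[string]string),
            l2:  redis.NewClient(&redis.Options{Addr: "127.0.0.1:6379"}),
            ttl: time.Minute,
        }
        ctx := context.Background()
        _ = c.Set(ctx, "user:42", "cached value")
        _, _ = c.Get(ctx, "user:42")
    }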
nchmy a day ago
Yes, I assume that otter etc. are vastly faster, but I suspect there are people who aren't aware of that and, consequently, don't have such a layered approach. So the idea was to show how much faster they are, to further promote adoption of such tools. And, to clarify, I was only thinking about localhost/Unix sockets, to mostly eliminate network latency; anything external to the server would obviously be incomparably slower.

I also suppose it would be perhaps even more interesting/useful to compare the speed of these in-memory caches to Go caches/KV stores with disk persistence, and perhaps even to things like SQLite. Obviously the type of disk would matter significantly, with NVMe being closest in performance (and probably sufficient for most applications).

Anyway, it was just a thought.
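If anyone wants to try it, the comparison I have in mind is roughly this (a sketch assuming go-redis v9, with a plain map again standing in for otter; the Redis benchmark skips itself unless something is listening on 127.0.0.1:6379, and the absolute numbers are machine-dependent):

    // Two benchmarks: an in-process Get vs. a Redis GET over a localhost TCP
    // connection. The point is only the per-Get syscall and round trip that
    // the Redis path pays even with no real network in between.
    package cachebench

    import (
        "context"
        "testing"

        "github.com/redis/go-redis/v9"
    )

    func BenchmarkInProcessGet(b *testing.B) {
        m := map[string]string{"k": "v"} // stand-in for an on-heap cache
        for i := 0; i < b.N; i++ {
            _ = m["k"]
        }
    }

    func BenchmarkRedisLocalhostGet(b *testing.B) {
        ctx := context.Background()
        rdb := redis.NewClient(&redis.Options{Addr: "127.0.0.1:6379"})
        if err := rdb.Set(ctx, "k", "v", 0).Err(); err != nil {
            b.Skipf("no local redis: %v", err)
        }
        b.ResetTimer()
        for i := 0; i < b.N; i++ {
            if err := rdb.Get(ctx, "k").Err(); err != nil {
                b.Fatal(err)
            }
        }
    }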