▲ Show HN: Agent-cache – Multi-tier LLM/tool/session caching for Valkey and Redis
17 points by kaliades 2 days ago | 6 comments
Multi-tier exact-match cache for AI agents, backed by Valkey or Redis: LLM responses, tool results, and session state behind one connection. Framework adapters for LangChain, LangGraph, and the Vercel AI SDK. OpenTelemetry and Prometheus built in. No modules required - it works on vanilla Valkey 7+ and Redis 6.2+.

Shipped v0.1.0 yesterday and v0.2.0 today with cluster mode. Streaming support is coming next.

Existing options lock you into one tier (LangChain = LLM only, LangGraph = state only) or one framework. This solves both.

npm: https://www.npmjs.com/package/@betterdb/agent-cache
Docs: https://docs.betterdb.com/packages/agent-cache.html
Examples: https://valkeyforai.com/cookbooks/betterdb/
GitHub: https://github.com/BetterDB-inc/monitor/tree/master/packages...

Happy to answer questions.
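To make "exact-match" concrete: the cache key is a hash of the full request, and a hit skips the model call entirely. This is an illustrative sketch with an in-memory Map standing in for Valkey/Redis - the names here are made up for the example, not the @betterdb/agent-cache API:

```typescript
import { createHash } from "node:crypto";

// Stand-in for a Valkey/Redis connection; a real tier would issue
// GET/SET against one shared client.
const store = new Map<string, string>();

type LLMRequest = { model: string; messages: { role: string; content: string }[] };

// Exact-match key: hash the serialized request so any change to the
// model, params, or messages produces a different key (and a miss).
function cacheKey(tier: string, payload: unknown): string {
  const digest = createHash("sha256").update(JSON.stringify(payload)).digest("hex");
  return `agent-cache:${tier}:${digest}`;
}

async function cachedLLMCall(
  request: LLMRequest,
  callModel: (r: LLMRequest) => Promise<string>,
): Promise<{ response: string; hit: boolean }> {
  const key = cacheKey("llm", request);
  const cached = store.get(key);
  if (cached !== undefined) return { response: cached, hit: true };
  const response = await callModel(request); // miss: call the model once
  store.set(key, response);
  return { response, hit: false };
}
```

The tool and session tiers follow the same pattern; only the key prefix and TTL policy differ.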
▲ potter098 a day ago | parent | next [-]
I'd be curious how you're handling freshness for tool caches. Exact-match caching seems great for pure functions, but once a tool depends on external state I'd want a TTL or invalidation hook, otherwise the hit rate can look great while the answer is already stale.
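Concretely, what I have in mind is a TTL check on read plus an explicit invalidation hook. Illustrative TypeScript with an in-memory stand-in for Redis (these function names are mine, not this package's API); a Redis-backed version would just use SET key value EX ttl so expiry is handled server-side:

```typescript
// In-memory stand-in for a Redis-backed tool-result cache.
type Entry = { value: string; expiresAt: number };
const toolCache = new Map<string, Entry>();

// Write with a TTL, mirroring Redis `SET key value EX ttl`.
function setWithTTL(key: string, value: string, ttlSeconds: number, now = Date.now()): void {
  toolCache.set(key, { value, expiresAt: now + ttlSeconds * 1000 });
}

// Read, treating expired entries as misses so stale tool output
// never reaches the agent.
function getFresh(key: string, now = Date.now()): string | undefined {
  const entry = toolCache.get(key);
  if (entry === undefined) return undefined;
  if (now >= entry.expiresAt) {
    toolCache.delete(key);
    return undefined;
  }
  return entry.value;
}

// Invalidation hook: drop every cached result for one tool when its
// external state is known to have changed.
function invalidateTool(toolPrefix: string): void {
  for (const key of toolCache.keys()) {
    if (key.startsWith(toolPrefix)) toolCache.delete(key);
  }
}
```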
▲ revenga99 2 days ago | parent | prev [-]
Can you explain what this does?