Show HN: Graphmind – Persistent Memory and Graph for Claude Code (MCP, CLI, GUI) (github.com)
2 points by aouicher 8 hours ago | 1 comment
Hello everyone, my name is Alex. I've been a CTO for many years and I'm passionate about development. Like many others, I've found that AI has changed a lot and made many processes simpler and faster. But it also brings new challenges of its own. As we know, LLMs have no persistent memory: they can't store information beyond their context window and may run the same search 50 times. Many solutions now address this shortcoming, and that's great. But I quickly realized that beyond the memory issue, context itself is the real challenge. Ask Claude Code to quickly understand a 10-year-old monolith and you'll see what I mean: hallucinations pop up fast, and the endless repetition of grep commands to read through all the code becomes a never-ending task, not to mention the token consumption.

So I dug into the problem, and after weeks of development I'm pretty pleased with the result and would like to share my GraphMind project with you. GraphMind lets a model understand a project's deep structure, not just its surface. Concretely, it builds a graph of all functions, classes, and the calls between them (real AST parsing, not grep-style regex matching), and it semantically indexes each symbol using embeddings. So when Claude searches for "wallet creation," it doesn't run a grep; it queries the graph and finds `create_wallet_for_user` or `provisionWallet` even if the exact name isn't in the query. On top of the graph, I added a persistent memory layer and an MCP server to interact with it easily.

This isn't a silver bullet for all AI-related problems, and I don't claim to have reinvented anything. But it's what I was missing to really let Claude Code work on serious codebases without getting lost after 20 minutes or burning the whole context on blind reading.
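To give a feel for the AST-based indexing idea (this is a minimal single-file Python sketch, not GraphMind's actual multi-language parser; the function names in the sample source are just illustrative):

```python
import ast

def build_call_graph(source: str) -> dict[str, set[str]]:
    """Map each top-level function name to the names it calls.

    Walks the AST instead of grepping text, so strings, comments,
    and unrelated identifiers never pollute the result.
    """
    tree = ast.parse(source)
    graph: dict[str, set[str]] = {}
    for node in ast.walk(tree):
        if isinstance(node, ast.FunctionDef):
            calls: set[str] = set()
            for child in ast.walk(node):
                if isinstance(child, ast.Call):
                    func = child.func
                    if isinstance(func, ast.Name):       # plain call: f(x)
                        calls.add(func.id)
                    elif isinstance(func, ast.Attribute):  # method call: obj.f(x)
                        calls.add(func.attr)
            graph[node.name] = calls
    return graph

# Hypothetical sample code to index:
src = """
def provision_wallet(user):
    wallet = create_wallet_for_user(user)
    notify(user, wallet)
"""
print(build_call_graph(src))
```

A real tool would persist these edges in a graph store and attach an embedding to each symbol so that a semantic query like "wallet creation" can rank `create_wallet_for_user` highly without a name match.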
I wanted to measure the difference concretely, so I took one of my repositories with around 31k symbols. A `grep "payment processing"` search returns 5.9 MB of noise (comments, tests, strings, variable names): about 1.4 million tokens for Claude to process. The same query via GraphMind returns 1 KB of structured data, sorted by relevance, with callers and callees already attached: 257 tokens. That's roughly 5,700 times less. Over a typical session with 5 to 10 searches, that's about 10 million tokens saved. And let me reassure you: I'm not hallucinating, this is the observed result.

If you're working with Claude Code or Cursor on a large codebase, or across lots of different projects, and you're struggling with the same issues, I'd love to hear your thoughts. The repo is public: https://github.com/aouicher/graphmind. I'd be happy to discuss it.
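As a sanity check on those numbers, the ratio falls out of a common rule of thumb of roughly 4 characters per token (an assumption on my part; the figures above were presumably measured with a real tokenizer):

```python
def approx_tokens(n_bytes: int) -> int:
    # Rough heuristic: ~4 bytes of English/code per token.
    # Real tokenizers (e.g. tiktoken) will give different exact counts.
    return n_bytes // 4

grep_tokens = approx_tokens(5_900_000)  # 5.9 MB of raw grep output
graph_tokens = 257                      # the structured GraphMind response
print(grep_tokens)                  # ~1.47 million, close to the ~1.4M figure
print(grep_tokens // graph_tokens)  # ~5,700x, matching the quoted ratio
```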