Everyone's trying vectors and graphs for AI memory. We went back to SQL
9 points by Arindam1729 10 hours ago | 6 comments
When we first started building with LLMs, the gap was obvious: they could reason well in the moment, but forgot everything as soon as the conversation moved on. You could tell an agent, “I don’t like coffee,” and three steps later it would suggest espresso again. It wasn’t broken logic; it was missing memory.

Over the past few years, people have tried a bunch of ways to fix it:

1. Prompt stuffing / fine-tuning – Keep prepending history. Works for short chats, but tokens and cost explode fast.

2. Vector databases (RAG) – Store embeddings in Pinecone/Weaviate. Recall is semantic, but retrieval is noisy and loses structure.

3. Graph databases – Build entity-relationship graphs. Great for reasoning, but hard to scale and maintain.

4. Hybrid systems – Mix vectors, graphs, key-value, and relational DBs. Flexible but complex.

And then there’s the twist: relational databases! Yes, the tech that’s been running banks and social media for decades is looking like one of the most practical ways to give AI persistent memory. Instead of exotic stores, you can:

- Keep short-term vs long-term memory in SQL tables

- Store entities, rules, and preferences as structured records

- Promote important facts into permanent memory

- Use joins and indexes for retrieval

(A rough sketch of what those tables can look like is at the end of this post.)

This is the approach we’ve been working on at Gibson. We built an open-source project called Memori (https://memori.gibsonai.com/), a multi-agent memory engine that gives your AI agents human-like memory.

It’s kind of ironic: after all the hype around vectors and graphs, one of the best answers to AI memory might be the tech we’ve trusted for 50+ years.

I would love to know your thoughts about our approach!
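To make the table idea concrete, here’s a minimal sketch using sqlite3 from the Python standard library. The table names, columns, and the importance-based promotion rule are illustrative assumptions on my part, not Memori’s actual schema:

    # Minimal sketch of "SQL as agent memory": a short-term conversation log,
    # long-term structured facts, promotion, and index-backed retrieval.
    import sqlite3

    conn = sqlite3.connect(":memory:")
    conn.executescript("""
    CREATE TABLE short_term_memory (
        id         INTEGER PRIMARY KEY,
        session_id TEXT NOT NULL,
        role       TEXT NOT NULL,      -- 'user' or 'assistant'
        content    TEXT NOT NULL,
        created_at TEXT DEFAULT CURRENT_TIMESTAMP
    );
    CREATE TABLE long_term_memory (
        id         INTEGER PRIMARY KEY,
        entity     TEXT NOT NULL,      -- e.g. 'user'
        kind       TEXT NOT NULL,      -- 'preference', 'rule', 'fact'
        content    TEXT NOT NULL,
        importance REAL NOT NULL DEFAULT 0.0
    );
    CREATE INDEX idx_stm_session ON short_term_memory(session_id);
    CREATE INDEX idx_ltm_entity  ON long_term_memory(entity, kind);
    """)

    # Short-term memory: log the raw conversation turn.
    conn.execute(
        "INSERT INTO short_term_memory (session_id, role, content) VALUES (?, ?, ?)",
        ("sess-1", "user", "I don't like coffee."),
    )

    # Promote the important fact into permanent memory as a structured record.
    conn.execute(
        "INSERT INTO long_term_memory (entity, kind, content, importance) VALUES (?, ?, ?, ?)",
        ("user", "preference", "dislikes coffee (including espresso)", 0.9),
    )
    conn.commit()

    # Retrieval before the next model call: recent turns plus high-importance
    # preferences, both served by ordinary indexes.
    recent = conn.execute(
        "SELECT role, content FROM short_term_memory "
        "WHERE session_id = ? ORDER BY created_at DESC LIMIT 10",
        ("sess-1",),
    ).fetchall()
    prefs = conn.execute(
        "SELECT content FROM long_term_memory "
        "WHERE entity = 'user' AND kind = 'preference' "
        "ORDER BY importance DESC LIMIT 5",
    ).fetchall()
    print(recent, prefs)

The retrieved rows then get prepended to the prompt for the next call; plain WHERE clauses and indexes do the recall work.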
3rdSon_ 8 minutes ago
This is a great approach. I will take a look at it.
mynti 7 hours ago
How does Memori choose what part of past conversations is relevant to the current conversation? Is there some maximum amount of memory it can feasibly handle before it will spam the context with irrelevant "memories"?
Xmd5a 5 hours ago
>It wasn’t broken logic, it was missing memory.

sigh
thedevindevops 6 hours ago
How does what you've described solve the coffee/espresso problem? You can't write a SQL query where a record like 'espresso' comes back for 'coffee', can you?
gangtao 10 hours ago
Who would've thought that 50 years of 'SELECT * FROM reality' might beat the latest semantic embedding wizardry? |