great_psy 6 hours ago
LLM memory (in general, any implementation) is good in theory. In practice, as it grows it gets just as messy as not having it. In the example on your front page you say "continue working on my project", but you're rarely working on just one project: you might have 5 or 10 in memory, each of which made sense to store at the time. So now you still have to say "continue working on the sass project". Sure, there's some context around the details, but you pay for it by filling up your LLM context and making extra MCP calls.
dennisy 6 hours ago
True! But this is a very naive implementation; a proper implementation could overcome these challenges.
vasco 5 hours ago
And once you're being specific about what it needs to remember, you're zero steps away from having simply told the AI to read and write files labeled "memory".
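
The equivalence vasco describes can be sketched minimally: a "memory" tool that, once you must name the project anyway, reduces to plain file reads and writes keyed by project name. All function and path names below are hypothetical illustrations, not any real product's API.

```python
import json
from pathlib import Path

# Hypothetical sketch: "memory" as plain file appends/reads keyed by project.
MEMORY_DIR = Path("memory")

def remember(project: str, note: str) -> None:
    """Append a note to the named project's memory file."""
    MEMORY_DIR.mkdir(exist_ok=True)
    path = MEMORY_DIR / f"{project}.jsonl"
    with path.open("a") as f:
        f.write(json.dumps({"note": note}) + "\n")

def recall(project: str) -> list[str]:
    """Read back every note stored for the named project."""
    path = MEMORY_DIR / f"{project}.jsonl"
    if not path.exists():
        return []
    with path.open() as f:
        return [json.loads(line)["note"] for line in f]

# Usage: the caller still has to name the project, just as in the
# "continue working on the sass project" prompt above.
remember("sass", "billing flow lives in checkout/")
print(recall("sass"))
```

The point of the sketch is that nothing here requires an MCP server or a vector store: once the user disambiguates the project themselves, the tool is file I/O.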