kenforthewin | 14 hours ago
This is just RAG. Yes, it's not using a vector database, but it's building an index file of semantic connections and constructing hierarchical semantic structures in the filesystem to aid retrieval. This is RAG. On a side note, I've been building an AI-powered knowledge base (yes, it uses RAG) with wiki synthesis and similar ideas; take a look at https://github.com/kenforthewin/atomic
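To make the "index file of semantic connections" idea concrete, here's a minimal sketch of what maintaining such an index might look like. The note layout and topic tags are invented for illustration; the real project's file format may differ.

```python
from collections import defaultdict

def build_index(notes):
    """Build a markdown index mapping each topic to the note files that
    mention it. `notes` is {path: set_of_topics} -- a flat stand-in for
    the hierarchical semantic structure described above."""
    topic_map = defaultdict(list)
    for path, topics in notes.items():
        for topic in topics:
            topic_map[topic].append(path)
    lines = ["# Index"]
    for topic in sorted(topic_map):
        lines.append(f"- **{topic}**: " + ", ".join(sorted(topic_map[topic])))
    return "\n".join(lines)
```

An agent could regenerate this file after every write, so retrieval can start from the index instead of scanning every note.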
panarky | 10 hours ago
There's nothing about RAG that requires embeddings. The retrieval part can be grep if you don't care about semantic search.
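A grep-style retriever really is all RAG needs. Here's a rough sketch under assumed names (`grep_retrieve`, `build_prompt` are illustrative, not from any of the projects mentioned): plain substring matching over an in-memory corpus stands in for shelling out to `grep -ri`.

```python
def grep_retrieve(query_terms, corpus, max_hits=5):
    """Lexical, grep-style retrieval: scan {filename: text} for lines
    containing any query term, case-insensitively. No embeddings."""
    hits = []
    for fname, text in corpus.items():
        for line in text.splitlines():
            if any(t.lower() in line.lower() for t in query_terms):
                hits.append(f"{fname}: {line}")
    return hits[:max_hits]

def build_prompt(question, context_lines):
    """Stuff the retrieved lines into a prompt -- the 'augmented
    generation' half of RAG."""
    context = "\n".join(context_lines)
    return f"Answer using only this context:\n{context}\n\nQuestion: {question}"
```

Swap the retriever for embeddings later if lexical matching stops being good enough; the generation side doesn't change.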
Jet_Xu | 13 hours ago
I believe a multimodal KB plus agentic RAG is a suitable solution for a personal KB. Imagine you have tons of office docs and want to dig into complex topics within them. You could try https://github.com/JetXu-LLM/DocMason: it fully retrieves all diagram and chart info from PowerPoint and Excel files, then leverages native AI agents (e.g. Codex) to conduct agentic RAG.
darkhanakh | 14 hours ago
Eh, I'd push back on "just RAG". Like, yes, the retrieval-generation loop is RAG-shaped; no one's arguing that. But the interesting bit here is the write loop: the LLM is authoring and maintaining the wiki itself, building backlinks, filing its own outputs back in. That's not retrieval, that's knowledge synthesis. In vanilla RAG your corpus is static; here it isn't. Also, the linting pass is doing something genuinely different: auditing inconsistencies, imputing missing data, suggesting connections. That's closer to an assistant maintaining a zettelkasten than a search engine returning top-k chunks. Cool project btw, will check it out.
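The write loop and lint pass described above could be sketched like this. Everything here is hypothetical (the `[[wiki-link]]` syntax, the dict standing in for the filesystem, the function names); the actual project's conventions may differ.

```python
import re

def file_note(wiki, title, body, links):
    """The 'write loop': add a new note (here an LLM would supply body
    and links) and append a backlink to every note it references."""
    wiki[title] = body + "\n\nLinks: " + ", ".join(f"[[{t}]]" for t in links)
    for target in links:
        if target in wiki and f"[[{title}]]" not in wiki[target]:
            wiki[target] += f"\nBacklink: [[{title}]]"
    return wiki

def lint(wiki):
    """A minimal lint pass: report wiki-links that point at notes
    which don't exist yet -- candidates for synthesis or repair."""
    missing = []
    for title, text in wiki.items():
        for target in re.findall(r"\[\[(.+?)\]\]", text):
            if target not in wiki:
                missing.append((title, target))
    return missing
```

The point of the sketch: the corpus mutates on every call to `file_note`, and the linter produces work items rather than search results, which is exactly what separates this from a static top-k retriever.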
| |||||||||||||||||||||||||||||
alfiedotwtf | 13 hours ago
You should have started your comment with "I have a few qualms with this app". I've been thinking about something along the lines of an LLM wiki for a while now, one that could truly act as a wingman-executive-assistant-second-brain, but OP has gone deeper than my ADHD thoughts could have possibly gone. Looking forward to seeing this come to fruition.
locknitpicker | 9 hours ago
> This is just RAG.

More to the point, this is how LLM assistants like GitHub Copilot use their custom instructions file, aka copilot-instructions.md: https://docs.github.com/en/copilot/how-tos/configure-custom-...