carlhjerpe a day ago
Isn't this just repackaged RAG pretty much?
simonw a day ago
Depends which definition of RAG you're talking about. RAG was originally about adding extra information to the context so that an LLM could answer questions that needed that extra context. On that basis I guess you could call skills a form of RAG, but honestly at that point the entire field of "context engineering" can be classified as RAG too. Maybe RAG as a term is obsolete now, since it really just describes how we use LLMs in 2025.
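As a minimal sketch of that original sense of RAG: retrieve some relevant text, then stuff it into the prompt as extra context. The toy corpus and keyword-overlap retriever below are illustrative stand-ins, not any particular library; the assembled prompt would then be sent to any LLM.

    # Toy illustration of "classic" RAG: retrieve, then prepend to the prompt.
    CORPUS = [
        "Skills are folders of markdown files that Claude loads on demand.",
        "RAG retrieves documents and adds them to the model's context.",
    ]

    def retrieve(question: str, top_k: int = 1) -> list[str]:
        # Stand-in retriever: rank documents by keyword overlap with the question.
        words = set(question.lower().split())
        ranked = sorted(CORPUS, key=lambda d: -len(words & set(d.lower().split())))
        return ranked[:top_k]

    def build_prompt(question: str) -> str:
        # The retrieved text is simply added to the context the model sees.
        context = "\n".join(retrieve(question))
        return f"Context:\n{context}\n\nQuestion: {question}"

    print(build_prompt("What is RAG?"))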
rco8786 a day ago
Seems like that’s it? You give it a knowledge base of “skills”, aka markdown files with context in them, and Claude figures out when to pull them into context.
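Roughly, such a skill is a markdown file whose frontmatter tells Claude what it covers, so Claude can decide when to load the rest of the file into context. The fields and contents below are illustrative, not necessarily the canonical format:

    ---
    name: pdf-processing
    description: Extract text and tables from PDF files. Use when the user asks about reading or filling PDFs.
    ---

    # PDF processing

    Instructions, examples, and any helper scripts the model should
    follow when this skill is pulled into context...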
prophesi a day ago
I think RAG is out of favor because models have a much larger context these days, so the loss of information density from vectorization isn't worth it, and chunk retrieval doesn't fetch the information surrounding what's retrieved.
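A toy sketch of that loss: each chunk gets squashed into a fixed-size vector, and retrieval hands back only the best-matching chunk, not the paragraphs around it. The hash-based embed() here is just a stand-in for a real embedding model.

    import hashlib, math

    DIM = 64

    def embed(text: str) -> list[float]:
        # Toy bag-of-words embedding: a whole chunk becomes DIM floats,
        # which is where the loss of information density happens.
        vec = [0.0] * DIM
        for word in text.lower().split():
            h = int(hashlib.md5(word.encode()).hexdigest(), 16)
            vec[h % DIM] += 1.0
        norm = math.sqrt(sum(v * v for v in vec)) or 1.0
        return [v / norm for v in vec]

    def top_chunk(chunks: list[str], query: str) -> str:
        # Cosine similarity between the query and each chunk (vectors are unit length).
        q = embed(query)
        scores = [sum(a * b for a, b in zip(embed(c), q)) for c in chunks]
        best = scores.index(max(scores))
        # Only chunks[best] comes back; its neighbors, which may hold the
        # surrounding explanation, are dropped.
        return chunks[best]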