ec109685 2 days ago

If you make models fast enough, you can onboard that expert developer instantly and let them reason their way to a solution, especially when given access to a RAG system too.

Over time, I think models will add more memory and institutional knowledge capture rather than starting from a blank slate each time.

airstrike 2 days ago

I thought of that as I wrote my comment, but I think the infrastructure and glue to make that possible in a consistent, fast and scalable way is still a few years out.

lucasacosta_ 2 days ago

Definitely. For now, the "frontier-level" papers (those working on repository-level code maintenance) necessarily depend on previously (and statically) generated Code Knowledge Graphs or snippet-retrieval systems, which complicates the fast and scalable aspects: any change in the code changes the graph, hence requiring a rebuild. And given the context limit, you have to rely on graph queries to surface the relevant parts, so at the end of the day the model reads snippets instead of the full code, which makes consistency an issue, as it can't learn from the codebase as a whole.

Papers I'm referring to (just a couple of examples; there are more):

- CodexGraph [https://arxiv.org/abs/2408.03910] - Graph

- Agentless [https://arxiv.org/abs/2407.01489] - Snippet-Retrieval
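
To make the trade-off concrete, here's a rough sketch of that static-graph-plus-retrieval loop (hypothetical node names, networkx for the graph; neither paper prescribes this exact setup): the agent queries a prebuilt graph for a small neighborhood of snippets, and any edit to the code invalidates that part of the graph.

    import networkx as nx

    # Hypothetical, minimal code knowledge graph: nodes are files/classes/functions,
    # edges are "contains"/"calls" relations. Names are illustrative only.
    G = nx.DiGraph()
    G.add_edge("repo/auth.py", "AuthService", relation="contains")
    G.add_edge("AuthService", "AuthService.login", relation="contains")
    G.add_edge("AuthService.login", "hash_password", relation="calls")
    G.add_edge("repo/utils.py", "hash_password", relation="contains")

    def relevant_nodes(symbol, max_hops=2):
        """Return nearby graph nodes whose snippets the agent would fetch,
        instead of reading the full repository."""
        nearby = nx.single_source_shortest_path_length(
            G.to_undirected(), symbol, cutoff=max_hops)
        return sorted(nearby)

    # The model only ever sees these few nodes' snippets, not the whole codebase...
    print(relevant_nodes("AuthService.login"))

    # ...and any edit to repo/auth.py changes this neighborhood,
    # forcing a (static) rebuild of the graph before the next query.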

airstrike a day ago

Thanks for these links. I really appreciate it.