▲ ssyhape 2 days ago
[flagged]
▲ 1st1 2 days ago | parent | next [-]
> My concern is scale though. Once you have thousands of nodes the Markdown files themselves become a mess to navigate

The agent will update the graph. If you have thousands of nodes in Markdown, that means you have a highly non-trivial, large code base, and this is where lat will start saving you time: agents will navigate the code much faster, and you'll be reviewing semantic changes in lat with every diff, potentially suggesting that the agents alter the code or add more context to lat. You still have to be engaged in maintaining your codebase, just at a higher level.
▲ cyanydeez 2 days ago | parent | prev | next [-]
We all know this isn't for humans; it's for LLMs. So the better question is why there isn't a bootstrap to get your LLM to scaffold it out and assist in detailing it.
▲ ossianericson 2 days ago | parent | prev [-]
I would say that when you treat your Markdown as the authoritative source, you of course don't get automated updates, but that is my choice. It takes knowledge of the domain, but deep, specific knowledge is worth so much more than automated updates. I use AI to generate the initial Markdown, then edit it myself. Sure, it doesn't get auto-updated, but I would never trust advice that gets updated on the fly based on AI output from the internet.