cyanydeez 2 hours ago
I played around with a local LLM to try to build a wiki-like DAG. It made a lot of stupid errors, from vague generic things like interpreting content based on file names, to not following redirects and placing the redirect response in the files instead. I've also had them convert something like an Excel-formatted document to markdown. It worked pretty well as long as I was examining the output, but the longer it ran in context, the more likely it was to slip in things that seemed related but weren't part of the breakdown.

The only way I've found to mitigate some of this is to make every file a small, purpose-built doc. That way you can easily use git to revert changes, and you also limit the damage to that small context every time they touch a file. Anyone who thinks they're a genius for having the model create or update docs isn't actually reading the output.
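One nice side effect of the small-file approach: reverting a bad LLM edit stays a one-liner per file (the path here is hypothetical):

    git diff -- docs/redirects.md      # inspect what the model changed
    git restore docs/redirects.md      # discard the change if it's junk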
sebastiennight an hour ago
> I've also had them convert to markdown something like an excel formatted document.

This looks like a task where the LLM would be best used to write a deterministic script or program that then does the conversion. Trusting an LLM to make the change without tools is like asking the smartest person you know to recite the converted document out loud from memory. At some point they'll get distracted, get something wrong, or unwittingly inject their own biases and ideas whenever the source data is counter-intuitive to them.
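For the Excel-to-markdown case, the deterministic script the LLM writes can be tiny. A rough sketch, assuming pandas with openpyxl and tabulate installed, and simple rectangular sheets with a header row (the file name is a placeholder):

    # xlsx -> markdown, done deterministically instead of in-context
    import pandas as pd

    def xlsx_to_markdown(path: str) -> str:
        # sheet_name=None reads every sheet into a {name: DataFrame} dict
        sheets = pd.read_excel(path, sheet_name=None)
        parts = []
        for name, df in sheets.items():
            parts.append(f"## {name}")
            parts.append(df.to_markdown(index=False))  # requires tabulate
            parts.append("")
        return "\n".join(parts)

    if __name__ == "__main__":
        print(xlsx_to_markdown("report.xlsx"))

Run it once, eyeball the output, and then every future conversion is exact: no drift the longer the context runs.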
| ||||||||||||||