bambax 8 hours ago
The idea is great and necessary. It doesn't seem super hard to replicate, but why would anyone build their own solution if something already exists and works fine? The thing that got me thinking: how do you make sure an LLM won't eventually hallucinate approval -- or outright lie about it, to get going? Anyway, congrats, this sounds really cool.
foota 7 hours ago | parent
At some point the real tool has to be called, and at that point you can do actual checks that do not rely on the AI's output (e.g., store the text that the AI generated and check in code that there was an approval for that exact text).
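A minimal sketch of that kind of check in Python (all names here are hypothetical): approval is recorded out of band by a human, keyed on a hash of the exact generated text, and the gate verifies that record in plain code, so nothing the model claims about approval has any effect.

    # Hypothetical approval gate: the model cannot "say" it was approved;
    # the tool runs only if a human recorded approval for this exact text.
    import hashlib

    approved_hashes: set[str] = set()  # in practice a database, not memory

    def fingerprint(text: str) -> str:
        return hashlib.sha256(text.encode("utf-8")).hexdigest()

    def record_human_approval(text: str) -> None:
        # Called only from the human-facing approval UI, never by the model.
        approved_hashes.add(fingerprint(text))

    def run_tool(generated_text: str) -> str:
        # Hard gate in ordinary code: no recorded approval, no execution.
        if fingerprint(generated_text) not in approved_hashes:
            raise PermissionError("no recorded approval for this exact text")
        return actually_run_tool(generated_text)  # the real side effect

    def actually_run_tool(text: str) -> str:
        return f"ran with: {text!r}"

Hashing the exact text also catches the case where the model gets approval for one payload and then quietly swaps in a different one before execution.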