▲ raw_anon_1111 2 hours ago
Telling a bunch of agents to do stuff is like treating them as senior developers you trust to take an ambiguous business requirement, use their best judgment, and ask you if they have a question. But doing that with AI feels like hiring an outsourcing firm for a project: they come back with an unmaintainable mess that's hard to reason through five weeks later. I very much micromanage my AI agents and test and validate their output. I treat them like mid-level ticket-taking code monkeys.
▲ bonesss an hour ago | parent | next [-]
My experience with good outsourcing firms is that they come back with heavily documented solutions that are 95% of what you actually wanted, leaving you uncomfortably wondering whether doing it yourself would have been better. I'm not sure which is worse: something close to garbage with a short shelf life that anyone can see through, or something so close to usable that it can fully bite me in the ass…
▲ strongpigeon an hour ago | parent | prev [-]
I fully believe that if I didn't review its output and ask it to clean things up, the code would become unmaintainable real quick. The trick I've found, though, is to be detailed enough in the design at both a technical and non-technical level, sometimes iterating on it a few times with the agent before telling it to go for it (which can easily take 30 minutes). That's how I used to deal with L4s, except codex codes much faster (but sometimes in the wrong direction).