bulbar 2 hours ago

This assumes failures can be detected and fixed more easily than generating the corresponding change. I am not convinced that's the case.

Counterpoints to my own argument:

1. We don't know yet in detail what AI is good at.

2. AI doesn't need to be perfect, just "good enough", whatever that means for a specific project. More failures while saving hundreds of thousands of dollars each year might be acceptable, for example.

throwaway041207 an hour ago | parent

> 2. AI doesn't need to be perfect, just "good enough", whatever that means for a specific project. More failures while saving hundreds of thousands of dollars each year might be acceptable, for example.

This, I think, is the unexplored aspect of what's happening right now. Guardrails around "good enough" systems are where the future value lies. In the future, code will never be as good as it was when the artisans were writing it, but if you have an automated process to validate and verify mediocre code (and kick it back to the AI for refinement when it fails) before it's fully productionized, then you have a pathway to scaling agentic coding.
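A minimal sketch of such a validate-and-refine loop. Everything here is a stand-in assumption, not anyone's actual system: `generate` would really be an LLM call that receives the failure feedback in its prompt, and `validate` would really be your test/lint/static-analysis suite.

```python
# Guardrail loop: accept generated code only after automated checks pass,
# kicking failures back to the generator for refinement.

def generate(task: str, feedback: list) -> str:
    """Stand-in for an LLM call; real code would fold `feedback` into the prompt."""
    if not feedback:
        return "def add(a, b): return a - b"  # first draft contains a bug
    return "def add(a, b): return a + b"      # "refined" draft after feedback

def validate(code: str) -> list:
    """Stand-in guardrail: run the candidate against known test cases."""
    ns: dict = {}
    exec(code, ns)
    failures = []
    for a, b, want in [(1, 2, 3), (0, 0, 0), (-1, 1, 0)]:
        got = ns["add"](a, b)
        if got != want:
            failures.append(f"add({a}, {b}) returned {got}, expected {want}")
    return failures

def guarded_generate(task: str, max_attempts: int = 3):
    feedback: list = []
    for _ in range(max_attempts):
        code = generate(task, feedback)
        feedback = validate(code)
        if not feedback:
            return code  # passed the guardrails: safe to productionize
    return None          # never got good enough: escalate to a human

print(guarded_generate("write add(a, b)"))
```

The point of the sketch is the shape, not the stubs: mediocre output is acceptable because nothing ships until the checks pass, and a bounded retry count caps how much refinement you pay for before a human steps in.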