daxfohl 5 hours ago
I don't even think that "singularity-level coding agents" get us there. A big part of engineering is working with PMs, with management, across teams, and with users, to help distill their disparate wants and needs into a coherent and usable system. Knowing when to push back, when to trim down a requirement, when to replace a requirement with something slightly different, when to expand a requirement because you're aware of multiple distinct use cases it could apply to, or even when to add a new requirement that's interesting enough to warrant updating your "vision" for the product itself: that's the real engineering work that even a "singularity-level coding agent" alone could not replace.

An AI agent almost universally says "yes" to everything. It has to! If OpenAI starts selling tools that refuse to do what you tell them, who would ever buy them? And maybe that's the fundamental distinction. Something that says "yes" to everything isn't a partner, it's a tool, and a tool can't replace a partner by itself.
Twey 5 hours ago | parent
I think that's exactly an example of climbing the abstraction ladder. An agent that's incapable of reframing the current context will, given a bad task, try its best to complete it. An agent capable of generalizing to an overarching goal can figure out when the current objective is at odds with the more important goal. You're correct that these aren't really ‘coding agents’ any more, though. Any more than software developers are!