TheOtherHobbes 11 hours ago
Not necessarily. It depends on whether the process is deterministic and repeatable. If an AI generates a process more quickly than a human, and the process can be run deterministically, and the outputs are testable, then the process can run without direct human supervision after initial testing - which is how most automated processes work. The testing should happen anyway, so any speed increase in process generation is a productivity gain.

Human monitoring only matters if the AI is continually improvising new solutions to dynamic problems and those solutions are significantly wrong or unreliable. That's a management/analysis problem, no different in principle from managing a team. The key difference in practice is that with a team you can hire and fire people, intervene to change goals and culture, and rearrange roles. With an agentic workflow you can change the prompts, use different models, and redesign the flow - but your choices are more constrained.
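To make that concrete, here's a minimal sketch of the pattern in Python. Nothing here comes from a real agent framework: llm_generate is a stand-in for any text-in/text-out model call, and the test data is invented. The point is that the non-deterministic step runs exactly once, and only its tested artifact gets reused.

    import subprocess
    import tempfile

    def generate_transform_script(llm_generate) -> str:
        # The only non-deterministic step: ask the model for a script ONCE.
        # llm_generate is any text-in/text-out callable; no client assumed.
        return llm_generate(
            "Write a Python script that reads CSV on stdin and prints "
            "only the rows where the 'status' column equals 'active'."
        )

    def passes_tests(script_src: str) -> bool:
        # Deterministic acceptance test: known input must yield known output.
        with tempfile.NamedTemporaryFile("w", suffix=".py", delete=False) as f:
            f.write(script_src)
            path = f.name
        sample_in = "id,status\n1,active\n2,inactive\n"
        expected = "id,status\n1,active\n"
        result = subprocess.run(
            ["python", path], input=sample_in,
            capture_output=True, text=True, timeout=10,
        )
        return result.returncode == 0 and result.stdout == expected

    # If the generated script passes, pin it (commit it, hash it) and run it
    # from then on without the model in the loop: reruns are exactly as
    # deterministic as the script itself.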
lkjdsklf 10 hours ago
The issue is that LLMs are, by design, non-deterministic. That means that, with the current technology, there can never be a deterministic agent. Obviously humans aren't deterministic either, but their error bars are a lot closer together than they are with today's LLMs. An easy example to point at is the coding agent, circulating recently, that removed someone's home directory. I'm not saying a human has never done that, but it's far less likely, because it's so far outside the realm of normal operations.

So as of today, we need humans in the loop. The people making these products understand this - that's why they have all those permission prompts asking you to accept each command before it runs.
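A minimal sketch of what that kind of gate looks like, in Python - propose_command() and the denylist entries are invented for illustration, not taken from any real product:

    import shlex
    import subprocess

    DENYLIST = ("rm -rf", "mkfs", "dd if=")  # crude backstop, not a substitute for review

    def run_with_approval(command: str) -> None:
        # Refuse obviously destructive commands outright.
        if any(bad in command for bad in DENYLIST):
            print(f"refused: {command}")
            return
        # Nothing runs until a human says yes.
        answer = input(f"agent wants to run: {command!r}  [y/N] ").strip().lower()
        if answer != "y":
            print("skipped")
            return
        subprocess.run(shlex.split(command), check=False)

    # run_with_approval(propose_command())  # propose_command() stands in for the agent

The denylist is deliberately crude: the real safety property comes from the human approval step, not from pattern matching.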