zauberberg | 13 hours ago
You’re raising the key issue: it’s not whether AI can produce an answer, it’s whether an organisation can rely on it, and who is accountable when it fails. A few points where I mostly agree, with one nuance:

Humans are in the loop today because accountability is clear. You can coach, discipline, replace, or escalate a person; you can’t meaningfully “hold an API responsible” in the same way. But companies don’t always solve reliability by having a person review everything. Over time they tend to shift to process-based controls: stronger testing, monitoring, audits, fallback procedures, and contractual guarantees. That’s how they already manage critical dependencies they also can’t “fire” overnight (cloud services, core software vendors, etc.).

Vendor lock-in is real, but it’s also a risk firms can choose to mitigate. Multi-vendor options, portability clauses, and keeping an exit path in the architecture are basically the organisational equivalent of being able to replace a bad supplier (a rough sketch of what that can look like in code is below).

Domains that demand high fault tolerance will keep humans involved longer. The likely change is not “no humans,” but fewer humans overseeing more automated work, with people focused on exceptions, risk ownership, and sign-off in the most sensitive areas.

So yes: we need humans where the downside is serious and someone has to own the risk. My claim is just that as reliability and controls improve, organisations will try to shrink the amount of human review, because that review starts to look like the most expensive part of the system.
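Purely as illustration, here is a minimal sketch of the “process controls plus exit path” idea: a wrapper that treats an AI provider like any other critical dependency, with a second vendor as fallback, basic monitoring counters, and an escalation hook to a human owner. The names `primary_llm`, `backup_llm`, and `escalate_to_human` are hypothetical stand-ins, not real APIs.

```python
# Illustrative sketch only: an AI vendor managed like any other critical dependency,
# with monitoring, a fallback vendor, and an exception path to a human risk owner.
import logging
import random
from dataclasses import dataclass

logging.basicConfig(level=logging.INFO)
log = logging.getLogger("ai-dependency")


def primary_llm(prompt: str) -> str:
    """Stand-in for the primary vendor's API; fails now and then."""
    if random.random() < 0.3:
        raise TimeoutError("primary vendor unavailable")
    return f"[primary] answer to: {prompt}"


def backup_llm(prompt: str) -> str:
    """Stand-in for a second vendor kept warm as the exit path."""
    return f"[backup] answer to: {prompt}"


def escalate_to_human(prompt: str, reason: str) -> str:
    """Stand-in for routing an exception to a human owner for sign-off."""
    log.warning("escalating to human review: %s", reason)
    return f"[human-reviewed] answer to: {prompt}"


@dataclass
class ReliabilityControls:
    """Process-based controls: a confidence gate plus simple monitoring counters."""
    confidence_threshold: float = 0.8
    failures: int = 0
    fallbacks: int = 0
    escalations: int = 0

    def answer(self, prompt: str, confidence: float) -> str:
        # Low-confidence (or high-stakes) requests go straight to a human.
        if confidence < self.confidence_threshold:
            self.escalations += 1
            return escalate_to_human(prompt, f"confidence {confidence:.2f} below gate")
        try:
            return primary_llm(prompt)
        except Exception as exc:  # monitoring: count and log every failure
            self.failures += 1
            log.error("primary vendor failed: %s", exc)
            self.fallbacks += 1
            return backup_llm(prompt)  # exit path: second vendor, same interface


if __name__ == "__main__":
    controls = ReliabilityControls()
    print(controls.answer("summarise this contract", confidence=0.95))
    print(controls.answer("approve this payment", confidence=0.4))
    print(f"failures={controls.failures} fallbacks={controls.fallbacks} "
          f"escalations={controls.escalations}")
```

The specifics don’t matter; the point is that it’s the same playbook firms already use for dependencies they can’t fire overnight, with humans kept for the exceptions rather than for every call.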
nis0s | 11 hours ago
> The likely change is not “no humans,” but fewer humans overseeing more automated work, with people focused on exceptions, risk ownership, and sign-off in the most sensitive areas.

The problem is that AI can scale, and compound its mistakes, exponentially faster than any human can review or oversee it. There need to be better control mechanisms.