| ▲ | "Human in the loop" sounds hopeful but more likely is a grace period(bravenewteams.substack.com) | ||||||||||||||||||||||
4 points by zauberberg 7 hours ago | 5 comments
nis0s 7 hours ago
The problem is this: how do you make sure the outcomes you get are accurate, precise, consistent, and reliable? Or whether the results are actually aligned with the outcomes you want? If I cannot guarantee outcomes (people usually get demoted or fired when they don't do exactly what the boss says), then I need something in the loop that I can hold accountable for ensuring I get the outcomes I want.

If an agent keeps giving me bad information or code, how do I demote or fire that agent when I am subject to vendor lock-in? Everyone is using the same models and APIs across the board. We will need people for quality assurance even if AI develops cognitive flexibility on par with humans. I can hold a person, or a group of people, accountable for their mistakes. I cannot hold AI agents or APIs accountable for shit.

Let's also not forget that some mistakes have very little fault tolerance, and the software we write has never been able to meet that demand. See all the recent plane crashes, for example. If human capabilities are the bottleneck, then those capabilities need to be augmented.
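A minimal sketch of the gate this implies, assuming a hypothetical run_agent() standing in for whatever model or API drafts the output; all names here are illustrative, not any vendor's API. The point is the reviewer field: a named human recorded alongside the output is something you can actually hold accountable.

    # Human-in-the-loop approval gate (sketch). run_agent() is a placeholder
    # for a model call; nothing here is specific to any vendor.
    from dataclasses import dataclass

    @dataclass
    class ReviewedResult:
        output: str
        reviewer: str   # the accountable human, recorded with the artifact
        approved: bool

    def run_agent(task: str) -> str:
        return f"draft answer for: {task}"  # stand-in for a model call

    def gated_run(task: str, reviewer: str) -> ReviewedResult:
        draft = run_agent(task)
        print(draft)
        verdict = input(f"{reviewer}, ship this output? [y/N] ")
        return ReviewedResult(draft, reviewer, verdict.strip().lower() == "y")

Swap out the input() prompt for a ticketing or code-review step in practice; the structure, draft then named sign-off, is what makes the accountability traceable.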
| |||||||||||||||||||||||