w-m 10 hours ago
Interesting tool, congrats on the launch! I was wondering: have you thought about automation bias or automation complacency [0]? Sticking with the drop-tables example: if you have an agent that works quite well, the human in the loop will nearly always approve the task. The human will then learn over time that the agent "can be trusted", and will stop reviewing the pings carefully. Hitting the "approve" button will become somewhat automated by the human, and the risky tasks won't be caught by the human anymore.
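One commonly discussed countermeasure is to make approval friction proportional to risk, so the one-click "approve" reflex can't fire on dangerous tasks. A minimal sketch (all names and keyword lists here are hypothetical, not anything the launched tool actually implements):

```python
# Sketch of a risk-aware approval prompt: low-risk tasks get the usual
# one-key approve, while high-risk tasks force the reviewer to re-type
# the command, interrupting the automated "approve" habit.
# The keyword list is illustrative only.

RISKY_KEYWORDS = ("drop table", "delete from", "truncate", "rm -rf")

def risk_level(command: str) -> str:
    """Classify a command as 'high' or 'low' risk by keyword match."""
    lowered = command.lower()
    return "high" if any(k in lowered for k in RISKY_KEYWORDS) else "low"

def approval_prompt(command: str) -> str:
    """Return the prompt the human reviewer would see for this command."""
    if risk_level(command) == "high":
        # Extra friction: the reviewer must echo the command to approve it.
        return f"HIGH RISK: re-type the command to approve:\n  {command}"
    return f"Approve? [y/N]  {command}"
```

This doesn't remove complacency, but it prevents the high-risk and low-risk cases from sharing one habituated gesture.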
dhorthy 9 hours ago
This is fascinating and resonates with me on a deep level; I'm surprised I haven't stumbled across this concept before. I think we have this problem with all AI systems. For example, I have let Cursor write wrong code from time to time and didn't review it at the level I should have. We need to solve this in every area of AI. It's not a new problem, but it's about to get much more serious.
j45 7 hours ago
Premature optimization and premature automation cause a lot of issues and skip over a lot of insight. By simply doing something manually 10-100 times and collecting feedback, both your understanding of the problem and the possible solutions/specifications can evolve by orders of magnitude.