▲ aledevv 4 hours ago
> All of these features are about breaking the coupling between a human sitting at a terminal or chat window and interacting turn-by-turn with the agent. This means:
>
> - less and less "man-in-the-loop"
> - less and less interaction between LLMs and humans
> - more and more automation
> - more and more decision-making autonomy for agents
> - more and more risk (i.e., LLMs' responsibility)
> - less and less human responsibility
>
> Problem: tasks that require continuous iteration and shared decision-making with humans have two possible options:
>
> - either they stall until human input
> - or they decide autonomously at our risk
>
> Unfortunately, automation comes at a cost: RISK.
▲ dist-epoch 4 hours ago
AI-driven cars have better risk profiles than humans. Why do you think the same will not also be true for AI steerers/managers/CEOs? In a year or two, having a human in the loop, with all of their biases and inconsistencies, will be considered risky and irresponsible.