ankit219 | 3 days ago
The very premise of the article is that tasks are needed for humans to learn and maintain skills. But learning should happen independently; it's a tautological argument to say that since humans won't learn alongside agents that can do more, we shouldn't have agents that can do more. While this is a broad and complex topic (I will share a longer blog that I am yet to fully write), I think people underestimate the cognitive load it takes to move up to the higher-level pattern, and hence learning should happen not on the task but before the task.

We are in the middle of a peer-vs-pair sort of abstraction. Is the peer reliable enough to be delegated the task? If not, the pair design pattern should be complementary to the human skill set. I sensed the frustration with AI agents comes from them not being fully reliable. That means a human in the loop is absolutely needed, and if there is a human, don't have the AI be good at what the human can already do; instead have it be a good assistant by doing the things the human would need. I agree on that part, though if reliability is ironed out, for most of my tasks I am happy to let the AI do the whole thing.

Other frustrations stem from memory, or the lack of it (in research), hallucinations and overconfidence, and lack of situational awareness (somehow situational awareness is what agents market themselves on). If these are fixed, treating agents as a pair vs. treating agents as a peer might tilt more towards the peer side.