dec0dedab0de 2 hours ago
What if you had it ask for another demonstration when things are different? Or when it's different and taking more than X amount of time to figure out, like an actual understudy would.
bayes-song 2 hours ago | parent
That sounds like a good idea. If, while using a skill, the agent finds something unclear, it could proactively ask the user for clarification and update the skill accordingly; that seems like a very worthwhile direction to explore. In the current system I have implemented a periodic sweep over all sessions: it identifies completed tasks, clusters them, and summarizes the different solution paths within each cluster to extract a common path, which is then proactively added as a new skill. So far, however, this process only adds new skills and does not update existing ones. Closing that feedback loop so existing skills get updated seems worth pursuing.
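The sweep described above could be sketched roughly as follows. This is a minimal, hypothetical illustration, not the commenter's actual implementation: the `Session` shape, the token-overlap similarity (a real system would presumably use embeddings), and the threshold values are all assumptions made for the example.

```python
from dataclasses import dataclass


@dataclass
class Session:
    task: str          # short description of the completed task
    steps: list[str]   # the solution path the agent took


def similarity(a: str, b: str) -> float:
    """Crude token-overlap (Jaccard) similarity; stands in for embeddings."""
    ta, tb = set(a.lower().split()), set(b.lower().split())
    return len(ta & tb) / len(ta | tb) if ta | tb else 0.0


def cluster_sessions(sessions: list[Session], threshold: float = 0.5) -> list[list[Session]]:
    """Greedy single-pass clustering of completed tasks by description."""
    clusters: list[list[Session]] = []
    for s in sessions:
        for c in clusters:
            if similarity(s.task, c[0].task) >= threshold:
                c.append(s)
                break
        else:
            clusters.append([s])
    return clusters


def extract_common_path(cluster: list[Session]) -> list[str]:
    """Keep only the steps shared by every solution path in the cluster."""
    return [step for step in cluster[0].steps
            if all(step in s.steps for s in cluster[1:])]


def sweep(sessions: list[Session], min_cluster: int = 2) -> dict[str, list[str]]:
    """Periodic sweep: cluster tasks, then promote each common path to a skill."""
    skills: dict[str, list[str]] = {}
    for cluster in cluster_sessions(sessions):
        if len(cluster) >= min_cluster:
            path = extract_common_path(cluster)
            if path:
                skills[cluster[0].task] = path
    return skills
```

For example, two sessions that both resized images would cluster together and yield a skill containing only their shared steps, while a one-off task would be skipped by the `min_cluster` gate. Updating an existing skill, as discussed above, would amount to matching a cluster against the current skill set before adding, rather than always creating a new entry.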