imiric | 6 hours ago
That's quickly becoming difficult to determine. The workflow of starting dozens or hundreds of "agents" that work autonomously is starting to gain traction. The goal of people who work like this is to completely automate software development. At some point they want to be able to give the tool an arbitrary task, presumably one that benefits them in some way, and have it build, deploy, and use software to complete it. When millions of people are doing this, and the layers of indirection grow in complexity, how do you trace the result back to a human? Can we say that a human was really responsible for it? Maybe this seems simple today, but the challenges this technology forces on society are numerous, and we're far from ready for it.
niyikiza | 6 hours ago | parent
This is the problem we're working on. When orchestrators spawn sub-agents, which in turn spawn tools, there's no artifact showing how authority flowed through the chain. Warrants are a primitive for this: a signed authorization that attenuates at each hop. Each delegation is signed, scope can only narrow, and the full chain is verifiable at the end. It doesn't matter how many layers deep.
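The comment doesn't spell out the mechanism, but the shape it describes (each delegation signed, scope only narrowing, chain verifiable at the end) is the same idea as macaroon-style HMAC chaining. A minimal sketch under that assumption; the names `issue`, `attenuate`, and `verify` are hypothetical, not niyikiza's actual API:

```python
import hmac
import hashlib

def _sign(key: bytes, scope: frozenset) -> bytes:
    # Sign the scope with the given key; each hop's signature
    # becomes the key for the next hop, binding the whole chain.
    msg = ",".join(sorted(scope)).encode()
    return hmac.new(key, msg, hashlib.sha256).digest()

def issue(root_key: bytes, scope: set) -> list:
    """Root warrant: the orchestrator's full authority."""
    s = frozenset(scope)
    return [(s, _sign(root_key, s))]

def attenuate(warrant: list, narrower: set) -> list:
    """Delegate one hop down; scope may only shrink, never widen."""
    _, prev_sig = warrant[-1]
    s = frozenset(narrower)
    prev_scope, _ = warrant[-1]
    if not s <= prev_scope:
        raise ValueError("scope can only narrow")
    return warrant + [(s, _sign(prev_sig, s))]

def verify(root_key: bytes, warrant: list) -> bool:
    """Walk the chain from the root: every signature must check out,
    and every scope must be a subset of the one before it."""
    key = root_key
    prev_scope = None
    for scope, sig in warrant:
        if not hmac.compare_digest(sig, _sign(key, scope)):
            return False
        if prev_scope is not None and not scope <= prev_scope:
            return False
        key, prev_scope = sig, scope
    return True
```

Because each hop is keyed by the previous hop's signature, a sub-agent three layers deep can't retroactively widen its scope or forge an earlier link: changing any scope in the chain invalidates every signature after it, which is what makes the chain auditable "at the end" regardless of depth.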
Wobbles42 | 6 hours ago | parent
Translation: people want to use a tool and not be liable for the result. People not wanting to be liable for their actions is not new. AI hasn't changed anything here; it's just a new lame excuse.