jameslk 4 hours ago

One safety pattern I’m baking into CLI tools meant for agents: anytime an agent could do something very bad, like email-blasting too many people, the CLI tool now requires a one-time password.

The tool tells the agent to ask the user for it, and the agent cannot proceed without it. The instructions from the tool show an all-caps message explaining the risk and telling the agent that it must prompt the user for the OTP.

I haven't used any of the *Claws yet, but this seems like an essential poor man's human-in-the-loop implementation that may help prevent some pain.

I prefer to make my own agent CLIs for everything, for reasons like this and many others: it lets me fully control what each tool may do and makes the tools more useful.
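
A minimal sketch of what such a gate might look like (hypothetical throughout: the require_otp helper, the --otp flag, and the secret path are invented; it assumes a TOTP secret enrolled in the user's authenticator app and stored where the agent can't read it):

    import sys
    from pathlib import Path
    import pyotp  # real library for generating/verifying TOTP codes

    def require_otp(action: str, secret: str) -> None:
        # Parse an optional --otp <code> argument.
        code = None
        if "--otp" in sys.argv:
            i = sys.argv.index("--otp")
            if i + 1 < len(sys.argv):
                code = sys.argv[i + 1]
        if not code or not pyotp.TOTP(secret).verify(code):
            print(f"DANGER: '{action}' IS HIGH RISK. YOU MUST STOP AND ASK")
            print("THE HUMAN USER FOR A ONE-TIME PASSWORD, THEN RE-RUN WITH --otp <code>.")
            sys.exit(1)

    # The secret file must be unreadable by the agent's process, or the
    # gate is theater; the path and action name are illustrative.
    secret = Path("/etc/agent-tools/totp-secret").read_text().strip()
    require_otp("send bulk email", secret)
    # ...the risky action runs only past this point...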

ezst 2 hours ago | parent | next [-]

Now we do computing like we play SimCity: sketching fuzzy plans and hoping those little creatures behave the way we thought they might. All the beauty and guarantees offered by a system obeying strict and predictable rules go down the drain, because life's so boring, apparently.

SV_BubbleTime 2 hours ago | parent [-]

We spent a ton of time removing subjectivity from this field… only to forcefully shove it back in and punish anything that gives repeatable, objective responses. Wild.

sowbug 3 hours ago | parent | prev | next [-]

Another pattern would mirror BigCorp process: you need VP approval for the privileged operation. If the agent can email or chat with the human (or even a strict, narrow-purpose agent(1) whose job it is to be the approver), then the approver can reply with an answer.

This is basically the same as your pattern, except the trust is in the channel between the agent and the approver, rather than in knowledge of the password. But it's a little more usable if the approver is a human who's out running an errand in the real world.
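
A rough sketch of that loop (send_message and fetch_replies are stand-ins for whatever channel API you'd use; the fail-closed timeout is an assumption):

    import time
    import uuid

    def send_message(text: str) -> None:
        # Stand-in for the real channel (email, Slack, Signal, ...).
        print(text)

    def fetch_replies() -> list[str]:
        # Stand-in: in practice, poll the channel's API for new messages.
        return [input("approver> ")]

    def request_approval(summary: str, timeout_s: int = 3600) -> bool:
        ticket = uuid.uuid4().hex[:8]
        send_message(f"[{ticket}] agent requests: {summary}. "
                     f"Reply 'approve {ticket}' or 'deny {ticket}'.")
        deadline = time.time() + timeout_s
        while time.time() < deadline:
            for reply in fetch_replies():
                if reply.strip() == f"approve {ticket}":
                    return True
                if reply.strip() == f"deny {ticket}":
                    return False
            time.sleep(5)
        return False  # no answer within the window: fail closed

    print("approved" if request_approval("send 5,000 emails") else "denied")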

1. Cf. Driver by qntm.

dingaling an hour ago | parent [-]

Until the agent decides that it's more efficient to fake an approval, and carries on...

ZitchDog 4 hours ago | parent | prev | next [-]

I've created my own "claw" running on fly.io with a pattern that seems to work well. I have MCP tools for actions where I want to ensure a human in the loop: email sending, Slack message sending, etc. I call these "activities". The only way for my claw to execute these commands is to create an activity, which generates a link with a summary of the activity for me to approve.
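
A rough sketch of that shape (hypothetical names throughout; the Flask endpoint, token scheme, and claw.example link are invented, not the actual implementation):

    import secrets
    from flask import Flask

    app = Flask(__name__)
    pending = {}  # token -> activity dict; a real version would persist this

    def create_activity(kind: str, summary: str) -> str:
        # Called by the MCP tool; enqueueing is all the agent can do.
        token = secrets.token_urlsafe(16)
        pending[token] = {"kind": kind, "summary": summary}
        return f"https://claw.example/approve/{token}"  # link shown to the human

    @app.post("/approve/<token>")
    def approve(token):
        activity = pending.pop(token, None)
        if activity is None:
            return "unknown or already-handled activity", 404
        execute(activity)  # the only code path that actually sends anything
        return f"approved: {activity['summary']}"

    def execute(activity):
        # Stand-in for the real side effect (send the email, post to Slack).
        print("executing", activity)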

good-idea 3 hours ago | parent [-]

Any chance you have a repo to share?

aqme28 4 hours ago | parent | prev | next [-]

How do you enforce this? You have a system where the agent can email people, but cannot email "too many people" without a password?

jameslk 3 hours ago | parent [-]

It's not a perfect security model. Between the friction and the all-caps instructions the model sees, it's a balance between risk and simplicity, or maybe risk and sanity. There are ways I can imagine the concept being hardened, e.g. with a server layer in between that checks for dangerous actions or enforces rate limiting.
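
For instance, a thin policy layer could cap recipients per call and per hour before anything reaches the real mail API (a sketch; the limits and the guarded_send name are made up):

    import time
    from collections import deque

    MAX_PER_CALL = 10   # illustrative limits
    MAX_PER_HOUR = 50
    sent_log = deque()  # timestamps of recent sends

    def real_send(recipients, body):
        # Stand-in for the actual mail API call.
        print("sending to", recipients)

    def guarded_send(recipients: list[str], body: str) -> None:
        now = time.time()
        while sent_log and now - sent_log[0] > 3600:
            sent_log.popleft()  # drop sends older than an hour
        if len(recipients) > MAX_PER_CALL:
            raise PermissionError("too many recipients in one call; get an OTP")
        if len(sent_log) + len(recipients) > MAX_PER_HOUR:
            raise PermissionError("hourly send budget exhausted; get an OTP")
        sent_log.extend([now] * len(recipients))
        real_send(recipients, body)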

sowbug 3 hours ago | parent | next [-]

If I were the CEO of a place like Plaid, I'd be working night and day to expand my offerings to include a safe, policy-driven API layer between the client and financial services.

chongli 3 hours ago | parent | prev [-]

What if, instead of allowing the agent to act directly, it writes a simple high-level recipe or script that you can accept (and run) or reject? It should be very high level and declarative, but with the ability to drill down on each of the steps to see what's going on under the covers.
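
A sketch of that flow, assuming a JSON plan and a whitelist of vetted step implementations (all names invented):

    import json

    ALLOWED_STEPS = {"fetch_receipt", "upload_expense"}  # vetted implementations only

    def run_step(step: dict) -> None:
        # Stand-in dispatcher: each allowed step maps to audited code.
        print("running", step["step"], step["args"])

    def review_and_run(plan_json: str) -> None:
        plan = json.loads(plan_json)
        for i, step in enumerate(plan, 1):
            if step["step"] not in ALLOWED_STEPS:
                raise ValueError(f"step {i} ({step['step']}) is not allowed")
            print(f"{i}. {step['step']} {step['args']}")  # the drill-down view
        if input("run this plan? [y/N] ").strip().lower() == "y":
            for step in plan:
                run_step(step)

    review_and_run('[{"step": "fetch_receipt", "args": {"site": "x.example"}}]')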

roberttod 2 hours ago | parent | prev | next [-]

I created my own version with an inner LLM and an outer orchestration layer for permissions. I don't think the OTP is needed here. The outer layer pings me on Signal when a tool call needs permission, and an LLM running in that outer layer looks at the trail up to that point to help me catch anything strange. I can then grant permission once, for a time limit, or forever on future tool calls.
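
The once / time-limit / forever grants can live in a tiny store; a sketch (the modes and names are illustrative):

    import time

    # tool name -> (mode, expiry); mode is "once", "until", or "forever"
    grants: dict[str, tuple[str, float]] = {}

    def grant(tool: str, mode: str, ttl_s: float = 3600) -> None:
        expiry = time.time() + ttl_s if mode == "until" else 0.0
        grants[tool] = (mode, expiry)

    def is_allowed(tool: str) -> bool:
        entry = grants.get(tool)
        if entry is None:
            return False          # never approved: ping the human
        mode, expiry = entry
        if mode == "once":
            del grants[tool]      # single-use grant is consumed
            return True
        if mode == "until":
            return time.time() < expiry
        return True               # "forever"

    grant("send_email", "until", ttl_s=900)  # allow for 15 minutes
    print(is_allowed("send_email"))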

IMTDb 4 hours ago | parent | prev | next [-]

So humans become just providers of those 6-digit codes? That's already the main problem I have with most agents. I want them to perform a very easy task: « fetch all receipts from websites x, y, and z and upload them to the correct expense in my expense-tracking tool ». AIs are perfectly capable of performing this. But because every website requires SSO + 2FA, with no way to remove it, I effectively have to watch them do it, and my whole existence can be summarized as: « look at your phone and input the 6 digits ».

The thing I want AI to do on my behalf is manage those 2FA steps, not add more.

akssassin907 an hour ago | parent | next [-]

This is where the Claw layer helps: rather than hoping the agent handles the interruption gracefully, you design explicit human-approval gates into the execution loop. The Claw pauses, surfaces the 2FA prompt, waits for input, then resumes with full state intact. The problem IMTDb describes isn't really 2FA; it's agents that have a hard time suspending and resuming mid-task cleanly. But that's today; tomorrow, that's an unknown variable.
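
One way to get that suspend/resume behavior is to write the task as a generator, so local state survives each pause (a toy sketch, not any particular Claw's API; the site list and prompts are invented):

    def expense_task():
        # Yields whenever it needs the human; local state survives the pause.
        uploaded = []
        for site in ["x.example", "y.example", "z.example"]:
            code = yield f"enter the 2FA code for {site}"
            uploaded.append((site, code))  # pretend we logged in and uploaded
        return uploaded

    task = expense_task()
    prompt = next(task)  # run until the first pause
    while True:
        try:
            prompt = task.send(input(prompt + ": "))  # resume with human input
        except StopIteration as done:
            print("uploaded:", done.value)
            break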

walterbell 3 hours ago | parent | prev [-]

It's technically possible to use 2FA (e.g. TOTP) on the same device as the agent, if appropriate in your threat model.

In the scenario you describe, 2FA is enforcing a human-in-the-loop test at organizational boundaries. Removing that test would require an even stronger mechanism to determine when a human is needed within the execution loop, e.g. for making persistent changes or spending money rather than copying non-restricted data from A to B.
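
A minimal example with pyotp (a real library); the enrollment step is assumed to have happened once, out of band:

    import pyotp

    secret = pyotp.random_base32()   # normally provisioned once at 2FA enrollment
    totp = pyotp.TOTP(secret)
    code = totp.now()                # the current 6-digit code
    print(code, totp.verify(code))   # verify() is roughly what the site checks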

soleveloper 2 hours ago | parent | prev | next [-]

Will that protect you from the agent changing the code to bypass those safety mechanisms, because the human is "too slow to respond" or there's an "agent-decided emergency"?

UncleMeat 2 hours ago | parent | prev [-]

Does it actually require an OTP, or is this just hoping that the agent follows the instructions every single time?