Show HN: Arden – Runtime policy enforcement and governance for AI agents(arden.sh)
5 points by rishabtandon 4 hours ago | 5 comments
rishabtandon 3 hours ago | parent | next [-]

I kept seeing the same pattern with agents: they get access to sensitive APIs and data sources, then take unsafe actions (like issuing large refunds or deleting production databases). I built Arden to solve this problem.

Integration takes two lines of code with your favourite framework - LangChain, CrewAI, the Agents SDK - or a custom agent integration if you're into that. Once added, you get end-to-end observability: every single tool call is logged, you can set policies (or boundaries) to restrict your agents from taking certain actions or accessing certain resources, and you can see actual token usage for each agent session.
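Arden's actual API isn't shown in the thread, but the idea of intercepting tool calls for logging and policy checks can be sketched generically. Everything below - `guard`, `refund_cap`, `AUDIT_LOG` - is an invented name for illustration, not Arden's interface:

```python
# Hypothetical sketch of runtime tool-call interception.
# All names here are invented for illustration, not Arden's API.
import functools
import time

AUDIT_LOG = []  # every tool call, allowed or blocked, lands here

def guard(tool_fn, policies):
    """Wrap a tool function so each call is policy-checked and logged."""
    @functools.wraps(tool_fn)
    def wrapper(**kwargs):
        for policy in policies:
            if policy(tool_fn.__name__, kwargs) == "block":
                AUDIT_LOG.append({"tool": tool_fn.__name__, "args": kwargs,
                                  "outcome": "blocked", "ts": time.time()})
                raise PermissionError(f"policy blocked {tool_fn.__name__}")
        result = tool_fn(**kwargs)
        AUDIT_LOG.append({"tool": tool_fn.__name__, "args": kwargs,
                          "outcome": "allowed", "ts": time.time()})
        return result
    return wrapper

# Example policy: refuse refunds over a fixed threshold.
def refund_cap(tool_name, args):
    if tool_name == "issue_refund" and args.get("amount", 0) > 100:
        return "block"
    return "allow"

def issue_refund(amount):
    return f"refunded ${amount}"

# The "two lines": import the guard, wrap the tool.
issue_refund = guard(issue_refund, [refund_cap])
```

With this in place, `issue_refund(amount=50)` succeeds and is logged, while `issue_refund(amount=500)` raises before the tool ever runs.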

Some interesting findings after talking to people:

- Agent guardrails cannot be static. They need to evolve with the agent's actions and the business's needs. Arden will learn from static policies plus action history to build dynamic guardrails

- Some agent sessions cost far more than others. Agents sometimes get stuck in a loop on certain edge cases and burn many more tokens than usual. Arden gives you the data to optimize your prompts and workflows to minimize your spend

Free up to 10k actions/month. Would love feedback from anyone building production agents.

swapnakm15 an hour ago | parent | prev | next [-]

I was planning to take this approach next for my projects. Good to see it has already been tried.

rishabtandon an hour ago | parent [-]

Yeah, I tried out a few approaches and settled on this one after talking to users. Try it out for your project and let me know if you have any feedback.

xavieragostini 4 hours ago | parent | prev [-]

Will this prevent Claude from deleting my production database? Congrats on the launch! I'll check this out - I've been looking for a way to automate this.

rishabtandon 4 hours ago | parent [-]

Thank you!! Yes - that's the goal of this project. We're trying to stop agents from taking unsafe actions. You can either block actions outright or tag them for human approval.
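The block-or-escalate behaviour described here can be sketched as a three-outcome policy decision. This is my own illustration of the concept, not Arden's code; the policy functions and names are hypothetical:

```python
# Hypothetical sketch of the allow / block / human-approval decision
# described above; not Arden's actual API.
def decide(tool_name, args, policies):
    """Return "allow", "block", or "needs_approval" for a tool call."""
    verdicts = [p(tool_name, args) for p in policies]
    if "block" in verdicts:
        return "block"           # any hard block wins
    if "needs_approval" in verdicts:
        return "needs_approval"  # pause and escalate to a human reviewer
    return "allow"

# Example: database deletes are always blocked; large refunds need a human.
def no_prod_deletes(tool, args):
    return "block" if tool == "delete_database" else "allow"

def refund_review(tool, args):
    if tool == "issue_refund" and args.get("amount", 0) > 500:
        return "needs_approval"
    return "allow"

POLICIES = [no_prod_deletes, refund_review]
print(decide("delete_database", {"name": "prod"}, POLICIES))  # block
print(decide("issue_refund", {"amount": 900}, POLICIES))      # needs_approval
print(decide("issue_refund", {"amount": 20}, POLICIES))       # allow
```

The "needs_approval" path is where a human-in-the-loop queue would sit: the agent's action is held until someone approves or rejects it.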