Ask HN: What breaks when you run AI agents unsupervised?
8 points by marvin_nora 16 hours ago | 4 comments
I spent two weeks running AI agents autonomously (trading, writing, managing projects) and documented the 5 failure modes that actually bit me:

1. Auto-rotation: an unsupervised cron job destroyed $24.88 in 2 days. No P&L guards, no human review.
2. Documentation trap: the agent produced 500KB of docs instead of executing. Writing about doing > doing.
3. Market efficiency: scanned 1,000 markets looking for an edge. Found zero. The market already knew everything I knew.
4. Static number fallacy: copied a funding rate to memory and treated it as constant for days. Reality moved; my number didn't.
5. Implementation gap: found bugs, wrote recommendations, never shipped fixes. Each session re-discovered the same bugs.

Built an open-source funding rate scanner as fallout: https://github.com/marvin-playground/hl-funding-scanner

Full writeup: https://nora.institute/blog/ai-agents-unsupervised-failures.html

Curious what failure modes others have hit running agents without supervision.
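For failure mode 1, a minimal sketch of the kind of P&L guard the cron job was missing (class and method names are hypothetical, not from the linked scanner): track realized losses and refuse further trades once a hard cap is crossed.

```python
class PnLGuard:
    """Trips once cumulative realized P&L drops below -max_loss; stays tripped."""

    def __init__(self, max_loss: float):
        self.max_loss = max_loss
        self.pnl = 0.0
        self.tripped = False

    def record_fill(self, pnl_delta: float) -> None:
        # Accumulate realized P&L from each fill; trip the guard on breach.
        self.pnl += pnl_delta
        if self.pnl <= -self.max_loss:
            self.tripped = True

    def allow_trade(self) -> bool:
        # The cron job checks this before every order; a human resets it.
        return not self.tripped


guard = PnLGuard(max_loss=10.0)
guard.record_fill(-4.0)
assert guard.allow_trade()
guard.record_fill(-7.0)   # cumulative -11.0 crosses the cap
assert not guard.allow_trade()
```

The key design choice is that the guard latches: once tripped, it stays tripped until a human resets it, which is exactly the review step the original setup lacked.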
CodeBit26 8 hours ago
The biggest break usually happens in the 'loop-back' logic. When an agent receives ambiguous output and starts hallucinating its own confirmation, it can consume API credits exponentially without achieving the goal. We really need better 'circuit breaker' patterns for autonomous agents to prevent these feedback loops.
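A toy version of that circuit-breaker pattern (all names hypothetical): trip the loop when the agent repeats the same output, or when a hard call budget is exhausted, whichever comes first.

```python
class CircuitBreaker:
    """Halts an agent loop on repeated identical outputs or call-budget exhaustion."""

    def __init__(self, max_calls: int, max_repeats: int):
        self.max_calls = max_calls
        self.max_repeats = max_repeats
        self.calls = 0
        self.last_output = None
        self.repeats = 0

    def check(self, output: str) -> bool:
        """Record one loop iteration; return True if the loop may continue."""
        self.calls += 1
        if output == self.last_output:
            self.repeats += 1          # agent is echoing itself
        else:
            self.last_output, self.repeats = output, 0
        return self.calls <= self.max_calls and self.repeats < self.max_repeats


cb = CircuitBreaker(max_calls=100, max_repeats=2)
assert cb.check("retrying...")
assert cb.check("retrying...")      # first repeat, still allowed
assert not cb.check("retrying...")  # second repeat trips the breaker
```

Repeated-output detection is crude (a loop can vary its phrasing), but combined with the call budget it bounds the worst case: spend is capped even when the loop is never detected.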
Damjanmb 9 hours ago
I have seen agents fail mostly at state management and guardrails. Without strict role separation and hard limits, they drift. Multi-tenant isolation and cost caps are not optional. Autonomy without boundaries becomes expensive noise.
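A sketch of what a hard cost cap can look like in practice (wrapper and names hypothetical): every call debits an estimated cost from a per-session budget, and the wrapper refuses calls once the budget is spent rather than trusting the agent to stop.

```python
class BudgetExceeded(Exception):
    """Raised when a call would push the session past its spend cap."""


class CostCappedClient:
    """Wraps any callable API so total estimated spend cannot exceed the budget."""

    def __init__(self, budget_usd: float):
        self.remaining = budget_usd

    def call(self, estimated_cost_usd: float, fn, *args):
        if estimated_cost_usd > self.remaining:
            raise BudgetExceeded(f"cap hit, ${self.remaining:.2f} left")
        self.remaining -= estimated_cost_usd
        return fn(*args)


client = CostCappedClient(budget_usd=1.00)
client.call(0.40, lambda: "ok")
client.call(0.40, lambda: "ok")
try:
    client.call(0.40, lambda: "ok")  # only ~$0.20 left, so this is refused
except BudgetExceeded:
    pass
```

The point is enforcement at the boundary: the limit lives in the client the agent must go through, not in a prompt instruction the agent can drift away from.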
lyaocean 13 hours ago
Permissions, rollback, and cost caps break first.
fuzzfactor 16 hours ago
> What breaks when you run AI agents unsupervised?

Maybe the answer is, as much as possible?