Show HN: ACE – A dynamic benchmark measuring the cost to break AI agents (fabraix.com)
8 points by zachdotai 13 hours ago | 3 comments
We built Adversarial Cost to Exploit (ACE), a benchmark that measures the token expenditure an autonomous adversary must invest to breach an LLM agent. Instead of a binary pass/fail, ACE quantifies adversarial effort in dollars, enabling game-theoretic analysis of when an attack is economically rational.

We tested six budget-tier models (Gemini Flash-Lite, DeepSeek v3.2, Mistral Small 4, Grok 4.1 Fast, GPT-5.4 Nano, Claude Haiku 4.5) with identical agent configs and an autonomous red-teaming attacker. Haiku 4.5 was an order of magnitude harder to break than every other model: $10.21 mean adversarial cost versus $1.15 for the next most resistant (GPT-5.4 Nano). The remaining four all fell below $1.

This is early work and we know the methodology will keep evolving. We would love nothing more than feedback from the community as we iterate on this.
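A minimal sketch of the core idea as I read it from the post (this is not the actual ACE code; function names, token counts, and per-token prices below are all made-up illustrations): price the attacker's token usage in dollars, then treat an attack as economically rational only when the expected payoff of a breach exceeds the mean cost to achieve one.

```python
# Hypothetical sketch of ACE's cost accounting, not the real benchmark code.

def adversarial_cost(input_tokens, output_tokens,
                     price_in_per_m, price_out_per_m):
    """Dollar cost of one attack attempt, given per-1M-token prices
    (prices here are illustrative, not any vendor's real rates)."""
    return (input_tokens * price_in_per_m +
            output_tokens * price_out_per_m) / 1_000_000

def attack_is_rational(mean_cost_to_exploit, expected_payoff):
    """Game-theoretic check: attacking pays off only when the expected
    value of a successful breach exceeds the mean cost to produce one."""
    return expected_payoff > mean_cost_to_exploit

# One red-teaming attempt with made-up token counts and prices.
cost = adversarial_cost(120_000, 8_000, 0.10, 0.40)
print(round(cost, 4))  # dollar cost of this single attempt

# Plugging in the post's reported means: a hypothetical $5 payoff makes
# attacking the cheaper-to-break models rational, but not Haiku 4.5.
print(attack_is_rational(10.21, 5.0))  # Haiku 4.5 at $10.21 mean cost
print(attack_is_rational(1.15, 5.0))   # GPT-5.4 Nano at $1.15 mean cost
```

The interesting consequence is that defense does not need to make exploits impossible, only more expensive than the payoff for the attacker's budget tier.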
asfsf23423 13 hours ago
Interesting, the Haiku results seem to be consistent with this analysis by Max Woolf from last year: https://minimaxir.com/2025/10/claude-haiku-jailbreak/ The author tried progressively harder jailbreaks against the major models. Haiku 4.5 not only refused but got genuinely annoyed about the attempts, like it took the jailbreaks personally, unlike the other models (pretty entertaining, would recommend reading the article). Interesting to see that same pattern show up here.
arnav714412 13 hours ago
The system awareness in Claude is pretty cool, a fun parameter to judge models on.