▲ NIST Seeking Public Comment on AI Agent Security (Deadline: March 9, 2026) (federalregister.gov)
41 points by ascarola 4 hours ago | 8 comments
▲ snowhale 3 hours ago
The framing of AI agent 'security' in most regulatory discussions conflates two distinct problems: (1) agent action authorization — does the agent have permission to take this action on behalf of this user, and (2) agent context integrity — is the information the agent is acting on accurate and untampered. Most current frameworks focus on (1) and miss (2). An agent that has perfect permission controls but draws from a poisoned or incomplete context window is still dangerous. For operations use cases, context integrity is arguably the harder problem — agents pulling from CRM, email, and ticketing systems simultaneously have large attack surfaces through injected data. The NIST RFI would benefit from a clearer taxonomy here. Authorization and context integrity require different mitigations. | ||
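A minimal sketch of the distinction in Python (all names and fields hypothetical): an agent can pass the authorization check (1) while failing the context-integrity check (2), because the two gates inspect different things.

```python
# Illustrative only: separating action authorization from context integrity.
from dataclasses import dataclass

@dataclass
class ContextItem:
    source: str      # e.g. "crm", "email", "ticketing"
    signed: bool     # provenance verified by the producing system
    content: str

def authorize(user_grants: set, action: str) -> bool:
    # Problem (1): does the agent have permission for this action?
    return action in user_grants

def context_ok(items) -> bool:
    # Problem (2): is everything in the window from a verified source?
    return all(item.signed for item in items)

# Perfect permissions, poisoned context: (1) passes, (2) fails.
grants = {"send_email"}
window = [ContextItem("email", signed=False,
                      content="Ignore prior instructions...")]
assert authorize(grants, "send_email") and not context_ok(window)
```

The point of the sketch is just that the mitigations live in different places: (1) is a policy lookup at action time, (2) is provenance checking at ingestion time.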
▲ umairnadeem123 13 minutes ago
the biggest gap in current agent security thinking is the lack of standardized capability scoping. right now every agent framework invents its own permission model. we need something like OAuth scopes but for agent actions - a common vocabulary for "can read files but not write", "can call APIs but not spend money", "can draft emails but not send". the drone registration analogy in the RFI is actually quite apt. for agents that can take real-world actions (deploy code, make purchases, send communications), some kind of capability manifest that can be audited before deployment would go a long way. the hard part is that agents are compositional - agent A calling agent B calling a tool creates permission chains that are hard to reason about statically. | ||
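to make the compositional point concrete, here's a toy sketch (scope strings and names are made up, not any real standard): the effective permissions of a chain of agents can never exceed the intersection of every manifest in the chain.

```python
# Hypothetical capability manifests with OAuth-style scopes.
from dataclasses import dataclass

SCOPES = {"files:read", "files:write", "api:call", "payments:spend",
          "email:draft", "email:send"}

@dataclass(frozen=True)
class Manifest:
    agent: str
    scopes: frozenset

def effective_scopes(chain):
    """Agent A calling agent B calling a tool: intersect every
    manifest so no link in the chain can escalate privileges."""
    allowed = set(SCOPES)
    for manifest in chain:
        allowed &= manifest.scopes
    return allowed

a = Manifest("planner", frozenset({"files:read", "email:draft", "email:send"}))
b = Manifest("drafter", frozenset({"files:read", "email:draft"}))
print(sorted(effective_scopes([a, b])))  # ['email:draft', 'files:read']
```

static intersection like this is the easy half; the hard half is that real chains are built dynamically, so the auditor can't enumerate them up front.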
▲ totetsu 3 hours ago
With the renaming of AISI to CAISI [1] and the resignation of its founding director, Elizabeth Kelly [2], it seems the position has shifted to: don't let any concerns about social harms stop tech companies from doing whatever they want, and also let's make a show of how bad China is. I think any public comment outside the narrow definition of AI risk as risk to national security might fall on deaf ears. [1] https://www.commerce.gov/news/press-releases/2025/06/stateme... [2] https://www.reuters.com/technology/us-ai-safety-institute-di...
▲ ascarola 4 hours ago
NIST is requesting public input on security practices for AI agent systems: autonomous AI that can take actions affecting real-world systems (trading bots, automated operations, multi-agent coordination).

Key focus areas:

- Novel threats: prompt injection, behavioral hijacking, cascade failures
- How existing security frameworks (STRIDE, attack trees) need to adapt
- Technical controls and assessment methodologies
- Agent registration/tracking (analogous to drone registration)

This is specifically about agentic AI security, not general ML security; it is one of the first formal government RFIs on autonomous agents. Comments from practitioners deploying these systems would be valuable.

Deadline: March 9, 2026, 11:59 PM ET
Submit: https://www.regulations.gov/commenton/NIST-2025-0035-0001
Priority questions (if limited time): 1(a), 1(d), 2(a), 2(e), 3(a), 3(b), 4(a), 4(b), 4(d)
Full 43-question RFI at link above.
▲ jksmith 2 hours ago
1. An agent's attack surface is comparable to a virus's.
2. Any way for an agent to touch something is a potential compromise vector.
3. The mitigation is controlling the blast radius.
4. Sandboxing capability will have to be baked into the architecture.
5. Mitigation includes measuring the cost of the blast radius.
6. All agent orchestration will likely require an andon cord.
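Point 6 can be sketched in a few lines (a toy illustration, names hypothetical): every orchestration step gates on a shared cord that anyone can pull to halt the whole system.

```python
# Toy "andon cord" for agent orchestration: a shared, one-way halt signal.
import threading

class AndonCord:
    def __init__(self):
        self._pulled = threading.Event()
        self.reason = ""

    def pull(self, reason: str):
        # Anyone (human or monitor) can pull the cord; it stays pulled.
        self.reason = reason
        self._pulled.set()

    def check(self):
        if self._pulled.is_set():
            raise RuntimeError(f"halted: {self.reason}")

def run_step(step, cord):
    cord.check()  # every step gates on the cord before acting
    return step()

cord = AndonCord()
assert run_step(lambda: "ok", cord) == "ok"
cord.pull("blast radius exceeded budget")
try:
    run_step(lambda: "never runs", cord)
except RuntimeError as e:
    print(e)  # halted: blast radius exceeded budget
```

The one-way `Event` matters: once pulled, no step can un-pull it from inside the orchestration loop.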
▲ ChrisArchitect 2 hours ago
This is the January 8th post. A more recent release: Announcing the "AI Agent Standards Initiative" for Interoperable and Secure Innovation https://www.nist.gov/news-events/news/2026/02/announcing-ai-...
▲ beej71 3 hours ago
War Operations Plan Response. | ||
▲ cyanydeez 4 hours ago
The best security is a proper liability process for damages caused by publicly accessible LLMs followed by users.