ascarola 6 hours ago
NIST is requesting public input on security practices for AI agent systems: autonomous AI that can take actions affecting real-world systems (trading bots, automated operations, multi-agent coordination).

Key focus areas:

- Novel threats: prompt injection, behavioral hijacking, cascade failures
- How existing security frameworks (STRIDE, attack trees) need to adapt
- Technical controls and assessment methodologies
- Agent registration/tracking (analogous to drone registration)

This is specifically about agentic AI security, not general ML security, and it is one of the first formal government RFIs on autonomous agents. Comments from practitioners deploying these systems would be valuable.

Deadline: March 9, 2026, 11:59 PM ET

Submit: https://www.regulations.gov/commenton/NIST-2025-0035-0001

Priority questions (if limited time): 1(a), 1(d), 2(a), 2(e), 3(a), 3(b), 4(a), 4(b), 4(d)

Full 43-question RFI at link above.