cookiengineer 2 days ago
Well, agents can't discover bypass attacks because they don't have memory. That's what DNCs [1] (Differentiable Neural Computers) tried to accomplish. Correlating scan metrics with analytics is btw a great task for DNCs and what they're good at, due to how their (not so precise) memory works; not so much understanding branch logic and its consequences, though. However, I currently believe that forensic investigations will change post-LLMs, because LLMs are very good at translating arbitrary bytecode, assembly, netasm, Intel ASM etc. syntax into example code (in any language). Those translations don't have to be 100% correct, which is why LLMs can be really helpful in the discovery phase after an incident. Check out the ghidra MCP server, which is insane to see in real time [2]
KurSix 2 days ago
The lack-of-memory issue is already being solved architecturally, and ARTEMIS is a prime example. Instead of relying on the model's context window (which is "leaky"), it uses structured state passed between iterations. It's not a DNC per se, but it is a functional equivalent of long-term memory. The agent remembers it tried an SQL injection an hour ago not because it's in the context, but because it's logged in its knowledge base. This allows for chaining exploits, which used to be the exclusive domain of humans.
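To make that concrete, here's a minimal sketch of the structured-state idea: a persistent attempt log the agent consults each iteration instead of its context window. All names here are hypothetical illustrations, not ARTEMIS's actual implementation.

```python
import json
from pathlib import Path

class KnowledgeBase:
    """Persistent attempt log. Survives across agent iterations,
    independent of the model's context window."""

    def __init__(self, path="kb.json"):
        self.path = Path(path)
        # Reload prior state if a previous iteration already wrote it
        self.attempts = (
            json.loads(self.path.read_text()) if self.path.exists() else []
        )

    def log(self, target, technique, result):
        self.attempts.append(
            {"target": target, "technique": technique, "result": result}
        )
        self.path.write_text(json.dumps(self.attempts))

    def tried(self, target, technique):
        return any(
            a["target"] == target and a["technique"] == technique
            for a in self.attempts
        )

def agent_step(kb, target, technique):
    # Skip techniques already attempted, even if the attempt happened
    # hours ago and has long since fallen out of the context window.
    if kb.tried(target, technique):
        return "skip"
    result = "failed"  # placeholder for an actual probe against the target
    kb.log(target, technique, result)
    return result
```

A fresh `KnowledgeBase` pointed at the same file picks up where the last iteration left off, which is what lets later steps chain on earlier findings.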
tptacek 2 days ago
Can you be more specific about the kind of "bypass attack" you think an agent can't find? Like, provide a schematic example?