Remix.run Logo
mhher 2 hours ago

The current hype around agentic workflows completely glosses over the fundamental security flaw in their architecture: unconstrained execution boundaries. Tools that eagerly load context and grant monolithic LLMs unrestricted shell access are trivial to compromise via indirect prompt injection.

If an agent is curling untrusted data while holding access to sensitive data or already has sensitive data loaded into its context window, arbitrary code execution isn't a theoretical risk; it's an inevitability.

As recent research on context pollution has shown, stuffing the context window with monolithic system prompts and tool schemas actively degrades the model's baseline reasoning capabilities, making it exponentially more vulnerable to these exact exploits.

kzahel 2 hours ago | parent | next [-]

I think this is basically obvious to anyone using one of these but they're just they like the utility trade off like sure it may leak and exfiltrate everything somewhere but the utility of these tools is enough where they just deal with that risk.

mhher 2 hours ago | parent [-]

While I understand the premise I think this is a highly flawed way to operate these tools. I wouldn't want to have someone with my personal data (whichever part) that might give it to anyone who just asks nicely because the context window has reached a tipoff point for the models intelligence. The major issue is a prompt attack may have taken place and you will likely never find out.

dgellow 2 hours ago | parent | prev [-]

could you share that study?

mhher 2 hours ago | parent | next [-]

https://arxiv.org/abs/2512.13914

Among many more of them with similar results. This one gives a 39% drop in performance.

https://arxiv.org/abs/2506.18403

This one gives 60-80% after multiple turns.

2 hours ago | parent | prev [-]
[deleted]