| ▲ | Habgdnv 2 hours ago | |
The problem is a bit wider than that. One can frame it as "google gemini is vulterable" or "google's new VS code clone is vulnerable". The bigger picture is that the model predicts tokens (words) based on all the text it have. In a big codebase it becomes exponentially easier to mess the model's mind. At some point it will become confused what is his job. What is part of the "system prompt" and "code comments in the codebase" becomes blurry. Even the models with huge context windows get confused because they do not understand the difference between your instructions and "injected instructions" in a hidden text in the readme or in code comments. They see tokens and given enough malicious and cleverly injected tokens the model may and often will do stupid things. (The word "stupid" means unexpected by you) People are giving LLMs access to tools. LLMs will use them. No matter if it's Antigravity, Aider, Cursor, some MCP. | ||
| ▲ | danudey 2 hours ago | parent [-] | |
I'm not sure what your argument is here. We shouldn't be making a fuss about all these prompt injection attacks because they're just inevitable so don't worry about it? Or we should stop being surprised that this happens because it happens all the time? Either way I would be extremely concerned about these use cases in any circumstance where the program is vulnerable and rapid, automatic or semi-automatic updates aren't available. My Ubuntu installation prompts me every day to install new updates, but if I want to update e.g. Kiro or Cursor or something it's a manual process - I have to see the pop-up, decide I want to update, go to the download page, etc. These tools are creating huge security concerns for anyone who uses them, pushing people to use them, and not providing a low-friction way for users to ensure they're running the latest versions. In an industry where the next prompt injection exploit is just a day or two away, rapid iteration would be key if rapid deployment were possible. | ||