| ▲ | joenot443 9 hours ago |
| > One of those unprotected endpoints wrote user search queries to the database. The values were safely parameterised, but the JSON keys — the field names — were concatenated directly into SQL. I was expecting prompt injection, but in this case it was just good ol' fashioned SQL injection, possible only due to the naivety of the LLM which wrote McKinsey's AI platform. |
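The bug described above is easy to reproduce. A minimal sketch in Python with sqlite3 (the `searches` table and `log_search_*` names are hypothetical, not McKinsey's actual schema): the values go through `?` placeholders, but interpolating the attacker-controlled JSON keys into the statement leaves it injectable; validating the field names against a fixed allowlist closes the hole.

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE searches (query TEXT, source TEXT)")

def log_search_unsafe(payload: dict) -> None:
    """VULNERABLE: the values are parameterised, but the JSON *keys*
    (the field names) are concatenated straight into the SQL text."""
    cols = ", ".join(payload)                 # attacker-controlled
    marks = ", ".join("?" for _ in payload)
    sql = f"INSERT INTO searches ({cols}) VALUES ({marks})"
    conn.execute(sql, list(payload.values()))

ALLOWED_FIELDS = {"query", "source"}

def log_search_safe(payload: dict) -> None:
    """Same insert, but field names are checked against a fixed
    allowlist before they ever reach the SQL string."""
    if not set(payload) <= ALLOWED_FIELDS:
        raise ValueError("unexpected field name in payload")
    cols = ", ".join(payload)
    marks = ", ".join("?" for _ in payload)
    conn.execute(f"INSERT INTO searches ({cols}) VALUES ({marks})",
                 list(payload.values()))
```

A key like `"query) VALUES ('x'); --"` becomes part of the statement itself in the unsafe version; the safe version rejects it before any SQL is built.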
|
| ▲ | simonw 9 hours ago | parent | next [-] |
| Yeah, gotta admit I'm a bit disappointed here. This was a run-of-the-mill SQL injection, albeit one discovered by a vulnerability scanning LLM agent. I thought we might finally have a high profile prompt injection attack against a name-brand company we could point people to. |
| |
| ▲ | jfkimmes 8 hours ago | parent | next [-] |
Not the same league as McKinsey, but I like to point to this presentation to show the effects of a (vibe coded) prompt injection vulnerability: https://media.ccc.de/v/39c3-skynet-starter-kit-from-embodied...
> [...] we also exploit the embodied AI agent in the robots, performing prompt injection and achieve root-level remote code execution.
| ▲ | TheDong 8 hours ago | parent | prev | next [-] |
GitHub Actions has had a bunch of high-profile prompt injection attacks at this point, most recently the Cline one: https://adnanthekhan.com/posts/clinejection/
I guess you could argue that GitHub wasn't vulnerable in this case, but rather the author of the action, but it seems like it at least rhymes with what you're looking for.
| ▲ | simonw 7 hours ago | parent [-] |
Yeah that was a good one. The exploit was still a proof of concept though, albeit one that made it into the wild.
| |
| ▲ | danenania 8 hours ago | parent | prev [-] |
> I thought we might finally have a high profile prompt injection attack against a name-brand company we could point people to.
These folks have found a bunch: https://www.promptarmor.com/resources
But I guess you mean one that has been exploited in the wild?
| ▲ | simonw 7 hours ago | parent [-] |
Yeah, I'm still optimistic that people will start taking this threat seriously once there's been a high-profile exploit against a real target.
|
|
|
| ▲ | 3abiton 5 hours ago | parent | prev | next [-] |
I just wonder how much professional-grade code written by LLMs, "reviewed" by devs, and committed has made similar or worse mistakes. A funny consequence of the AI boom, especially in coding, is the eventual rise in demand for security researchers.
| |
| ▲ | IshKebab 3 hours ago | parent [-] |
In fairness, although "the industry" learns best practices like using SQL prepared statements, not sanitising via blacklists, CSRF protection, etc., there's a constant stream of new programmers who have simply never heard of these things. It doesn't help that when these lessons are learned, the only way we prevent the mistake in future is by talking about it, which doesn't reach newbies. Nobody goes and fixes SQL APIs so that you can only pass compile-time constant strings as the statement, or whatever. Newbies just have to magically know to do that.
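For what it's worth, the "compile-time constant strings only" idea does exist in typed Python as PEP 675's `LiteralString`. A sketch (the `run_query` wrapper is made up for illustration): a type checker such as mypy or pyright will reject any SQL argument assembled from runtime data, while at runtime this stays an ordinary parameterised call. The `__future__` import keeps the annotation lazy so the sketch also runs on Pythons older than 3.11, where `typing.LiteralString` doesn't exist.

```python
from __future__ import annotations  # keep annotations unevaluated at runtime

import sqlite3
from typing import TYPE_CHECKING

if TYPE_CHECKING:
    # Only the type checker needs this name (Python 3.11+ in typing).
    from typing import LiteralString


def run_query(conn: sqlite3.Connection, sql: LiteralString,
              params: tuple = ()) -> sqlite3.Cursor:
    # PEP 675: `LiteralString` means only string literals (or
    # concatenations of literals) type-check here. A statement built
    # from user input is flagged before the code ever ships.
    return conn.execute(sql, params)


conn = sqlite3.connect(":memory:")
run_query(conn, "CREATE TABLE users (id INTEGER, name TEXT)")
run_query(conn, "INSERT INTO users VALUES (?, ?)", (1, "alice"))

# A type checker would reject this, since the SQL is built at runtime:
#   table = input()
#   run_query(conn, f"SELECT * FROM {table}")  # error: not a LiteralString
```

It's opt-in rather than a fix to the SQL APIs themselves, so a newbie still has to know about it, but it shows the enforcement is technically possible.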
|
|
| ▲ | doctorpangloss 8 hours ago | parent | prev | next [-] |
| The tacit knowledge to put oauth2-proxy in front of anything deployed on the Internet will nonetheless earn me $0 this year, while Anthropic will make billions. |
|