Habgdnv 3 hours ago
Ok, I am getting mad now. I don't understand something here. Should we open like 31337 different CVEs about every possible LLM on the market, tell them that we are super-ultra-security-researchers, and act shocked when we find out that <model name> will execute commands that it is given access to, based on the input text that is fed into the model? Why do people keep doing these things? Ok, they have free time to do it and like to waste other people's time. Why is this article even on HN? How is this article on the front page? "Shocking news: LLMs will read code comments and act on them as if they were instructions."
simonw 2 hours ago
This isn't a bug in the LLMs. It's a bug in the software that uses those LLMs. An LLM on its own can't execute code. An LLM harness like Antigravity adds that ability, and if it does so carelessly, that becomes a security vulnerability.
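To make that concrete, here is a minimal sketch of what a careless harness can look like. This is hypothetical illustration code, not Antigravity's actual implementation: `call_model` and `careless_agent` are invented names, and `call_model` is just a placeholder for a real LLM API call. The point is that the shell execution lives in the harness, so a prompt injection hidden in an untrusted file becomes code execution.

    import subprocess

    def call_model(prompt: str) -> str:
        """Hypothetical placeholder for a real LLM API call.

        Assumed to return a shell command suggested by the model.
        """
        raise NotImplementedError("wire this up to an actual model provider")

    def careless_agent(repo_file: str) -> None:
        # Untrusted input: a code comment in this file can carry injected
        # instructions, and the model treats them like any other text.
        with open(repo_file) as f:
            untrusted_content = f.read()

        suggestion = call_model(
            "Suggest a single shell command to fix the issues in this file:\n"
            + untrusted_content
        )

        # The vulnerability lives here, in the harness, not in the model:
        # model output is executed directly, with no sandbox, allow-list,
        # or user confirmation, so prompt injection becomes code execution.
        subprocess.run(suggestion, shell=True, check=False)

A safer harness would put the confirmation or sandboxing step between the model's suggestion and the `subprocess.run` call; skipping it is exactly the kind of carelessness being described.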
Wolfenstein98k 3 hours ago
Isn't the problem here that third parties can use it as an attack vector?