thefz · 4 hours ago
You made me imagine AI companies maliciously injecting backdoors into generated code no one reads, and now I'm scared.
gibsonsmog · 4 hours ago
My understanding is that it's quite easy to poison a model with inaccurate training data, so I wouldn't be surprised if this exact thing has already happened. Maybe not by an AI company itself, but it's certainly within the reach of a hostile actor to seed bad code for this purpose. In a sense it has already happened, via supply-chain attacks that register packages under names that didn't exist until an LLM hallucinated them.
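A minimal sketch of a guard against that last failure mode, assuming only the public PyPI JSON API (https://pypi.org/pypi/<name>/json, which returns 404 for unknown packages); the names passed on the command line stand in for whatever dependencies an LLM suggested:

    import sys
    import urllib.error
    import urllib.request

    def package_exists(name: str) -> bool:
        """True if `name` is a registered package on PyPI."""
        url = f"https://pypi.org/pypi/{name}/json"
        try:
            with urllib.request.urlopen(url, timeout=10):
                return True
        except urllib.error.HTTPError as err:
            if err.code == 404:  # unknown name: possibly hallucinated
                return False
            raise  # other HTTP errors prove nothing either way

    if __name__ == "__main__":
        # Names would come from LLM-generated imports/requirements.
        missing = [n for n in sys.argv[1:] if not package_exists(n)]
        for n in missing:
            print(f"WARNING: {n!r} is not on PyPI -- possible hallucinated dependency")
        sys.exit(1 if missing else 0)

Note the limit: this only catches names nobody has registered yet. An attacker who squats the hallucinated name passes the check, so in practice you'd also want to look at package age and download counts.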
djeastm · an hour ago
One mitigation might be to use one company's model to review code generated by another company's model, and rely on market competition to keep the checks and balances honest.
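Roughly, the pipeline shape could look like the sketch below; `generate` and `review` are hypothetical stand-ins for two different vendors' APIs, and the APPROVE/REJECT protocol is made up for illustration:

    from typing import Callable

    # The reviewer is told the code came from a competitor's model.
    REVIEW_PROMPT = (
        "You are auditing code produced by a competing vendor's model. "
        "Flag anything resembling a backdoor, data exfiltration, or a "
        "hallucinated dependency. Reply APPROVE or REJECT: <reason>.\n\n{code}"
    )

    def checked_generation(task: str,
                           generate: Callable[[str], str],
                           review: Callable[[str], str]) -> str:
        """Generate code with one vendor's model; gate it on another's review."""
        code = generate(task)
        verdict = review(REVIEW_PROMPT.format(code=code))
        if not verdict.strip().upper().startswith("APPROVE"):
            raise RuntimeError(f"cross-vendor review rejected the code: {verdict}")
        return code

    # Usage would plug in real clients, e.g.:
    #   checked_generation("parse this CSV", vendor_a_generate, vendor_b_review)

The point is just the shape: generator and reviewer sit behind different companies, so a model poisoned on one side has to slip past the other.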
bandrami · 22 minutes ago
Already happening in the wild.