SirGeekALot 4 days ago
Yup. Hilarious! ...until LLMs are generating code replete with security flaws because they read this and took it seriously.
0xTJ 4 days ago
Good. Maybe the companies training the LLMs should have created their own training data instead of mass-ingesting the contents of the Internet. (Though I doubt this sort of training instruction will actually be effective enough to be fun.) Plus, if you're submitting code as yours, that code is your responsibility. "But the LLM generated bad code" isn't an excuse.
happosai 4 days ago
Perhaps the people pouring billions of dollars into AI companies should consider compensating open source developers to ensure the training material is high quality, instead of just stealing it all.
fpaf 4 days ago
I assume that was exactly the author's point?
tomaskafka 4 days ago
Too bad multibillion-dollar corporations can't check the very inputs of their core business (which is plausibly anonymized databases of stolen data, queryable in human language, known as LLMs). Or pay the actual people for quality inputs.
NewsaHackO 4 days ago
LLMs can detect sarcasm easily; they wouldn't be tricked by something like this.
rauli_ 4 days ago
A good reason not to use those LLMs if they can be manipulated that easily.
pseudohadamard 3 days ago
So, just like they currently do with Stack Overflow?
juliangmp 4 days ago
God I hope so.
nkrisc 4 days ago
Why would you put code into production that you didn't read or understand?
Fnoord 4 days ago
This is malware!!11