▲ blahgeek | 3 days ago
The worst nightmare would be the maintainers in turn using a large language model to review or apply these patches.
▲ szszrk | 3 days ago
I already have some processes at work that are reviewed by AI only, which means we are advised to use another AI to fill in the data quicker. It's nothing critical, but it's still both scary and hilarious at the same time. Shit on the input, shit on the output: nothing new, just fancier tools.

Asimov's vision of a history so tangled and noisy that no one really knows what is truth and what is legend is happening in front of our own eyes. It didn't take millennia, just a few years of AI companies abusing the knowledge that was available to anyone for free.
▲ the_biot | 3 days ago
Not to one-up you, but my worst nightmare is an open source project where all the maintainers are LLM copy-pasters, with little clue to be had otherwise. And of course it has already happened.

A project I saw mentioned here on HN a while back seemed interesting, and it turned out to be exactly that kind of disaster. They started off as a fork of another project, so they had a working codebase. But the project lead is a grade-A asshole who gets off on being grumpy to people and considers any idea that isn't his ridiculous. Their kernel guy is an actual moron; his output is either (clearly) LLM output or just idiocies. Even the side contributors are 100% chatbot pasters.
▲ signa11 | 3 days ago
And then have another one duke it out with the first to reject the patch. That would be a nice LLM-vs-LLM, prompt-fight-prompt :o)