| ▲ | gerdesj a day ago |
| My current favourite LLM wankery example is this beauty: https://blog.fahadusman.com/proxmox-replacing-failed-drive-i... Note how it has invented a faster parameter for the zpool command. It is possible that the blog writer hallucinated the faster parameter themselves without needing an LLM - who knows. I think all developers should add a faster parameter to all commands to make them run faster. Perhaps an LLM could create the faster code. I predict an increase in man page reading, and better quality documentation at authoritative sources. We will also improve our skills at finding authoritative sources of docs. My uBlacklist is getting quite long. |
|
| ▲ | rotis 16 hours ago | parent | next [-] |
| How can this article have been written by an LLM? It is dated November 2021. Not judging the article as a whole, but the command you pointed out seems to be correct: faster is the name of the pool. |
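For reference, `zpool replace` takes the pool name as its first operand, so a pool named faster produces a perfectly valid command. A minimal sketch of that reading (the device paths below are hypothetical, not taken from the article):

```shell
# Syntax: zpool replace [-f] <pool> <old-device> [<new-device>]
# Here "faster" is the pool name, not a flag; the device IDs are made up.
zpool replace faster /dev/disk/by-id/ata-OLDDISK_SERIAL /dev/disk/by-id/ata-NEWDISK_SERIAL
```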
| |
▲ | gruez 6 hours ago | parent | next [-] | | >Its date is November 2021 The date can be spoofed. It first showed up on archive.org in December 2022, and there are no captures for the site before then, so I'm inclined to believe the dates are spoofed. | |
▲ | bdhcuidbebe 7 hours ago | parent | prev | next [-] | | There was a lot going on in the years before ChatGPT. Text generation was going strong with interactive fiction before anyone was talking about OpenAI. | |
| ▲ | selcuka 12 hours ago | parent | prev | next [-] | | GPT-2 was released in 2019. ChatGPT wasn't the first publicly available LLM. | |
▲ | victorbjorklund 16 hours ago | parent | prev [-] | | I used LLMs for content generation in July 2021. Of course, that was when LLMs were pretty bad. |
|
|
| ▲ | Henchman21 a day ago | parent | prev [-] |
| What makes you think this was created by an LLM? I suspect they might actually have a pool named faster -- I know I've named pools similarly in the past. This is why I now name my pools after characters from the Matrix, as is tradition. |
| |
▲ | taurath a day ago | parent | next [-] | | This really gets to an acceleration of enshittification. If you can't tell it's an LLM, and there's nobody to verify the information, humanity is architecting errors and mindfucks into everything. All of the markers of what is trustworthy have been co-opted by untrustworthy machines, so all of the ways we'd previously differentiated actors have stopped working. It feels like we're losing truth as rapidly as LLMs can generate mistakes. We've built a scoundrel's paradise. How useful is a library of knowledge when n% of the information is suspect? We're all about to find out. | | |
| ▲ | Henchman21 21 hours ago | parent [-] | | You know, things looked off to me, but thinking it was the output of an LLM just didn't seem obvious -- even though that was the claim! I feel ill-equipped to deal with this, and as the enshittification has progressed I've found myself using "the web" less and less. At this point, I'm not sure there's much left I value on the web. I wish the enshittification wasn't seemingly pervasive in life. | | |
▲ | taurath 16 hours ago | parent [-] | | I believe in people, but I'm starting to think that scrolling is the Fox News or AM radio of a new generation; it just happens to be the backbone of the economy because automation is so much cheaper than people. |
|
| |
▲ | lloeki 18 hours ago | parent | prev [-] | | The pool is named backups according to zpool status and the paragraph right after. But then again, the old device ID doesn't match between the two commands. | | |
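For what it's worth, the pool name is the easiest thing to cross-check in the article's own output: `zpool status` names the pool being reported on in its `pool:` line. An illustrative sketch only (this output is not quoted from the article):

```shell
# The "pool:" line of zpool status output gives the pool's name,
# which is how one could confirm whether the pool is "faster" or "backups".
zpool status backups
#   pool: backups
#   ...
```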
|