jsheard · 7 hours ago:
The problem is less the style itself and more that it's strongly associated with low-effort content that wastes the reader's time. It would be nice to give everything the benefit of the doubt, but humans have finite time and LLMs have infinite capacity for producing trite or inaccurate drivel, so readers end up reflexively using LLM tells as a litmus test for (lack of) quality in order to cut through the noise. You might say, well, it's on the Cloudflare blog so it must have some merit, but after the Matrix incident...
guntars · 5 hours ago:
These AI signals will die out soon. The models overuse certain human writing patterns, humans notice and change how they write, the models are updated, new patterns emerge, and so on. The best signal for the quality of writing will always be the source, even if they're "just" prompting the model. I think we can let one incident slide, but they're on notice.
azangru · 7 hours ago:
> You might say well, it's on the Cloudflare blog so it must have some merit

I would instead say that it was written by James Snell, one of the central figures in the Node community, and therefore must have some merit.