rablackburn 9 hours ago
> It would be willfully ignorant to pretend that there's not an explosion of a novel and specific kind of stupidity

I 100% know what you mean, and largely agree, but you should check out the guidelines, specifically:

> Don't be curmudgeonly. Thoughtful criticism is fine, but please don't be rigidly or generically negative.

And like, the problem _is_ *bad*. A fun, ongoing issue at work is trying to coordinate with a QA team who believe ChatGPT can write CSS selectors for HTML elements that have not yet been written. That same QA team cares deeply about the spirit of their work, and is motivated by the _very_ relatable sentiment of: you DON'T FUCKING BREAK USER SPACE.

Yeah, in the unbridled, chaotic, raging plasma that is our zeitgeist at the moment, I'm lucky enough to have people dedicating a significant portion of their lives to trying to do quality assurance in the idiomatic, industry best-standard way. Blame the FUD, not my team.

I would put it to you that they simply do not (yet) grok what, for lack of a more specific universally understood term, we are calling "AI" (or LLMs if you are fancy, though of course none of these labels are quite right). People need time to observe and learn. And people are busy with /* gestures around vaguely at everything */.

So yes, we should acknowledge that long-winded trash PRs from AI are a new, emergent problem, and yes, if we study the specific problem more closely we will almost certainly find better approaches. Writing off the issue as "stupidity" is mean. In both senses.
watwut 43 minutes ago | parent
I do not think that is being curmudgeonly. Instead, OP is absolutely right. We have collectively used the strategy of "pretend we are naively stupid and don't talk directly about issues" in multiple areas, and it failed every single time in all of them. It never solves the problem; it just invites bad/lazy/whatever actors to play manipulative semantic games.