| ▲ | andrewflnr 3 hours ago |
| People put a lot of weight on blame-free post-mortems and not punishing people who make "mistakes", but I believe that has to stop at the level of malice. Falsifying quotes is malice. Fire the malicious party or everything else you say is worthless. |
|
| ▲ | jemmyw 2 hours ago | parent | next [-] |
| They don't actually say it's a blame-free post-mortem, nor is it worded as such. They do say it's their policy not to publish anything AI-generated unless it's specifically labelled. So the assumption would be that someone didn't follow policy and there will be repercussions. The problem is that people on the Internet, HN included, always howl for maximalist repercussions every time, i.e. someone should be fired. I don't see that as a healthy or proportionate response; I hope they just reinforce that policy and everyone keeps their jobs and learns a little. |
|
| ▲ | anonymous908213 2 hours ago | parent | prev | next [-] |
| Yes. This is being treated as though it were a mistake, and oh, humans make mistakes! But it was no mistake. Possibly whoever was responsible for reviewing the article before publication made a mistake in not catching it. But plagiarism and fabrication require malicious intent, and the authors responsible engaged in both. |
|
| ▲ | blell 2 hours ago | parent | prev | next [-] |
| There’s no malice if there was no intention of falsifying quotes. Using a flawed tool doesn’t count as intention. |
| |
| ▲ | anonymous908213 2 hours ago | parent | next [-] | | Outsourcing your job as a journalist to a chatbot that you know for a fact falsifies quotes (and everything else it generates) is absolutely intentional. | | |
| ▲ | furyofantares 2 hours ago | parent [-] | | It's intentionally reckless, not intentionally harmful or intentionally falsifying quotes. I am sure they would have preferred if it hadn't falsified any quotes. | | |
| ▲ | blactuary 2 hours ago | parent [-] | | He's on the AI beat; if he is unaware that a chatbot will fabricate quotes and didn't verify them, that is a level of reckless incompetence that warrants firing. | | |
|
| |
| ▲ | gdulli 2 hours ago | parent | prev | next [-] | | The tool when working as intended makes up quotes. Passing that off as journalism is either malicious or unacceptably incompetent. | |
| ▲ | kermatt 2 hours ago | parent | prev | next [-] | | Outsourcing writing to a bot without attribution may not be malicious, but it does strain integrity. | | |
| ▲ | InsideOutSanta 2 hours ago | parent [-] | | I don't think the article was written by an LLM; it doesn't read like it. It reads like it was written by actual people. My assumption is that one of the authors used something like Perplexity to gather information about what happened. Since Shambaugh blocks AI company bots from accessing his blog, it did not get actual quotes from him and instead hallucinated them. They absolutely should have validated the quotes, but this isn't the same thing as just having an LLM write the whole article. I also think this "apology" article sucks; I want to know specifically what happened and what they are doing to fix it. |
| |
| ▲ | roxolotl 2 hours ago | parent | prev | next [-] | | The issues with such tools are highly documented though. If you’re going to use a tool with known issues you’d better do your best to cover for them. | |
| ▲ | lapcat 2 hours ago | parent | prev | next [-] | | > Using a flawed tool doesn’t count as intention. "Ars Technica does not permit the publication of AI-generated material unless it is clearly labeled and presented for demonstration purposes. That rule is not optional, and it was not followed here." They aren't allowed to use the tool, so there was clearly intention. | |
| ▲ | andrewflnr 2 hours ago | parent | prev [-] | | They're expected by policy to not use AI. Lying about using AI is also malice. | | |
| ▲ | furyofantares 2 hours ago | parent | next [-] | | It's a reckless disregard for the readers and the subjects of the article. Still not malice though, which is about intent to harm. | | |
| ▲ | andrewflnr 2 hours ago | parent [-] | | Lying is intent to deceive. Deception is harm. This is not complicated. | | |
| ▲ | maxbond 2 hours ago | parent [-] | | I think you're reading a lot of intentionality into the situation. It may be present, but I have not seen information confirming or really even suggesting that it is. Did someone challenge them, "was AI used in the creation of this article?" and they denied it? I see no evidence of that. Seems like ordinary, everyday corner cutting to me. I don't think that rises to the level of malice. Maybe if we go through their past articles and establish it as a pattern of behavior. That's not a defence, to be clear. Journalists should be held to a higher standard than that. I wouldn't be surprised if someone with "senior" in their title was fired for something like this. But I think this malice framing is unhelpful to understanding what happened. | | |
| ▲ | andrewflnr an hour ago | parent [-] | | > Ars Technica does not permit the publication of AI-generated material unless it is clearly labeled and presented for demonstration purposes. That rule is not optional, and it was not followed here. By submitting this work they warranted that it was their own. Requiring an explicit false statement to qualify as a lie excludes many of the most harmful cases of deception. | | |
| ▲ | maxbond an hour ago | parent [-] | | Have you ever gone through a stop sign without coming to a complete stop? Was that dishonesty? You can absolutely lie through omission, I just don't see evidence that that is a better hypothesis than corner cutting in this particular case. I am open to more evidence coming out. I wouldn't be shocked to hear in a few days that there was other bad behavior from this author. I just don't see those facts in evidence, at this moment. And I think calling it malice departs from the facts in evidence. Presumably keeping to the facts in evidence is important to us all, right? That's why we all acknowledge this as a significant problem? |
|
|
|
| |
| ▲ | hibikir 2 hours ago | parent | prev [-] | | We see a typical issue in modern online media: the policy is to not use AI, but the demands for content produced per day make it very difficult not to use AI... so the end result is undisclosed AI. This is happening all over the old blogosphere publications, regardless of who owns them. The ad revenue per article is just not great. |
|
|
|
| ▲ | Ekaros an hour ago | parent | prev | next [-] |
| At this point, anyone reporting on tech should know the problems with AI. As such, even if AI is used for research and the articles are written from that output by a human, there is still an absolute, unquestionable expectation to do the standard manual verification of facts. Not doing it is pure malice. |
|
| ▲ | 2 hours ago | parent | prev | next [-] |
| [deleted] |
|
| ▲ | skybrian 2 hours ago | parent | prev [-] |
| I don’t see how you could know that without more information. Using an AI tool doesn’t imply that they thought it would make up quotes. It might just be careless. Assuming malice without investigating is itself careless. |
| |
| ▲ | anonymous908213 2 hours ago | parent [-] | | We are fucking doomed. Holy shit, we're really at the point where people are just writing off a journalist passing their job off to a ChatGPT prompt as though that's a normal and defensible thing to be doing. | | |
| ▲ | maxbond 2 hours ago | parent [-] | | No one said it was defensible. They drew a distinction between incompetence and malice. Let's not misquote each other here in the comments. | | |
| ▲ | anonymous908213 2 hours ago | parent [-] | | Even if it didn't fabricate quotes wholesale, taking an LLM's output and claiming it as your own writing is textbook plagiarism, which is malicious intent. Then, if you know that LLMs are next-token prediction engines that have no concept of "truth" and are built solely to generate probabilistically likely text, with no specific mechanism anchoring them to "reality" or "facts", and you use that output in a journal that (ostensibly) exists to present factual information to readers, you are engaging in a second layer of malicious intent. It would take an astounding level of incompetence for a tech journal writer not to be aware that LLMs do not generate factual output reliably, and it beggars belief given that one of the authors has worked at Ars for 14 years. If they are that incompetent, they should probably be fired on that basis anyway. But even if they are that incompetent, that still only covers one half of their malicious intent. | | |
| ▲ | maxbond 2 hours ago | parent [-] | | The article in question appears to me to be written by a human (excluding what's in quotation marks), but of course neither of us has a crystal ball. Are there particular parts of it that you would flag as generated? Honestly, I'm just not astounded by that level of incompetence. I'm not saying I'm impressed or that it's okay. But I've heard much worse stories of journalistic malpractice. It's a topical, disposable article. Again, that doesn't justify anything, but it doesn't surprise me that a short summary of a series of forum exchanges and blog posts was low effort. | | |
| ▲ | anonymous908213 an hour ago | parent [-] | | I don't believe there is any greater journalistic malpractice than fabrication. Sure, there are worse cases of such malpractice in the world given the low importance of the topic, but journalists should be reporting the truth on anything they deem important enough to write about. Cutting corners on the truth, of all things, is the greatest dereliction of their duty, and it undermines trust in journalism altogether, which in turn undermines our collective society, as we no longer work from a shared understanding of reality owing to our inability to trust the people who report on it. I've observed that journalists tend to have unbelievably inflated egos and tout themselves as the fourth estate that upholds all of free society, and yet their behaviour does not actually comport with that and is rather actively detrimental in the modern era. I also do not believe this was a genuine result of incompetence. I entertained that it's possible, but that would be the most charitable view possible, and I don't think the benefit of the doubt is earned in this case. They routinely cover LLM stories, the retracted article being about that very subject matter, so I have very little reason to believe they are ignorant about LLM hallucinations. If it were a political journalist or something, I would be more inclined to give the ignorance defense credit, but as it is we have every reason to believe they know what LLMs are and still acted with intention, completely disregarding the duty they owe to their readers to report facts. | | |
| ▲ | maxbond an hour ago | parent [-] | | > I don't believe there is any greater journalistic malpractice than fabrication. Sure, there are worse cases of such malpractice... That's more or less what I mean. It was only a few notches above listicle to begin with. I don't think they intended to fabricate quotes. I think they didn't take the necessary time because it's a low-stakes, low-quality article to begin with, with a short shelf life, so it's only valuable if published quickly. > I also do not believe this was a genuine result of incompetence. So your hypothesis is that they intentionally made up quotes that were pretty obviously going to be immediately spotted and damage their career? I don't think you think that, but I don't understand what alternative you're proposing. I also feel compelled to point out you've abandoned your claim that the article was generated. I get that you feel passionately about this, and you're right to be passionate about accuracy, but I think that may be leading you into ad-hoc argumentation rather than a more rational appraisal of the facts. I think there's a stronger and more coherent argument for your position that you've not taken the time to flesh out. That isn't really a criticism and it isn't my business, but I do think you ought to be aware of it. I really want to stress that I don't think you're wrong to feel as you do, and the author really did fuck up. I just feel we, as a community in this thread, are imputing things beyond what is in evidence, and I'm trying to push back on that. | | |
| ▲ | anonymous908213 an hour ago | parent [-] | | What I'm saying is that I believe they do not care about the truth, and intentionally chose to offload their work to LLMs, knowing that LLMs do not produce truth, because it does not matter to them. Is there any indication that this has damaged their career in any way? It seems to me that it's likely they do not care about the truth because Ars Technica does not care about the truth, as long as the disregard isn't so blatant that it causes a PR issue. > I also feel compelled to point out you've abandoned your claim that the article was generated. As you've pointed out, neither of us has a crystal ball, and I can't definitively prove the extent of their usage. However, why would I have any reason to believe their LLM usage stops merely at fabricating quotes? I think you are again engaging in the most charitable position possible, for things that I think are probably 98 or 99% likely to be the result of malicious intent. It seems overwhelmingly likely to me that someone who prompts an LLM to source their "facts" would also prompt an LLM to write for them - it doesn't really make sense to be opposed to using an LLM to write on your behalf but not be opposed to it sourcing stories on your behalf. All the more so if your rationale as the author is that the story is unimportant, beneath you, and not worth the time to research. | | |
| ▲ | maxbond an hour ago | parent [-] | | > I think you are again engaging in the most charitable position possible, ... Yeah, that's accurate. I will turn on a dime the moment I receive evidence that this was routine for this author or systemic for Ars. But yes, I'm assuming good faith (especially on Ars' part), and that's generally how I operate. I guess I'm an optimist, and I guess I can't ask you to be one. |
|
|
|
|
|
|
|
|