blell | 2 hours ago
| There’s no malice if there was no intention of falsifying quotes. Using a flawed tool doesn’t count as intention. |
|
  anonymous908213 | 2 hours ago
  Outsourcing your job as a journalist to a chatbot that you know for a fact falsifies quotes (and everything else it generates) is absolutely intentional.

    furyofantares | 2 hours ago
    It's intentionally reckless, not intentionally harmful or intentionally falsifying quotes. I am sure they would have preferred that it hadn't falsified any quotes.

      blactuary | 2 hours ago
      He's on the AI beat. If he is unaware that a chatbot will fabricate quotes, and didn't verify them, that is a level of reckless incompetence that warrants firing.
|
  gdulli | 2 hours ago
  The tool, when working as intended, makes up quotes. Passing that off as journalism is either malicious or unacceptably incompetent.
|
  kermatt | 2 hours ago
  Outsourcing writing to a bot without attribution may not be malicious, but it does strain integrity.

    InsideOutSanta | 2 hours ago
    I don't think the article was written by an LLM; it doesn't read like one. It reads like it was written by actual people. My assumption is that one of the authors used something like Perplexity to gather information about what happened. Since Shambaugh blocks AI company bots from accessing his blog, it did not get actual quotes from him and instead hallucinated them.

    They absolutely should have validated the quotes, but this isn't the same thing as just having an LLM write the whole article. I also think this "apology" article sucks; I want to know specifically what happened and what they are doing to fix it.
|
  roxolotl | 2 hours ago
  The issues with such tools are well documented, though. If you're going to use a tool with known issues, you'd better do your best to cover for them.
|
  lapcat | 2 hours ago
  > Using a flawed tool doesn't count as intention.

  "Ars Technica does not permit the publication of AI-generated material unless it is clearly labeled and presented for demonstration purposes. That rule is not optional, and it was not followed here."

  They aren't allowed to use the tool, so there was clearly intention.
|
  andrewflnr | 2 hours ago
  They're expected by policy not to use AI. Lying about using AI is also malice.

    furyofantares | 2 hours ago
    It's a reckless disregard for the readers and the subjects of the article. Still not malice, though, which is about intent to harm.

      andrewflnr | 2 hours ago
      Lying is intent to deceive. Deception is harm. This is not complicated.

        maxbond | 2 hours ago
        I think you're reading a lot of intentionality into the situation. That intentionality may be present, but I have not seen information confirming, or really even suggesting, that it is. Did someone challenge them, "Was AI used in the creation of this article?" and they denied it? I see no evidence of that. It seems like ordinary, everyday corner cutting to me. I don't think that rises to the level of malice. Maybe if we go through their past articles and establish it as a pattern of behavior.

        That's not a defence, to be clear. Journalists should be held to a higher standard than that. I wouldn't be surprised if someone with "senior" in their title was fired for something like this. But I think this malice framing is unhelpful to understanding what happened.

          andrewflnr | 2 hours ago
          > Ars Technica does not permit the publication of AI-generated material unless it is clearly labeled and presented for demonstration purposes. That rule is not optional, and it was not followed here.

          By submitting this work, they warranted that it was their own. Requiring an explicit false statement to qualify as a lie excludes many of the most harmful cases of deception.

            maxbond | an hour ago
            Have you ever gone through a stop sign without coming to a complete stop? Was that dishonesty? You can absolutely lie through omission; I just don't see evidence that that is a better hypothesis than corner cutting in this particular case.

            I am open to more evidence coming out. I wouldn't be shocked to hear in a few days that there was other bad behavior from this author. I just don't see those facts in evidence at this moment, and I think calling it malice departs from the facts in evidence. Presumably keeping to the facts in evidence is important to us all, right? That's why we all acknowledge this as a significant problem?
|
    hibikir | 2 hours ago
    We see a typical issue in modern online media: the policy is to not use AI, but the demands of content created per day make it very difficult not to use AI... so the end result is undisclosed AI. This is all over the old blogosphere publications, regardless of who owns them. The ad revenue per article is just not great.
|