mrandish 2 hours ago

When an article is retracted, it's standard to at least mention the title and what specific information was incorrect, so that anyone who may have read, cited, or linked it knows what was inaccurate. That's the whole point of a retraction; without it, this non-standard retraction has no utility except as a fig leaf for Ars to keep external reporting from becoming a bigger story.

In the comments I found a link to the retracted article: https://arstechnica.com/ai/2026/02/after-a-routine-code-reje.... Now that I know which article it is, I know it's one I read. I remember the basic facts of what was reported, but I don't recall the specifics of any quotes. Usually quotes in a news article support or contextualize the facts being reported, so this non-standard retraction leaves me uncertain whether all the facts reported were accurate.

It's also common to provide at least a brief description of how the error happened and the steps the publication will take to prevent future occurrences. I assume any info on how it happened is missing because none of it looks good for Ars, but why no details on policy changes?

Edit to add more info: I hadn't yet read the now-retracted original article on archive.org. Now that I have, I think this may be much more interesting than just another case of "lazy reporter uses LLM to write article". Scott, the person originally misquoted, also suspects something stranger is going on.

> "This blog you’re on right now is set up to block AI agents from scraping it (I actually spent some time yesterday trying to disable that but couldn’t figure out how). My guess is that the authors asked ChatGPT or similar to either go grab quotes or write the article wholesale. When it couldn’t access the page it generated these plausible quotes instead, and no fact check was performed." https://theshamblog.com/an-ai-agent-published-a-hit-piece-on...
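For context, blocking AI crawlers from a blog is typically done with robots.txt rules keyed to each crawler's published user-agent string. A minimal illustration using real crawler names (I have no idea what Scott's blog actually uses, and well-behaved crawlers are the only ones that honor this):

```
# robots.txt -- illustrative only, not Scott's actual configuration
User-agent: GPTBot
Disallow: /

User-agent: ClaudeBot
Disallow: /

User-agent: CCBot
Disallow: /

User-agent: Google-Extended
Disallow: /
```

Note this only works if the agent checks robots.txt at all; a tool that fetches pages directly would sail right past it, which is consistent with the quotes being fabricated rather than scraped.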

My theory is a bit different from Scott's: Ars appears to use an automated tool that adds text links to articles to drive traffic to related articles already on Ars. If that tool is now LLM-based, so it can generate links from concepts instead of just keywords, perhaps it mistakenly has unconstrained access to change other article text! If so, it's possible the author and even the editors are not at fault; the blame could lie with the Ars publishers using LLMs to automate monetization processes downstream of editorial, which might explain the non-standard, vague retraction. If so, that would make for an even more newsworthy article, one directly within Ars' editorial focus.
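To make the distinction concrete (purely hypothetical code; nobody outside Ars knows what their tooling looks like): a classic keyword-based auto-linker can only wrap existing text in anchor tags, so it structurally cannot change the words around them. An LLM asked to "rewrite this paragraph with relevant links added" returns entirely new text, so any hallucination flows straight into the article unless the output is diffed against the input.

```python
import re


def insert_related_links(html: str, link_map: dict[str, str]) -> str:
    """Keyword-based auto-linker (hypothetical sketch, not Ars' real tool).

    Wraps the first whole-word occurrence of each keyword in an <a> tag.
    By construction it only ADDS markup around text that is already there;
    it can never alter the surrounding words, quotes, or facts.
    """
    for keyword, url in link_map.items():
        pattern = re.compile(rf"\b{re.escape(keyword)}\b")
        html = pattern.sub(f'<a href="{url}">{keyword}</a>', html, count=1)
    return html


# An LLM-based replacement has no such guarantee: the model regenerates
# the whole paragraph, so a pipeline that writes its output back without
# diffing against the original could silently rewrite quotes.
```

The safety property here is in the code path, not the model: the regex version cannot emit text it wasn't given, which is exactly the guarantee an unconstrained generate-and-replace step gives up.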

apparent an hour ago | parent | next [-]

In the case of hallucinated quotes, I think the more important aspect is to describe how this happened, whether the author is a regular contributor, how the editors missed it, and what steps are being taken to prevent it from happening in the future.

It's good to issue a correction, and in this case to retract the article. But it doesn't really give me confidence going forward, especially where this was flagged because the misquoted person raised the issue. It's not like Ars' own processes somehow unearthed this error.

It makes me think I should get in the habit of reading week-old Ars articles, whose errors would likely have been caught by early readers.

mrandish an hour ago | parent [-]

> It's not like Ars' own processes somehow unearthed this error.

It might be even worse (and more interesting) than that. I just posted a sister response outlining why I now suspect the fabrication may have actually been caused by Ars' own process. https://news.ycombinator.com/item?id=47027370. Hence, the odd non-standard retraction.

qnleigh 2 hours ago | parent | prev | next [-]

Yes, I just read the retracted article and I can't find anything that I knew was false. What were the fabricated quotes?

trevwilson 2 hours ago | parent | next [-]

This blog post from the person who was falsely quoted has screenshots and an archive link: https://theshamblog.com/an-ai-agent-published-a-hit-piece-on...

mrandish an hour ago | parent | prev [-]

I was wondering the same thing. After I posted above, I followed the archive.org link to the original article and did a quick search for the last four quotes, which the article claims are from Scott's blog. None appear on the linked blog page. The first quote the article attributes to Scott does appear on the linked GitHub comments page.

When I wrote my post above, I hadn't yet read the original article on archive.org. Now that I know the article actually links to the claimed original sources on Scott's blog and GitHub for all the fabricated quotes, how this could have happened is even more puzzling. I now think this may be much more interesting than just another case of "lazy reporter uses LLM to write article".

Ars appears to use an automated tool that adds text links to articles to increase traffic to related articles already on Ars. If that tool is now LLM-based, generating links from concepts instead of just keywords, perhaps it mistakenly has unconstrained access to change other article text! If so, it's possible the author and even the editors are not at fault; the blame could lie with the Ars publishers using LLMs to automate monetization processes downstream of editorial, which might explain the non-standard, vague retraction. If so, that would make for an even more newsworthy article, one directly within Ars' editorial focus.

morninglight 41 minutes ago | parent | prev [-]

This is not a retraction. It is just CYA - Cover your Arse Technica.

They need to enumerate the specific details they fudged.

They need to correct any inaccuracies.

Otherwise, there is little reason to trust Arse Technica in the future.