| ▲ | Springtime 8 hours ago |
| Ars Technica being caught using an LLM that hallucinated quotes by the author, and then publishing them in its coverage of this very story, is quite ironic. Even on a forum where I saw the author's original article posted, someone used an LLM to summarize the piece without having read it fully themselves. How many levels of outsourced thinking are occurring before it becomes a game of telephone? |
|
| ▲ | sho_hn 6 hours ago | parent | next [-] |
| Also ironic: the same professionals who advocate "don't look at the code anymore" and "it's just the next level of abstraction" respond with outrage when a journalist hands them an unchecked article. Read through the comments here, mentally replace "journalist" with "developer", and wonder about the standards and expectations in play. Food for thought on whether the users who rely on our software might feel similarly. There are many places to take this line of thinking; e.g. one argument would be "well, we pay journalists precisely because we expect them to check", or "in engineering we have test suites and can test deterministically", but I'm not sure any of them hold up. The "the market pays for the checking" argument might also come to apply to developers reviewing AI code at some point, and those test suites increasingly get vibed into existence and only checked empirically, too. Super interesting to compare. |
| |
| ▲ | adamddev1 an hour ago | parent | next [-] | | Excellent observation. I get so frustrated every time I hear the "we have test suites and can test deterministically" argument. Have we learned absolutely nothing from the last 40 years of computer science? Testing does not prove the absence of bugs. | | | |
| ▲ | boothby 5 hours ago | parent | prev | next [-] | | I look forward to a day when the internet is so uniformly fraudulent that we can set it aside and return to the physical plane. | | |
| ▲ | rkomorn 5 hours ago | parent [-] | | I don't know if I look forward to it, myself, but yeah: I can imagine a future where in-person interactions become preferred again because at least you can trust that the other person is human. Until that also stops being true, I guess. | | |
| |
| ▲ | anonymous908213 2 hours ago | parent | prev | next [-] | | > the same professionals who advocate "don't look at the code anymore" and "it's just the next level of abstraction" respond with outrage when a journalist hands them an unchecked article. I would expect there is literally zero overlap between the "professionals"[1] who say "don't look at the code" and the ones criticising the "journalists"[2]. The former group tend to be maximalists and would likely cheer on the usage of LLMs to replace the work of the latter group, consequences be damned. [1] The people who say this are not professional software developers, by the way. I still have not seen a single case of a vibe coder who makes useful software suitable for deployment at scale. If they make money, it is by grifting and acting as an "AI influencer", for instance Yegge shilling his memecoin for hundreds of thousands of dollars before it was rugpulled. [2] Somebody who prompts an LLM to produce an article and does not so much as fact-check the quotations it produces clearly cannot be described as a journalist, either. |
| ▲ | ffsm8 5 hours ago | parent | prev [-] | | While I don't subscribe to the idea that you shouldn't look at the code, it's a lot more plausible for devs because you do actually have ways to validate the code without looking at it. E.g. you technically don't need to look at the code if it's frontend code and part of the product is an e2e test that produces a video of the correct/full behavior via Playwright or similar (a rough sketch of that kind of test follows below). Same with backend implementations that have instrumentation exposing enough tracing information to determine whether the expected modules were encountered, etc. I wouldn't want to work with coworkers who actually think that's a good idea, though | |
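As a rough illustration of the kind of end-to-end check ffsm8 describes, here is a minimal sketch using Playwright Test in TypeScript. The URL, form labels, and confirmation text are hypothetical placeholders, not anything from the thread:

```ts
// A sketch only: the site, labels, and expected text below are
// hypothetical placeholders for some product's checkout flow.
import { test, expect } from '@playwright/test';

// Record a video of every run of this file (usually configured once in
// playwright.config.ts instead of per-file).
test.use({ video: 'on' });

test('checkout completes and shows a confirmation', async ({ page }) => {
  await page.goto('https://shop.example.test/checkout');

  // Drive the UI the way a user would.
  await page.getByLabel('Card number').fill('4242 4242 4242 4242');
  await page.getByRole('button', { name: 'Pay now' }).click();

  // Assert on observable behavior only; a reviewer can judge the
  // recorded video of this run without reading the implementation.
  await expect(page.getByText('Payment confirmed')).toBeVisible();
});
```

The point of the sketch is that both the assertion and the recorded video concern externally observable behavior, which is exactly the "validate without looking at the code" claim being made (and disputed) above.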
| ▲ | Pay08 2 hours ago | parent [-] | | If you tried this shit in a real engineering discipline, you'd end up either homeless or in prison in very short order. |
|
|
|
| ▲ | sphars 6 hours ago | parent | prev | next [-] |
| Aurich Lawson (creative director at Ars) posted a comment[0] in response to a thread about what happened; the article has been pulled and they'll follow up next week. [0]: https://arstechnica.com/civis/threads/journalistic-standards... |
| |
|
| ▲ | epistasis 7 hours ago | parent | prev | next [-] |
| Yikes. I subscribed to them last year on the strength of their reporting, in a time when it's hard to find good information. Printing hallucinated quotes is a huge shock to their credibility, AI or not. Their credibility was already building up after one of their long-time contributors, a complete troll of a person who was a poison on their forums, went to prison for either pedophilia or soliciting sex from a minor. Some seriously poor character judgement is going on over there. With all their fantastic reporters, I hope the editors explain this carefully. |
| |
| ▲ | singpolyma3 6 hours ago | parent | next [-] | | TBF, even journalists who interview people for real and take notes routinely quote them saying things they didn't say. LLMs make it worse, but it's hardly surprising behaviour from them | |
| ▲ | pmontra 2 hours ago | parent | next [-] | | I've had first-hand knowledge of a couple of news stories in my life. Both were reported quite incorrectly. That was well before LLMs. I assume that every news story is somewhat inaccurate, so I read/hear them to get the general gist of what happened, then I research the details if I care about them. |
| ▲ | epistasis 5 hours ago | parent | prev [-] | | It's surprising behavior coming from Ars Technica. But also, when journalists misquote, it's usually through a different phrasing of something that people actually said, sometimes with different emphasis or even a different meaning. For the people I've known who have been misquoted, it's always been traceable to something they actually did say. |
| |
| ▲ | justinclift 2 hours ago | parent | prev [-] | | > Their credibility was already building up ... Don't you mean diminishing or disappearing instead of building up? Building up sounds like the exact opposite of what I think you're meaning. ;) | | |
| ▲ | zem an hour ago | parent [-] | | I think they meant it had taken a huge hit and was in the process of building up again |
|
|
|
| ▲ | trollbridge 8 hours ago | parent | prev | next [-] |
| The amount of effort to click an LLM’s sources is, what, 20 seconds? Was a human in the loop for sourcing that article at all? |
| |
| ▲ | phire 7 hours ago | parent | next [-] | | Humans aren't very diligent in the long term. If an LLM does something correctly enough times in a row (or close enough), humans are likely to stop checking its work thoroughly enough. This isn't exactly a new problem; we do it with any bit of new software/hardware, not just LLMs. We check its work when it's new, and then tend to trust it over time as it proves itself. But it seems to be hitting us worse with LLMs, as they are less consistent than previous software. And LLM hallucinations are particularly dangerous, because they are often plausible enough to pass the sniff test. We just aren't used to handling something this unpredictable. | |
| ▲ | Waterluvian 7 hours ago | parent | next [-] | | It’s a core part of the job and there’s simply no excuse for complacency. | | |
| ▲ | jatora 7 hours ago | parent | next [-] | | There's not a human alive that isn't complacent in many ways. | |
| ▲ | emmelaich 7 hours ago | parent [-] | | You're being way too easy on a journalist. | | |
| ▲ | nradov 6 hours ago | parent | next [-] | | And too easy on the editor who was supposed to personally verify that the article was properly sourced prior to publication. This is like basic stuff that you learn working on a high school newspaper. | |
| ▲ | jatora 3 hours ago | parent | prev [-] | | lol true |
|
| |
| ▲ | pixl97 7 hours ago | parent | prev | next [-] | | The words on the page are just a medium to sell ads. If shit gets ad views then producing shit is part of the job... unless you're the one stepping up to cut the checks. | | | |
| ▲ | intended 6 hours ago | parent | prev [-] | | This is a first-degree expectation of most businesses. What the OP pointed out is a fact of life. We do many things to ensure that humans don't get "routine fatigue", like pointing at each item before a train leaves the station so your eyes don't glaze over during the safety checklist. This isn't an excuse for the behavior. It's more about what the problem is and what a corresponding fix should address. |
| |
| ▲ | potatoman22 7 hours ago | parent | prev | next [-] | | https://en.wikipedia.org/wiki/Automation_bias | |
| ▲ | zahlman 7 hours ago | parent | prev | next [-] | | There's a weird inconsistency among the more pro-AI people: they expect this output to pass as human, but then don't give it the review that an outsourced human would get. | |
| ▲ | kaibee 6 hours ago | parent [-] | | > but then don't give it the review that an outsourced human would get. It's like seeing a dog play basketball badly. You're too stunned to be like "no, don't sign him to <home team>". |
| |
| ▲ | vidarh 7 hours ago | parent | prev [-] | | The irony is that, while far from perfect, an LLM-based fact-checking agent is likely to be far more diligent (but still needs human review as well), because it's trivial to ensure it has no memory of having already worked through a long list of checks (if you pass e.g. Claude a long list directly in the same context, it is prone to deciding the task is "tedious" and starting to take shortcuts; a rough sketch of the one-check-per-context idea follows below). But at the same time, doing that makes it even more likely that the human in the loop will get sloppy, because there'll be even fewer cases where their input is actually needed. I'm wondering if you need to start inserting intentional canaries to validate whether humans are actually doing sufficiently thorough reviews. |
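To make vidarh's fresh-context point concrete, here is a minimal sketch assuming the Anthropic TypeScript SDK; the model id, prompt wording, and the verifyQuote/checkAll names are illustrative assumptions, not anything vidarh specified:

```ts
// Sketch: verify each quote in its own fresh context, so the model never
// sees the rest of the list and can't decide the batch is "tedious".
// Assumes ANTHROPIC_API_KEY is set; the model id and prompt are
// illustrative placeholders.
import Anthropic from '@anthropic-ai/sdk';

const client = new Anthropic();

async function verifyQuote(quote: string, sourceText: string): Promise<string> {
  // One independent request per quote: no shared conversation history.
  const response = await client.messages.create({
    model: 'claude-sonnet-4-5', // assumed model id
    max_tokens: 200,
    messages: [{
      role: 'user',
      content:
        `Does the source text contain this quote verbatim?\n\n` +
        `Quote: ${quote}\n\nSource:\n${sourceText}\n\n` +
        `Answer FOUND or NOT FOUND, with a one-line reason.`,
    }],
  });
  const first = response.content[0];
  return first.type === 'text' ? first.text : '';
}

// Results still need human review, which is exactly where vidarh expects
// sloppiness (and hence canaries) to come in.
export async function checkAll(quotes: string[], sourceText: string) {
  for (const quote of quotes) {
    console.log(await verifyQuote(quote, sourceText));
  }
}
```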
| |
| ▲ | adamddev1 an hour ago | parent | prev | next [-] | | The problem is that the LLM's sources can themselves be LLM-generated. I was looking up some health question and tried clicking through to see the source for one of the LLM's claims. The source was a blog post that contained an obvious hallucination or false elaboration. | |
| ▲ | prussia 7 hours ago | parent | prev | next [-] | | The kind of people who use an LLM to write news articles for them tend not to be the people who care about mundane things like reading sources or ensuring that what they write bears any resemblance to the truth. | |
| ▲ | kortilla 7 hours ago | parent | prev [-] | | The source would just be the article, which the Ars author used an LLM to avoid reading in the first place. |
|
|
| ▲ | usefulposter 2 hours ago | parent | prev | next [-] |
| Incredible. When Ars pulls an article and its comments, it wipes the public XenForo forum thread too, but Scott's post there was archived. Username scottshambaugh: https://web.archive.org/web/20260213211721/https://arstechni... >Scott Shambaugh here. None of the quotes you attribute to me in the second half of the article are accurate, and do not exist at the source you link. It appears that they themselves are AI hallucinations. The irony here is fantastic. Instead of cross-checking the fake quotes against the source material, some proud Ars Subscriptors proceeded to defend Condé Nast by accusing Scott of being a bot and/or a fake account. EDIT: Page 2 of the forum thread is archived too. This poster spoke too soon: >Obviously this is massive breach of trust if true and I will likely end my pro sub if this isnt handled well but to the credit of ARS, having this comment section at all is what allows something like this to surface. So kudos on keeping this chat around. |
| |
| ▲ | bombcar 2 hours ago | parent [-] | | This is just one of the reasons archiving is so important in the digital era; it's key to keeping people honest. |
|
|
| ▲ | 0xbadcafebee an hour ago | parent | prev | next [-] |
| > How many levels of outsourced thinking are occurring before it becomes a game of telephone How do you know quantum physics is real? Or radio waves? Or even ordinary health advice? We don't. We outsource our thinking about these things to someone we trust, because tracing everything back to its root source would leave us paralyzed. Most people seem never to have thought about the nature of truth and reality, and AI is giving them a wake-up call. Not to worry, though. In 10 years everyone will take all this for granted, the way they take all the rest of the insanity of reality for granted. |
| |
| ▲ | DonHopkins 41 minutes ago | parent [-] | | American citizens are having bad health advice AND PUBLIC HEALTH POLICIES officially shoved down their throats by a man who freely and publicly admits to not being afraid of germs because he snorts cocaine off of toilet seats, appointed by another angry senile old man who recommends injecting disinfectant and shoving an ultraviolet flashlight up your ass to cure COVID. We don't have 10 years left. |
|
|
| ▲ | Lerc 4 hours ago | parent | prev | next [-] |
| Has it been shown or admitted that the quotes were hallucinations, or is it the presumption that all made-up content is a hallucination now? |
| |
| ▲ | Pay08 2 hours ago | parent | next [-] | | You could read the original blog post... | | |
| ▲ | Lerc 18 minutes ago | parent [-] | | How could that prove hallucinations? It could only possibly prove that they are not. If the quotes are in the original post, then they are not hallucinations. If they are not in the post, they could be caused by something that is not an LLM. Misquotes and fabricated quotes existed long before AI, and indeed long before computers. | |
| ▲ | DonHopkins 12 minutes ago | parent [-] | | How could reading the original blog post prove hallucinations??! Now you've moved the goalposts to defending your failure to read the original blog post, by denying it's possible to know anything at all for sure, so why bother reading. So you STILL have not read the original blog post. Please stop bickering until AFTER you have at least done that bare minimum of trivial due diligence. I'm sorry if it's TL;DR for you to handle, but if that's the case, then TL;DC : Too Long; Don't Comment. |
|
| |
| ▲ | DonHopkins 16 minutes ago | parent | prev [-] | | You're as bad as the lazy, incompetent journalists. Just read the post instead of asking questions and pretending to be skeptical when you're really just too lazy to read the article this discussion is about. Then you would be fully aware that the person the quotes are attributed to has stated very clearly and emphatically that he did not say those things. Are you implying he is an untrustworthy liar about his own words when you claim it's impossible to prove they're not hallucinations? |
|
|
| ▲ | neya 5 hours ago | parent | prev | next [-] |
| Ars Technica has always been trash, even before LLMs, and is mostly an advertisement hub for the highest bidder |
|
| ▲ | 6 hours ago | parent | prev | next [-] |
| [deleted] |
|
| ▲ | giobox 7 hours ago | parent | prev | next [-] |
| More than ironic, it's truly outrageous, especially given the site's recent propensity for negativity towards AI. They've been caught red-handed here doing the very things they routinely criticize others for. The right thing to do would be a mea culpa-style post explaining what went wrong, but I suspect the article will simply remain taken down and Ars will pretend this never happened. I loved Ars in the early years, but I'd argue that since the Condé Nast acquisition in 2008 the site has been a shadow of its former self, trading on a formerly trusted brand name that recent iterations simply don't live up to anymore. |
| |
| ▲ | khannn 6 hours ago | parent | next [-] | | Is there anything like a replacement? The three biggest tech sites that I traditionally love are Ars Technica, AnandTech (RIP), and Phoronix. One is in dead-man-walking mode, the second is dead dead, and the last is still going strong. I'm basically getting tech news from social media sites now and I don't like that. | |
| ▲ | remh 6 hours ago | parent | next [-] | | I’ve really enjoyed 404media lately | | |
| ▲ | khannn 6 hours ago | parent [-] | | I like them too. About the only other contender I see is maybe TechCrunch. I need to set up an email address and browser just for sites that require registration. |
| |
| ▲ | gtowey 6 hours ago | parent | prev | next [-] | | In my wildest hopes for a positive future, I hope disenchanted engineers will see things like this as an opportunity to start our own companies, founded on ideals of honesty, integrity, and putting people above profits. I think there are enough of us who are hungry for this, both as creators and consumers, to make goods and services that are truly what people want. Maybe the AI revolution will spark a backlash that leads to a new economy with new values: sustainable businesses which don't need to squeeze their customers for every last penny of revenue, and which are happy to reinvest their profits into their products and employees. Maybe. |
| ▲ | bombcar 2 hours ago | parent | prev [-] | | ServeTheHome has something akin to the old techy feel, but it has its own specific niche. |
| |
| ▲ | antod 7 hours ago | parent | prev | next [-] | | While their audience (and the odd staff member) is overwhelmingly anti-AI in the comments, the site itself doesn't seem to be, editorially. |
| ▲ | jandrewrogers 6 hours ago | parent | prev | next [-] | | Conde Nast are the same people wearing Wired magazine like a skin suit, publishing cringe content that would have brought mortal shame upon the old Wired. | |
| ▲ | emmelaich 7 hours ago | parent | prev | next [-] | | Outrageous, but more precisely it's malpractice, and unethical, not to double-check the result. |
| ▲ | netsharc 7 hours ago | parent | prev [-] | | Probably "one bad apple", soon to be fired, tarred and feathered... | | |
| ▲ | pmontra 2 hours ago | parent | next [-] | | Scapegoats are scapegoats, but in every organization the problems are ultimately caused by its leaders: what they request, what they fail to request, and what they fail to control. |
| ▲ | zahlman 7 hours ago | parent | prev [-] | | If Kyle Orland is about to be fingered as "one bad apple" that is pretty bad news for Ars. | | |
| ▲ | JumpCrisscross 7 hours ago | parent | next [-] | | “Kyle Orland has been the Senior Gaming Editor at Ars Technica since 2012” [1]. [1] https://arstechnica.com/author/kyle-orland/ | |
| ▲ | rectang 6 hours ago | parent | prev [-] | | There are apparently two authors on the byline and it’s not hard to imagine that one may be more culpable than the other. You may be fine with damning one or the other before all the facts are known, zahlman, but not all of us are. | | |
| ▲ | sho_hn 6 hours ago | parent [-] | | I don't read their comment as implying this. It might in fact hint at the opposite; it's far more likely for the less senior author to get thrown under the bus, regardless of who was lazy. |
|
|
|
|
|
| ▲ | llbbdd 7 hours ago | parent | prev [-] |
| Honestly frustrating that Scott chose not to name and shame the authors. Liability is the only thing that's going to stop this kind of ugly shit. |
| |
| ▲ | rectang 6 hours ago | parent [-] | | There is no need to rush to judgment on the internet's instant-gratification timescale. If consequences are coming for the journalist or the publication, they are inevitable. We'll know more in only a couple of days; how about we wait that long before administering punishment? | |
| ▲ | llbbdd 3 hours ago | parent [-] | | It's not rushing to judgement; the judgement has been made. They published fraudulent quotes. Bubbling that liability up to Arse Technica is valuable for punishing them, but the journalist is ultimately responsible for what they publish, too. There's no reason for any publication to ever hire them again when you can hire ChatGPT to lie for you. EDIT: And there's no plausible deniability for this like there is for typos or maligned sources. Nobody typed these quotes out and went "oops, that's not what Scott said". Benj Edwards or Kyle Orland pulled the lever on the bullshit slot machine and attacked someone's integrity with the result. "In the past, though, the threat of anonymous drive-by character assassination at least required a human to be behind the attack. Now, the potential exists for AI-generated invective to infect your online footprint." | |
| ▲ | rectang an hour ago | parent [-] | | We do not yet know just how the story unfolded between the two people listed on the byline. Consider the possibility that one author fabricated the quotes without the knowledge of the other. The sin of inadequate paranoia about a deceptive colleague does not carry the same weight as the sin of deception. Now to be clear, that's a hypothetical, and who knows what the actual story is; but whatever it is, it will emerge in mere days. I can wait that long before throwing away two lives, even if you can't. > Bubbling that liability up to Arse Technica is valuable for punishing them Evaluating whether Ars Technica establishes credible accountability mechanisms, such as hiring an Ombud, is at least as important as punishing individuals. |
|
|
|