Ars Technica fires reporter after AI controversy involving fabricated quotes(futurism.com)
184 points by danso 6 hours ago | 102 comments
AnonC 2 hours ago | parent | next [-]

Journalists and bloggers usually write about others’ mess-ups and apologies, dissecting which apologies are authentic and which are non-apologies.

In this incident, Aurich Lawson of Ars Technica deleted the original article (which had LLM-hallucinated quotes) instead of updating it with a correction. He then published a vague non-apology, just like large companies and politicians usually do. And now we learn that this reporter was fired, and yet Ars Technica hasn’t published even a snippet of an article about it.

There’s something to be said about the value of owning up to issues and being forthright with actions and consequences. In this age of indignation and fear of being perceived as weak or vulnerable due to honesty, I would’ve thought that Ars would be or could’ve been a beacon for how things should be talked about.

It’s sad to see Ars Technica at this level.

Gagarin1917 6 minutes ago | parent | next [-]

They’re at this level because the editors have always had low standards.

I don’t know about you guys, but I feel like 50% of Ars headlines are completely misleading.

They’ve had this problem for years. They will publish anything that gets them clicks. They do not care if a writer makes things up. They do not care if their headlines are misleading - in fact, that’s the point. They clearly got into the job in order to influence and manipulate people.

They’re bad people, with terrible motivations, and unchecked power. They only walk back when something really really bad happens.

Never trust an Ars headline.

14 8 minutes ago | parent | prev | next [-]

Where I work in healthcare, honesty and owning up are encouraged, and unless there is major negligence it's not often punished. They just want to learn why the mistake happened and look for ways to prevent it going forward. My buddy said that at his company, if an accident happens, WorkSafe is not out to punish as long as they are forthcoming and honest. Again, they want to learn how to avoid it happening again. Punishment only scares others into hiding mistakes.

I think they missed a big opportunity: instead of firing the guy, sit him down and stress how not okay this was, that it harms their credibility, and that he needs to understand that and make a proper apology. They could make him do some education, like ethical reporting responsibilities or whatever.

Then, like you say, don't just hide the article but point out the mistakes and corrections. Describe the mistake, explain how credible reporting is their priority, and note that the author will be given further education to avoid this happening again. They could also make new policies, like requiring that, going forward, all articles that use AI for search results must attempt to find a source for that information. This would build trust, not harm it, in my opinion.

petterroea an hour ago | parent | prev | next [-]

It seemed to me like very hasty self-defense. There's a lot of AI slop hate out there, and Ars can't risk becoming known for slop when their readers are probably more aware of the issue than most.

I don't think Ars felt they had a choice but to cut off the journalist who made the mistake, especially when it was regarding a very touchy subject. It's impossible for us readers to know if this was a single lapse of judgement or a bad habit. Regardless, the communication should have been better.

esperent an hour ago | parent | next [-]

All they had to do was write a clear and simple message saying that one of their staff was responsible, has been fired, and they'll take steps to avoid this in future.

Their actions so far just make me think they're panicking and found a scapegoat to blame it on, but they're not going to put any new checks in place so it'll just happen again.

DetroitThrow 42 minutes ago | parent [-]

It was against their policy to use AI in producing any part of the final article, and the writer was aware of that.

I feel bad for the guy, but there's just no way I can imagine much better safeguards other than editors paying closer attention to referenced sources, and hiring more reliable people.

autoexec 12 minutes ago | parent | next [-]

> It was against their policy to use AI in producing any part of the final article, and the writer was aware of that.

More than that, as a reporter on AI he should have been fully aware that AI frequently bullshits and lies. He should have known it was not reliable and that its output needs to be carefully verified by a human if you care at all about the accuracy or quality of what it gives you. His excuse that this was done in a fever-induced state of madness feels weak when it was his whole job to know that AI was not an appropriate tool for the task.

esperent 23 minutes ago | parent | prev | next [-]

Yes, those are exactly the kind of steps they would need to publicly commit to in order to retain trust. And yet, instead we get silence, no acceptance that some measure of responsibility falls on the editorial team here. So it's clear they just hope it'll blow over without them having to do anything, which is the opposite of what a trustworthy site would do.

tonyedgecombe 12 minutes ago | parent | prev [-]

You have to give them time to do the job properly as well. Companies will often pay lip service to standards then squeeze their staff so much those standards are impossible to attain.

gertop an hour ago | parent | prev [-]

AnonC doesn't seem to be upset that the journalist was fired. The disappointment comes from Ars trying to brush this entire situation away by deleting articles, comments, and making no statement on their website.

petterroea an hour ago | parent [-]

My understanding is that AnonC is upset at Ars not taking the mature approach by allowing this to become a learning moment for the employee and using it to double down and confirm their stance on AI generated content. There's strength in maturity. But I am doing some reading between the lines, and I'm possibly reading a bit too much into "There’s something to be said about the value of owning up to issues"

Reminds me of a story I was told as an intern deploying infra changes to prod for the first time. Some guy had accidentally caused hours of downtime and was expecting to be fired, only for his boss to say, "Those hours of downtime are the price we pay to train our staff (you) to be careful. If we fire you, we throw that investment out the window."

bandrami 2 minutes ago | parent | next [-]

"Make sure quotes in your article are things the subject actually said to you" is not something that should need a "learning moment".

lynx97 24 minutes ago | parent | prev [-]

There is a difference between making an error and totally misunderstanding your actual task. I have absolutely no sympathy for journalists getting caught producing hallucinated articles. That's an absolute no-go, and should always result in that person being fired.

vpribish an hour ago | parent | prev | next [-]

This has just happened - I'm giving Ars a bit more time to come out with a piece examining the situation. They're a pretty good operation, I think. But if they don't...

doctorpangloss an hour ago | parent | prev [-]

You're participating in a social media site where something like 20% of the articles have become "I told Claude Code to do something and write this article about it." So put your money where your mouth is: if you think it's sad, and if this is more than concern trolling, hit Ctrl+W.

aizk 2 hours ago | parent | prev | next [-]

I have a story with Benji.

Last year I went viral, and Benji was the first person to interview me. It was a really cool experience, we chatted via Twitter DMs, and he wrote a piece about my work - overall he did a decent job.

Then, 6 months later a separate project I was adjacent to was starting to pick up steam. I reached out to him asking if he wanted to cover us. No response.

Then, TechCrunch wrote an article on our project.

I reached out to Benji again saying "Hey would you like to chat again, now we have some coverage?" And he finally responded, but said he couldn't report on me because he had a directive that he could only report on things that didn't have any prior or pre-existing coverage (?)

I thought that was rather strange, especially since we already had built up a relationship.

I don't really have a moral or lesson to this story, other than that journalism can be rather opaque sometimes.

Oh, one other tip for anyone reading this - if you ever get reached out to by journalists, communicate in writing, not over a phone call, so you can be VERY precise in your wording.

areoform 2 hours ago | parent | next [-]

Sometimes people get busy and overwhelmed, but they don't know how to say no.

epistasis 2 hours ago | parent [-]

I know a lot of people that don't get through their email every week, for example. Even saying no takes too much time, with the volume of communication required by daily work.

lovich an hour ago | parent | prev [-]

You're an account created after LLMs were publicly available, and you don't have any readily available links to public accounts that verify your identity.

I am assuming that this comment is about as accurate as what got the journalist in question fired, for the same reasons.

jmward01 7 minutes ago | parent | prev | next [-]

You will never get the internet to agree on how incident X should have been handled. The world right now is scrambling to figure out AI and its place. Just when you think you understand, the ground shifts. It is clear that in the future this exact use of AI will be expected and will work, on average, way better than a person. I know that a lot of people probably have an emotional 'no it won't!' reaction and disagree with me here, but there have been so many 'no it won't! never!' moments in the last two years that I can't imagine this won't also be one. With that in mind, I don't think it is reasonable to fire this journalist. They used a tool too soon, but it is really hard to figure out what is too soon right now. This should have been a moment of reflection for their newsroom (and probably some private conversations), but it turned into a firing, which I think is too much. Did the newsroom gain from that? Will it prevent them from doing it again? Did it fix the original mistake? I don't think the answer is 'yes' to any of these questions. A good retraction, an apology, a statement on how they are changing and will review new technology entering the newsroom in the future - those help.

Gigachad 3 minutes ago | parent [-]

The problem is accountability. If your name is on the article, this is your work. If you publish an article with fabricated quotes, it’s your fault regardless of whether an AI tool was used, since you hit the button at the end to sign off on it.

geerlingguy 2 hours ago | parent | prev | next [-]

Context from earlier discussion of the article being pulled: https://news.ycombinator.com/item?id=47009949

dang 2 hours ago | parent [-]

Thanks! and indeed - here's the sequence (in the usual reverse order). If there are missing threads we can add them...

OpenClaw is dangerous - https://news.ycombinator.com/item?id=47064470 - Feb 2026 (93 comments)

An AI Agent Published a Hit Piece on Me – Forensics and More Fallout - https://news.ycombinator.com/item?id=47051956 - Feb 2026 (82 comments)

Editor's Note: Retraction of article containing fabricated quotations - https://news.ycombinator.com/item?id=47026071 - Feb 2026 (205 comments)

An AI agent published a hit piece on me – more things have happened - https://news.ycombinator.com/item?id=47009949 - Feb 2026 (624 comments)

AI Bot crabby-rathbun is still going - https://news.ycombinator.com/item?id=47008617 - Feb 2026 (30 comments)

The "AI agent hit piece" situation clarifies how dumb we are acting - https://news.ycombinator.com/item?id=47006843 - Feb 2026 (125 comments)

An AI agent published a hit piece on me - https://news.ycombinator.com/item?id=46990729 - Feb 2026 (951 comments)

AI agent opens a PR write a blogpost to shames the maintainer who closes it - https://news.ycombinator.com/item?id=46987559 - Feb 2026 (750 comments)

swyx 13 minutes ago | parent | next [-]

also how the heck do you pull all these related things, do you have a semantic/agentic search bot by now or is this all just from your head?

vpribish 44 minutes ago | parent | prev [-]

dang, we appreciate all you do. thanks

raincole 2 hours ago | parent | prev | next [-]

I have to admit, nowadays Google AI Overview's accuracy is so good that I often don't check the links. It's scary that it got from 'practically useless' to 'the actual google search' in less than two years.

I really don't know where the internet is heading or how any content site can survive.

palmotea 25 minutes ago | parent | next [-]

> I have to admit, nowadays Google AI Overview's accuracy is so good that I often don't check the links. It's scary that it got from 'practically useless' to 'the actual google search' in less than two years.

It says things I know to be false fairly regularly. I don't keep a log or anything, but it's left an impression that it's far from reliable.

SchemaLoad 2 hours ago | parent | prev | next [-]

It's because the AI overview is, most of the time, directly summarizing the search results rather than synthesizing an answer from internal model knowledge, which is why it can hyperlink the sources for its facts now. Even a very dumb, lightweight model can extract relevant text from articles.

I just can't see how this is sustainable since they are stealing from the sources who are now getting defunded.

jrmg 3 minutes ago | parent | next [-]

You can see the fall in real time - half the sources are also dubious AI slop now and that number’s only growing :-/

Gigachad a few seconds ago | parent [-]

At work the conversation is that everyone is using LLMs now, yet we receive virtually no traffic through them. The LLMs scrape our data, provide an answer to the user, and we see nothing from it.

raincole an hour ago | parent | prev [-]

> I just can't see how this is sustainable since they are stealing from the sources who are now getting defunded.

Yeah, that's why I said I don't know where the internet is heading.

Kwpolska 8 minutes ago | parent | prev | next [-]

Try searching for something niche. You'll get a confidently wrong and often condescending answer.

pseudalopex an hour ago | parent | prev | next [-]

> I have to admit, nowadays Google AI Overview's accuracy is so good that I often don't check the links.

How would you know?

The links often contradict or do not support the overviews, in my experience.

dirkc 22 minutes ago | parent | prev | next [-]

Well, I hope you take this story as a caution that you shouldn't do that in any way that can seriously compromise your career/health/finances.

deathanatos an hour ago | parent | prev | next [-]

You should be checking the links more often, IMO. I've seen it respond a number of times with content that is not supported by the citations.

While trying to find an example by going back through my history though, the search "linux shebang argument splitting" comes back from the AI with:

> On Linux and most Unix-like systems, the shebang line (e.g., #!/bin/bash ...) does not perform argument splitting by default. The entire string after the interpreter path is passed as a single argument to the interpreter.

(that's correct) …followed by:

> To pass multiple arguments portably on modern systems, the env command with the -S (split string) option is the standard solution.

(`env -S` isn't portable. IDK if a subset of it is portable, or not. I tend to avoid it, as it is just too complex, but let's call "is portable" opinion.)

(edited out a bit about the splitting on Linux; I think I had a different output earlier saying it would split the args into "-S" and "the rest", but this one was fine.)

> Note: The -S option is a modern extension and may not be available

But this… so which is it?
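
For what it's worth, the two behaviors are easy to demonstrate yourself. A minimal sketch, assuming a Linux box with GNU coreutils >= 8.30 (for `env -S`) and `python3` on the PATH; the /tmp filenames are arbitrary:

```shell
# Without -S: the kernel passes everything after the interpreter path as ONE
# argument, so env is asked to exec a program literally named "python3 -u".
cat > /tmp/shebang-nosplit <<'EOF'
#!/usr/bin/env python3 -u
print("never reached")
EOF
chmod +x /tmp/shebang-nosplit
/tmp/shebang-nosplit 2>/dev/null || echo "nosplit failed, as expected"

# With -S: env itself splits the single string back into separate arguments
# before exec'ing, so python3 is found and -u is passed as its own flag.
cat > /tmp/shebang-split <<'EOF'
#!/usr/bin/env -S python3 -u
print("args split ok")
EOF
chmod +x /tmp/shebang-split
/tmp/shebang-split
```

Whether `-S` is available at all depends on which `env` you get - it's a GNU/BSD extension, not POSIX - which is the portability caveat the overview glosses over.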

lucaspfeifer an hour ago | parent | prev | next [-]

It is scary but also exciting. As long as there are humans making informed decisions, there will be demand for quality sources of information. But to keep up with AI, content sites will need to raise their standards. Less intrusive ads, less superficial stuff, more in-depth articles with complex yet easily navigable structure, with layers of citations, diagrams, data, and impeccable accuracy. News articles with the technical depth of today's dissertations.

techpression an hour ago | parent [-]

For AI to steal and summarize without attribution. The sites you talk about exist today but are dying because of AI.

ajkjk 26 minutes ago | parent | prev | next [-]

I know people love to hate on the AI overviews, and I'm a person who generally hates both google and AI. But--I see them as basically good and ideal. After all, most of the time I am googling something trivial, like a simple fact. And for the last decade, when I had to click into sites for the information, it was some SEO spam-ridden garbage site. So I am very glad to not have to interact with those anymore.

Of course Google gets little credit for this since it was their own malfeasance that led to all the SEO spam anyway (and the horrible expertsexchange-quality tech information, and the stupid recipe sites that put life stories first)... but at least now there is backpressure against some of the spammy crap.

I am also convinced that the people here reporting that the overviews are always wrong are... basically lying? Or, more likely, applying some serious negative bias to the pattern they're reporting. The overviews are wrong sometimes, yes, but surely it is like 10% of the time, not always. Probably they're biased because they're generally mad at google, or at AI being shoved in their face in general, and I get that... but you don't make the case against google/AI stronger by misrepresenting it; it is a stronger argument if it's accurate and resonates with everyone's experiences.

autoexec 4 minutes ago | parent [-]

> -I see them as basically good and ideal. After all most of the time I am googling something like trivial, like a simple fact. And for the last decade when I have to click into sites for the information it's some SEO spam-ridden garbage site.

What good is it if the overviews lie some percentage of the time (your own guess is 10%) and you have to search anyway to verify that they aren't making shit up? Also, those SEO spam-ridden garbage sites Google feeds you whenever you bother to look past the undependable AI summaries are mostly written by AI these days, and prone to the same lying problem, which isn't helping anybody.

krige an hour ago | parent | prev | next [-]

I have seen it be utterly wrong so many times recently that I'm considering permanently hiding it. For instance, googling "Amiga twin stick games" listed a number of old, top-down, very much single-stick games like Alien Breed as examples.

croes 2 hours ago | parent | prev | next [-]

It will cycle.

Without the content sites, the AI overviews will become useless.

archagon an hour ago | parent | prev [-]

Uh, really? In my experience, at least a quarter of the info it gives me is usually manufactured or incorrect in some critical way.

In fact, if you switch to "Pro" mode, it frequently says the complete opposite of what it claimed in "Fast" mode while still being ~10-20% wrong. (Not to say it's not useful — there's no better way to aggregate and synthesize obscure information — but it should definitely not be relied on as a source of anything other than links for detailed follow-up.)

rahimnathwani 2 hours ago | parent | prev | next [-]

The headline says Ars fired the reporter, but AFAICT the article doesn't include any facts that indicate this. All we know is that he no longer works there, and that Ars refused to provide any additional information.

Kwpolska a minute ago | parent [-]

Neither side has issued a statement about what happened, but Benj’s Bluesky post does not read like the post of someone who resigned over this.

breput an hour ago | parent | prev | next [-]

As much as I respect the site and gladly financially support it, this is ultimately a failure of Ars Technica and its editors. If there are any.

If this were just some random blogger, then yes the blame is totally theirs. But this was published under the Ars Technica masthead and there should have been someone or something double checking the veracity of the contents.

That said, there are a number of Ars Technica contributors who are among the best in their fields: Eric Berger, Dan Goodin, Beth Mole, Stephen Clark, and Andrew Cunningham, among many others, so one f'up shouldn't really impugn the entire organization.

AceJohnny2 a minute ago | parent [-]

> That said, there are a number of Ars Technica contributors that are among the best in their fields

I miss Maggie Koerth & Jon Stokes

aidenn0 2 hours ago | parent | prev | next [-]

I don't know that this is what happened here, but any time there is a push to do more with less, you end up rewarding people who take shortcuts over those who do a proper job, and from the outside, it looks like journalism has a push to do more with less.

JumpCrisscross 3 hours ago | parent | prev | next [-]

“Edwards also stressed that his colleague Kyle Orland, the site’s senior gaming editor who co-bylined the retracted story, had ‘no role in this error.’”

Has Orland issued a real apology? He bylined a piece containing fraudulent quotes.

schiffern 3 hours ago | parent [-]

"I always have and always will abide by that rule to the best of my knowledge at the time a story is published."

Nothing suspicious about heavy use of qualifiers in a non-apology blanket denial. Where's the Polymarket for whether this guy has a job next month?

https://www.404media.co/ars-technica-pulls-article-with-ai-f...

JumpCrisscross 2 hours ago | parent [-]

> whether this guy has a job next month?

That’s a problem. If he really hasn’t apologized, neither he nor Ars have recognized there is a problem, which means it will happen again.

slg 2 hours ago | parent [-]

Is there something to the story that I'm missing? Why does Orland need to apologize? Edwards fabricated the quotes via AI and seemingly presented them to Orland as authentic. Orland had no reason to suspect the quotes weren't real until after publishing.

When journalists are working on a shared byline, they don't each do the same research in order to fact-check each other. There is inherently a level of trust required for collaborating like this and Edwards violated that trust.

You can say this is a failure by the editorial process for not including fact checking, but that is an organizational issue with Ars, it's not the fault of Orland for failing to duplicate the work that he believed his coauthor did.

Marsymars an hour ago | parent [-]

Yeah, consider the same thing in other domains - e.g. say you're doing some code review and the PR author is a coworker you've known for years, and they include a comment with a link to some canonical documentation along with a verbatim quote from said doc explaining usage of something in the PR. If the quote and usage both make sense in the context, I'm not going to be habitually clicking through to the docs to verify that the quote isn't actually fabricated.

lich_king 2 hours ago | parent | prev | next [-]

I clicked through the author's earlier stories when this first made waves. I obviously had no proof, but I was pretty certain that he'd been using LLMs to generate stories for a good while.

When Ars released a statement saying this was an isolated incident, my reaction was "they probably didn't look too hard". I suspect they did, in the end?

Marsymars an hour ago | parent | next [-]

In defense of that, his writing style was basically the same long before LLMs.

nsxwolf 2 hours ago | parent | prev [-]

Sad if true. I used to really enjoy reading his freelance articles in various publications pre-AI.

fp64 an hour ago | parent | prev | next [-]

Sad state of things. He did it because he was sick? That's close to claiming his dog ate the original quotes so he had to make some up.

Well, Ars Technica has been on my ignore list for quite some time already, and this further solidifies its place there.

0xbadcafebee 29 minutes ago | parent | prev | next [-]

I guess Blameless Postmortems haven't arrived in journalism yet.

Pretty weird that journalism as a business still revolves around "we hired a guy to write a thing, and he's perfect. oh wait, he's not perfect? it was all his fault. we've hired a new perfect guy, so everything's good now." My dudes... there are many ways you can vet information before publishing it. I get that the business is all about "being first", but that also seems to imply "being the first to be wrong".

I feel bad for the reporters. People seem to be piling onto them like they're supposed to be superhuman, but actually they're normal people under intense pressure. People fail, it's human. But when an organization fails, it's a failure of many people, not one.

bragr 2 hours ago | parent | prev | next [-]

The headline is a bit sensational considering all we know from the reporting is that he isn't working there anymore. Fired? Likely, sure, but not known for a fact.

gigatexal 18 minutes ago | parent | prev | next [-]

This is good. They had to distance themselves from a journalist who would do such a thing. But this is more or less on the editor I think. So let’s see if they learn from this.

Gagarin1917 11 minutes ago | parent | prev | next [-]

Ars Technica’s editors fabricate misleading headlines all the fucking time.

The editors are the ones ultimately responsible for what they publish. Yet they’re not taking responsibility.

ModernMech 25 minutes ago | parent | prev | next [-]

I'm very bad with names and quotes, so sometimes I'll ask ChatGPT something like "what's that famous quote Brian Kernighan said about programming language names" and it will just make shit up, when really I was thinking about Donald Knuth. But according to ChatGPT, Kernighan famously said:

  “Everyone knows that Perl is designed to make easy things easy, and hard things possible, but nobody knows why it’s called Perl.”
Which of course returns 0 results on Google.

sl0pmaestro 2 hours ago | parent | prev | next [-]

Happy to see some accountability here, although it's unclear why the other co-author who stamped their name on that article was retained. Maybe they just stamped their name to meet their quota of articles. In any case, this follow-up action makes me take Ars Technica's standards a bit more seriously.

vadansky 2 hours ago | parent | prev | next [-]

Good time to watch Shattered Glass.

Imagine what he could have gotten up to with LLMs.

shadowgovt an hour ago | parent | prev | next [-]

That was wise. It was an honest mistake, but a direct hit to his credibility that made not just him, but the paper, look sloppy, in an era where people are deeply concerned about journalistic pedigree.

Barrin92 2 hours ago | parent | prev | next [-]

people have said enough about the ethics of all of it, but what I found even sadder: the story made me curious to take a look at the actual piece he "investigated" with AI. It's this one (https://theshamblog.com/an-ai-agent-published-a-hit-piece-on...). Btw, it's a bit more than 1k words, which takes the average American reader, never mind a senior journalist, ~5 minutes to read.

This whole story involved asking Claude to mine that text for quotes, which it refused to do because the text included harassment-related content, then asking ChatGPT to explain that, and so on.

That entire ordeal probably generated more text from the chatbots than the few paragraphs of the blog post itself. That's why I think the "I'm sick" angle doesn't matter much. This is the same brainrot as people who go "grok what does this mean" under every twitter post. It's like a schoolchild who cheats and expends more energy cheating than it would have taken to just learn the material.

protocolture 37 minutes ago | parent | prev | next [-]

>The Condé Nast-owned Ars Technica

I despise Conde Nast

Revanche1367 3 hours ago | parent | prev | next [-]

So the original blogger got slandered by an LLM agent, then got slandered again by a human journalist who used an LLM agent to write the article about him getting slandered by an LLM agent? How ironic.

But, does that mean he got slandered twice by an LLM agent or once by an agent and once by a human? Or was he technically slandered 3 times? Twice by agents and a third time by the journalist? New questions for the new agentic society.

sparky_z 2 hours ago | parent | next [-]

He was only slandered once, by the LLM agent. The Ars Technica article presented paraphrases that it falsely attributed as direct quotes, and was therefore factually incorrect reporting. But it was not defamatory by any reasonable standard. Slander isn't just a synonym for "lie".

Revanche1367 4 minutes ago | parent [-]

I wasn’t using the word in a legal sense, poindexter. I didn’t pretend to be a lawyer either. Slander in the colloquial sense is whatever the person doesn’t want attributed to them, and it is often used as a synonym for a lie.

Besides, I am sure you could tell it was just a joke but needed to be pedantic for no reason other than to feel smart?

zarflax 2 hours ago | parent | prev | next [-]

No, the journalist came in and slandered the LLM Twice and Jim Fell.

gdulli an hour ago | parent [-]

"Who are you, and how did you lose your job?"

"I'm an AI reporter. And, I'm an AI reporter."

amstan 2 hours ago | parent | prev [-]

4 times, you forgot the owner of the bot that did the PR.

Revanche1367 2 minutes ago | parent [-]

Indeed, you’re right.

add-sub-mul-div 3 hours ago | parent | prev | next [-]

> senior AI reporter

A true "senior" AI reporter should be more skeptical of LLM output than anyone else.

zmmmmm 3 hours ago | parent | next [-]

I think that's the nail in the coffin. Most others could say it was a giant whoopsie, but here it goes to the heart of their credibility. How could they continue to write authoritatively about AI, having done this?

amarant 3 hours ago | parent | prev [-]

I dunno. If AI doesn't write your articles, are you even an AI reporter?

Sorry, I never could resist a good dad joke

sl0pmaestro 2 hours ago | parent | prev | next [-]

> while working from bed with a fever and very little sleep," he "unintentionally made a serious journalistic error" as he attempted to use an "experimental Claude Code-based AI tool" to help him

Oh right, being ill is what caused the error. I can bet that if you start verifying this author's past content, you will see similar AI slop. Either that, or he has always been ill with very little sleep.

jackyli02 2 hours ago | parent | prev | next [-]

The role "reporter" deserves very little credence in AI now. The public might be better off if they get their information on AI from ChatGPT.

3eb7988a1663 an hour ago | parent [-]

The core story is literally about how AI made up facts. The solution is more of the same?

jmyeet 2 hours ago | parent | prev | next [-]

The crazy part to me is that even here on HN there are people who still insist that LLMs don't fabricate things or otherwise lie.

I wonder if these are the same people who 3-4 years ago were insisting putting 20 characters onto a blockchain (ie an NFT, which was just a URL) was the next multi-billion dollar business.

Sure, there is such a thing as a naysayer, but there are also people who think all forms of valid criticism are just naysaying.

protocolture 32 minutes ago | parent | next [-]

>I wonder if these are the same people who 3-4 years ago were insisting putting 20 characters onto a blockchain (ie an NFT, which was just a URL) was the next multi-billion dollar business.

The NFT protocol doesn't really care what the payload is. NFT purveyors likewise don't care what their payload is, as long as they can use the term "NFT".

NFTs are great for certain use cases (CryptoKitties is still around, I believe), but there was never a single moment I considered that owning a weird ape JPEG, even if it was somehow properly owned by me, would be worth millions of dollars or whatever. It's like trying to sell a "TCP".

That said, future blockchain applications will probably still rely on NFTs in some fashion. Just not the protocol-as-product weirdness we got for a few years there.

weird-eye-issue an hour ago | parent | prev | next [-]

I've never seen anyone here claim that AI never hallucinates or can provide incorrect information.

gertop an hour ago | parent | prev [-]

I've not heard many people claim that LLMs don't hallucinate, however I have seen people (that I previously believed to be smart):

1. Believe LLMs outright even knowing they are frequently wrong

2. Claim that LLMs making shit up is caused by users not prompting them correctly. I suppose in the same way that C is memory safe and only bad programmers make it not so.

neya 2 hours ago | parent | prev | next [-]

[flagged]

dang 2 hours ago | parent | next [-]

Would you please stop breaking the site guidelines? I just had to ask you this in a different context.

You may not owe your least favorite publications better, but you owe this community better if you're participating in it.

https://news.ycombinator.com/newsguidelines.html

neya 23 minutes ago | parent | next [-]

> I just had to ask you this in a different context.

Sorry, I just searched my comment history, maybe I missed it? Was it recent?

kittikitti 2 hours ago | parent | prev [-]

"Don't feed egregious comments by replying; flag them instead."

You probably wish everyone would post as bots do, without em—dashes of course.

dang 2 hours ago | parent [-]

Sorry but I don't follow

apparent 2 hours ago | parent | prev [-]

Can you elaborate? Perhaps I haven't noticed that they push pro-sponsored content (what does this mean, exactly?). I do find their comment section to be pretty lousy, and very partisan. But the tech coverage always seemed fair enough. What am I missing?

neya 21 minutes ago | parent [-]

If you feed their articles into a Python script that identifies biases, subtle upsells, and advertorials, you will see that a bunch of it is just promotional marketing for certain companies. They also almost never report the news, just opinions about it.

ab_testing 3 hours ago | parent | prev [-]

So they fired that author after the author had publicly apologized on Bluesky.

somenameforme 3 hours ago | parent | next [-]

He was supposed to be their "Senior AI Reporter." His including basically anything from LLMs in articles without verifying it demonstrates not only a complete lack of credibility as a writer, but also a complete lack of understanding of AI. Even if they might have personally wanted to keep him on, you just can't after something like this.

bingaweek 3 hours ago | parent | prev | next [-]

What is the connection between these two statements? Are we supposed to presume that someone who apologizes on Bluesky should never be fired? Or did you also read the article and thought this was important information?

landl0rd 2 hours ago | parent | prev | next [-]

The raison d'être for the journalist, in AD 2026, is less to gather information than to verify it. The journalist who cannot be trusted is no journalist at all. He is a blogger.

danso 3 hours ago | parent | prev | next [-]

Why would apologizing for plagiarism and fabrication preclude you from facing sanctions for plagiarism and fabrication?

skygazer an hour ago | parent [-]

Is it “plagiarism” to misattribute hallucinated quotes? Not that a whole lot of sloppy, unprofessional shortcuts weren’t taken, but plagiarism doesn’t seem like the right word, as quotes are almost definitionally not plagiarism. But maybe these were paraphrasings masquerading as quotes, so maybe that’s the difference.

gdulli an hour ago | parent [-]

"Slop" and "hallucinate" have meanings outside of AI too, but it's easier to repurpose existing words than come up with a whole new lexicon for AI failure modes.

coldtea 3 hours ago | parent | prev | next [-]

"Apologized on Blue Sky" is absolutely no reason to keep them. The author did the absolutely worst things a journalist can do (short of actual corruption) and is unfit for the job:

- He didn't care about his story,

- he didn't care to verify his story,

- he published made-up bullshit,

- he put words in a real person's mouth,

- and he didn't even care to write the thing himself.

Why keep him and pay him? What mentality does all of the above show? What respect, both self-respect and respect for the job?

If they wanted stories from an LLM, they can pay for a subscription to one directly.

Hope this sends a message to journalist hacks who offload their writing or research to an LLM.

bigyabai 3 hours ago | parent | prev | next [-]

Can you name any other way for Ars Technica to handle this situation without permanently soiling their reputation?

Marsymars an hour ago | parent [-]

That's the thing. I feel kinda bad for Benj, I don't wish him ill, and maybe he keeps writing on his own site and/or other places, but I don't see any way that he could have kept writing for Ars.

bandrami 2 hours ago | parent | prev [-]

That absolutely should be career-ending for a journalist, apology or no