AI on Australian travel company website sent tourists to nonexistent hot springs(cnn.com)
82 points by breve 6 hours ago | 36 comments
0xC0ncord 3 hours ago | parent | next [-]

>Scott Hennessey, the owner of the New South Wales-based Australian Tours and Cruises, which operates Tasmania Tours, told the Australian Broadcasting Network (ABC) earlier this month that “our AI has messed up completely.”

To me this is the real takeaway for a lot of these uses of AI. You can put in practically zero effort and get a product. Then, when that product flops or even actively screws over your customers, just blame the AI!

No one is admitting it but AI is one of the easiest ways to shift blame. Companies have been doing this ever since they went digital. Ever heard of "a glitch in the system"? Well, now with AI you can have as many of those as you want, STILL never accept responsibility, and if you look to your left and right, everyone is doing it, and no one is paying the price.

benjedwards an hour ago | parent | next [-]

Yes, it's a big problem. I call it "agency laundering" and I first mentioned it in this article last year: https://arstechnica.com/information-technology/2025/08/is-ai...

Treating AI models as autonomous minds lets companies shift responsibility for tech failures.

flakeoil 33 minutes ago | parent | prev | next [-]

> No one is admitting it but AI is one of the easiest ways to shift blame.

Similar to what Facebook, Google, Twitter/X, TikTok, etc. have been doing for a long time with the platform excuse: "We are just a platform. We are not to blame for all this illegal or repugnant content. We do not have the resources to remove it."

yojo an hour ago | parent | prev | next [-]

It sounds like in this case there was some troll-fueled comeuppance.

> “We’re not a scam,” he continued. “We’re a married couple trying to do the right thing by people … We are legit, we are real people, we employ sales staff.”

> Australian Tours and Cruises told CNN Tuesday that “the online hate and damage to our business reputation has been absolutely soul-destroying.”

This might just be BS, but at face value, this is a mom-and-pop shop that screwed up playing the SEO game and is getting raked over the internet coals.

Your broader point about blame-washing stands though.

ambicapter 36 minutes ago | parent | next [-]

That's the thing about scammers, they operate in plausibly deniable ways, like covering up malice with incompetence. They make taking things at face value increasingly costly for the aggrieved.

scblock 27 minutes ago | parent | prev [-]

No, this is earned. They chose to do this, to publish lies, and have to live with the consequences.

pjc50 2 hours ago | parent | prev | next [-]

There's a book, "The Unaccountability Machine", that HN may be interested in. It takes a much broader approach across management systems.

stuaxo an hour ago | parent | prev | next [-]

Commercial enterprises seem designed to launder responsibility; this is perhaps the ultimate version of that system.

nicbou 2 hours ago | parent | prev | next [-]

I hope that this will result in people paying a premium for human curation and accountability, but I won't hold my breath.

ehnto 3 hours ago | parent | prev [-]

I somewhat disagree, because at the end of the day he still has to take responsibility for the fuckup, and that will matter in terms of dollars and reputation. I think this is also why a lot of roles just won't speed up that much: the bottleneck will be verification of outputs, because it is still the human's job on the line.

An on-the-nose example: if your CEO asked you for a report and you delivered fake data, do you think he would be satisfied with the excuse that the AI got it wrong? Customers are going to feel the same way. AI or human, you (the company, the employee) messed up.

caminante 2 hours ago | parent | next [-]

> dollars and reputation

You're not already numb to data breaches and token $0.72 class action payouts that require additional paperwork to claim?

In this article, these people did zero confirmatory diligence and got an afternoon side trip out of it. There are worse outcomes.

add-sub-mul-div 2 hours ago | parent | prev [-]

> if your CEO asked you for a report, and you delivered fake data, do you think he would be satisfied with the excuse that AI got it wrong?

He was likely the one who ordered the use of the AI. He won't fire you for mistakes in using it because it's a step on the path towards obsoleting your position altogether or replacing you with fungible minimum wage labor to babysit the AI. These mistakes are an investment in that process.

He doesn't have to worry about consequences in the short term because all the other companies are making the same mistakes and customers are accepting the slop labor because they have no choice.

merelysounds an hour ago | parent | prev | next [-]

In case anyone else is curious, I just entered the following in chatgpt: "Without searching the internet, do you know how to get to weldborough hot springs?"

> Yeah—roughly, from general local knowledge (no web searching, promise). I’ll flag where my memory might be fuzzy.

> Weldborough Hot Springs are in northeast Tasmania, near Weldborough Pass on the Tasman Highway (A3) between Scottsdale and St Helens.

Screenshot with more: https://postimg.cc/14TqgfN4

doodpants 2 hours ago | parent | prev | next [-]

“our AI has messed up completely.”

No, it worked as designed. Generative AI simply creates content of the type that you specify, but has no concept of truth or facts.

simianwords 10 minutes ago | parent [-]

This is incorrect. It has a concept of truth and facts.

pjc50 3 hours ago | parent | prev | next [-]

New variant on "I followed my satnav blindly and now I'm stuck in the river", except less reliable.

It is however fraud on the part of the travel company to advertise something that doesn't exist. Another form of externalized cost of AI.

buran77 3 hours ago | parent | next [-]

> It is however fraud on the part of the travel company to advertise something that doesn't exist

Just here to point out that from a legal perspective, fraud is deliberate deception.

In this case a tourist agency outsourced the creation of their marketing material to a company who used AI to produce it, with hallucinations. From the article it doesn't look like either of the two companies advertised the details knowing they're wrong, or had the intent to deceive.

Posting wrong details on a blog out of carelessness, without deliberate ill intent, is no more fraud than using a wrong definition of fraud is fraud.

tantalor 2 hours ago | parent | next [-]

The standard is to add disclaimers like "AI responses may include mistakes." The chatbot they used to generate that text would have mentioned that.

Everybody knows AI makes stuff up. It's common knowledge.

To omit that disclaimer, the author needs to take responsibility for fact checking anything they post.

Skipping that step, or leaving out the disclaimer, is not carelessness, it is willful misrepresentation.

buran77 23 minutes ago | parent [-]

> To omit that disclaimer, the author needs to take responsibility for fact checking anything they post.

> Skipping that step, or leaving out the disclaimer, is not carelessness, it is willful misrepresentation.

Couldn't help but notice you gave some very convincing legal advice without any disclaimer that you are not a lawyer, a judge, or an expert on Australian law. Your own litmus test characterizes you as a fraudster. The other mandatory components of fraud (knowledge, intention, damages) don't even apply, you said so.

Australian law isn't at all weird about this. Their definition (simplified) pivots on intentional deception, to obtain gains or to cause loss to others, knowing the outcome.

tantalor 22 minutes ago | parent [-]

(IANAL)

f33d5173 2 hours ago | parent | prev [-]

There has to be a clause for "willful disregard for the truth", no? Having your lying machine come up with plausible lies for you and publishing them without verification is no better than coming up with the lies yourself. What really protects them from fraud accusations is that these blog posts were just content marketing, they weren't making money off of them directly.

direwolf20 an hour ago | parent [-]

And using autocomplete to write travel advertisements has to fall under this category?

Lerc 3 hours ago | parent | prev | next [-]

Seems like closer to fraud on behalf of the marketing company they outsourced to.

I doubt they commissioned articles on things that don't exist. If you use AI to perform a task that someone has asked you to do, it should be your responsibility to ensure that it has actually done that thing properly.

alpinisme 3 hours ago | parent | prev [-]

The consequences for wrong AI need to be a lot higher if we want to limit slop. Of course, there’s space for LLMs and their hallucinations to contribute meaningful things, but we need at least a screaming all-caps disclaimer on content that looks like it could be human-generated but wasn’t (and absent that disclaimer, or if the disclaimer was insufficiently prominent, false statements are treated as deliberate fraud).

verytrivial 42 minutes ago | parent | prev | next [-]

I binged ST:TNG before it went away again on Netflix. The more I heard from Data, the more he sounded like where AI should be heading: quick, thorough reasoning, followed by explicit, tagged verification against external ground truth.

There needs to be a more meta, layered approach to reasoning: different personalities viewing the output with different hats on. "That's a bold claim, champ. Search required." But I guess the current real-time, interactive nature of these systems makes it difficult to justify.

mettamage 9 minutes ago | parent | prev | next [-]

This is why I don't really believe in agentic AI.

Not with the current state of technology. I haven't seen that it works yet. It requires supervision.

It's funny, back in the day computer calculations were checked with human computers. But now? Just trust it bro.

voidUpdate 2 hours ago | parent | prev | next [-]

How often do you have to update your page on "what's in a town" to "compete with the big boys"? Seems like you could just google what's in the town, or visit if you really want to make sure, rather than just asking your favourite LLM "What's there to do in Weldborough"?

lm28469 an hour ago | parent | next [-]

> Seems like you could just google what's in the town

You'll still get an AI-generated answer at the top, followed by 3 AI-generated sponsored blog scams, etc.

zwog 2 hours ago | parent | prev | next [-]

You probably need to update every now and then because of SEO and such.

nicbou an hour ago | parent | prev [-]

The goal is to attract search traffic to your page, so that you can promote your product or your brand. AI is making this a lot cheaper than before because you don't even need to create the content, but it's also killing the overall amount of traffic to all websites.

If you actually take pride in your work, it's a double whammy of competing with AI slop and losing over half of your traffic to AI summaries.

Useful independent websites are so cooked.

metalman 2 hours ago | parent | prev | next [-]

has anyone checked to see if the AI included time coordinates as well? it might be that the AI is misunderstanding our temporal limitations, and if prompted correctly will provide a handy portal to a time when there will, in fact, be hot springs at the suggested location.

testing22321 an hour ago | parent [-]

It seems very likely that if you go back far enough, the region was very hot. Something around 4.5 billion years should do it.

jmyeet an hour ago | parent | prev | next [-]

I love stories like this because there are still allegedly tech-savvy people who will insist that AIs don't lie, don't hallucinate and rarely if ever make errors.

At the end of the day, LLMs are a statistical approximation or projection.

A good example of this is how LLMs struggle with multiplication, particularly multiplication of large numbers. It's not just that they make mistakes, but the nature of the results.

Tell ChatGPT to multiply 129348723423 and 2987892342424 and it'll probably get it wrong, because nowhere on Reddit is that exact question for it to copy. But what's interesting is that it'll tend to get the first and last digits correct (more often than not), while the middle is just noise.
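(For reference, the exact product is trivial to get outside the model; any language with big-integer support will do. A quick Python sketch, purely to show the ground truth the LLM is approximating:)

```python
# Ground truth for the multiplication example above.
# Python ints are arbitrary-precision, so this is exact.
a = 129348723423
b = 2987892342424
product = a * b
print(product)                      # the exact product
print(f"{len(str(product))} digits")  # a 24-digit number
```

The point of the comparison: the model has to pattern-match a 24-digit answer token by token, while any calculator produces it exactly.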

Someone will probably say "this is a solved problem" because somebody, somewhere has added this capability to a given LLM, but these kinds of edge cases will, I think, constantly expose the fundamental limits of transformers, just like the famous "how many r's in strawberry?" example that did the rounds.

All this comes up when you tell LLMs to write legal briefs. They completely make up a precedent because they learn what a precedent looks like and generate something similar. Lawyers have been caught submitting fake precedents in court filings due to this.

simianwords 11 minutes ago | parent [-]

> Tell ChatGPT to multiply 129348723423 and 2987892342424 and it'll probably get it wrong because nowhere on Reddit is that exact question for it to copy. But what's interesting is it'll tend to get the first and large digits correct (more often than not) but the middle is just noise.

People have no idea how capable LLMs are and confidently write these kinds of things.

nephihaha 5 hours ago | parent | prev | next [-]

Weldborough seems to have done well out of it either way.

re-thc 3 hours ago | parent | prev [-]

Australia has drop bears anyhow. Do they exist?

Seems par for the course.