the_af 8 hours ago

They cannot even claim they weren't aware of the danger. LLM hallucinations have been a discussed topic, not some obscure failure mode. Almost every article on problems with AI mentions this.

So the judge was lazy, incompetent, or both.

ghywertelling 5 hours ago | parent | next [-]

Or she was conniving like Skyler in Breaking Bad when she convinced the investigator that she got hired because she seduced the owner.

nerdjon 6 hours ago | parent | prev | next [-]

I do think that for this particular situation we need to step outside of our tech bubble a little bit.

I am still having regular conversations with people who either don't know about hallucinations or think they are not a big problem. There is a ton of money behind these companies pushing the message that their tools are reliable, and it's working for the average user.

I mean, there are people who legitimately think these tools are conscious or that we already have AGI.

So I am not sure I would be so quick to attack the judge, given the marketing we are up against.

the_af 6 hours ago | parent [-]

I find it hard to believe that people who use AI haven't read a single article about AI. And if that were true, it would also disqualify this judge.

This exceeds the tech bubble.

My local newspaper, completely clueless about tech, runs an article about AI trouble, hallucinations and whatnot every other week. It misses most of the nuance, of course, but my point is that this has entered the public discourse.

nerdjon 5 hours ago | parent [-]

It may have entered public discourse, but it is not being talked about nearly as much outside of tech spaces, and we are up against companies pushing the complete opposite narrative.

All I can say is that I regularly have conversations with non-technical people who are either not aware of the issue or think it is largely solved.

lukan 8 hours ago | parent | prev [-]

Not just discussed, but explicitly mentioned under every chat interface: "This tool can make mistakes."

(Sure, more honest would be "this tool makes stuff up in a convincing way")

amrocha 5 hours ago | parent [-]

It’s well understood that humans do not instinctively grasp statistics, are bad at knowing when they’re being lied to, and are hardwired to take shortcuts.

AI companies gave everyone a button that does their job for them 99.9% of the time. And then 0.1% of the time it gets them fired. That’s irresponsible, no matter how many disclaimers you add to the bottom of the screen.