codegladiator 2 days ago

> She had no intention to misquote or misrepresent the rulings and that "the mistake occurred solely due to the reliance on an automatic source", the high court wrote

I don't think intention matters here. It's the same deal with every profession using LLMs to "automate" their work. The onus is on the professional, not the LLM. Otherwise the Ars Technica case could have been justified the same way.

Not knowing the law isn't an excuse to break the law, so why is not knowing the tool an excuse to blame the tool?

fidotron 2 days ago | parent | next [-]

Using an LLM to automate is simply the newer cheaper outsourcing with much of the same entertainment, but less food poisoning and air travel.

Over the last 20 years a lot of engineering (proper eng, not software) work in the West has been outsourced to cheaper places, with the certified engineers simply signing off on work done elsewhere. This creates a cycle of doing things ever faster and more cheaply, with safeguards disappearing under the pressure to keep going cheaper and faster.

As someone else pointed out, LLMs have just really exposed what a degraded state we have headed into rather than being a cause of it themselves. It's going to be very tough for people with no standards - they'll enjoy cheap stuff for a while and then it will all go away. Surprised Pikachu faces all round.

(I'm pro AI btw, just be responsible.)

Sharlin 2 days ago | parent | next [-]

LLMs also solve the timezone and language challenges. Sadly one problem that remains is that they too tell you they have understood something even if they haven't.

mtrovo 2 days ago | parent | prev [-]

At least that's the story the LLM labs' leaders want to tell everyone; it just happens to be a very good story if you want to hype your valuation before investment rounds.

Working with LLMs on a daily basis, I'd say that's not happening, at least not the way they're trying to sell it. You can get rid of five vendor headcount executing a manual process that should have been automated 10 years ago; you're not automating processes involving highly paid people where a 1% error chance could cost you $10M+ in fines or jail time.

The day I see Amodei or Sam flying on a vibe-coded airplane is the day I'll believe what they're saying.

fidotron 2 days ago | parent [-]

Aero software people are not highly paid. It's a travesty.

kingstnap 2 days ago | parent | prev | next [-]

> excuse to blame the tool

The issue is that ultimately, blaming people doesn't really solve things, unless it's genuinely a one-of-a-kind case. But if this happened once it's probably going to happen again, and this isn't the first such case of LLM hallucinations in law.

It's counterintuitive, because it's easy to point at a person in any specific instance. But when you see something repeat over and over, and your ultimate goal is to stop it from happening, you have to adjust the tools, even if the people using them were at fault in every case.

2 days ago | parent [-]
[deleted]
RobotToaster 2 days ago | parent | prev | next [-]

Intentionality normally has to be taken into account in common law countries.

That doesn't mean she hasn't done something wrong, but obviously it's more serious to do something intentionally than it is to do it carelessly or recklessly.

2 days ago | parent | prev | next [-]
[deleted]
the_af 2 days ago | parent | prev | next [-]

They can't even claim they weren't aware of the danger. LLM hallucinations are a widely discussed topic, not some obscure failure mode. Almost every article on problems with AI mentions them.

So the judge was lazy, incompetent, or both.

ghywertelling 2 days ago | parent | next [-]

Or she was conniving, like Skyler in Breaking Bad when she convinced the investigator that she got hired because she seduced the owner.

nerdjon 2 days ago | parent | prev | next [-]

I do think that for this particular situation we need to step outside of our tech bubble a little bit.

I am still having regular conversations with people who either don't know about hallucinations or think they're not a big problem. There is a ton of money behind these companies pushing the idea that their tools are reliable, and it's working on the average user.

I mean, there are people who legitimately think these tools are conscious, or that we already have AGI.

So I'm not sure I would be too quick to attack the judge, given the marketing we're up against.

the_af 2 days ago | parent [-]

I find it hard to believe that people who use AI haven't read a single article about AI. If it were true, that alone would disqualify this judge.

This exceeds the tech bubble.

My local newspaper, completely clueless about tech, runs an article about AI trouble, hallucinations and whatnot every other week. Completely missing most of the nuances, of course, but my point is that this has entered the public discourse.

nerdjon 2 days ago | parent [-]

It may have entered public discourse but it is not being talked about as much outside of tech spaces, and we are up against the companies pushing the complete opposite narrative.

All I can say is that I regularly have conversations with non-technical people who are either unaware of the issue or think it's largely solved.

lukan 2 days ago | parent | prev [-]

Not just discussed, but explicitly mentioned under every chat interface: "This tool can make mistakes."

(Sure, more honest would be "this tool makes stuff up in a convincing way")

amrocha 2 days ago | parent [-]

It’s well understood that humans do not instinctively grasp statistics, are bad at knowing when they’re being lied to, and are hard wired to take shortcuts.

AI companies gave everyone a button that does their job for them 99.9% of the time. And then 0.1% of the time it gets them fired. That’s irresponsible, no matter how many disclaimers you add to the bottom of the screen.

hypeatei 2 days ago | parent | prev | next [-]

This is why LLMs won't replace humans wholesale in any profession: you can't hold a machine accountable. Most of my chatbot experiences with various support channels end up requiring human intervention anyway once money is involved.

Maybe true general intelligence would solve these issues, but LLMs aren't meeting that threshold anytime soon, imo. Stochastic parrots won't rule the world.

direwolf20 2 days ago | parent | next [-]

This is exactly why LLMs will replace humans: even if the work is crap, nobody will be accountable for the crap work, and it saves money.

delecti 2 days ago | parent [-]

Work where "crap" is an acceptable level of quality is work that probably doesn't need to be done.

So I think it's more likely that LLMs unravel the "bullshit jobs" entirely, rather than replacing them with crap. Once people realize it didn't matter if the output sucked, they'll realize the output wasn't needed in the first place.

lazide 2 days ago | parent | prev [-]

Even ‘true general intelligence’ (if we count humans as that) screws up frequently, sometimes (often?) intentionally for its own benefit, which is why accountability is such a necessary element.

If someone won’t be held liable for the end result at some point, then there is no reason to ensure an even somewhat reasonable end result. It’s fundamental.

Which is also why I suspect so many companies are pushing ‘AI’ so hard - to be able to do unreasonable things while having a smokescreen to avoid being penalized for the consequences.

hypeatei 2 days ago | parent [-]

> to be able to do unreasonable things while having a smokescreen

Maybe, but I feel like the calculus remains unchanged for professions that already lack accountability (police, military, C-suite, three letter agencies, etc.); LLMs are yet another tool in their toolbox to obfuscate but they were going to do that anyway.

Peons will continue to face consequences and sanctions if they screw up by using hallucinated output.

lazide 2 days ago | parent [-]

All of those professions definitely have accountability, per the nominal rules of the system. Often extremely severe accountability.

The actual systems do everything they can to avoid that accountability, including often violating the rules themselves, or corrupting enforcement, for exactly the reasons why corporations are trying to avoid accountability too.

Accountability is expensive, and way less convenient than doing whatever you want whenever you want.

delaminator 2 days ago | parent | prev [-]

> Not knowing the law isn't an excuse to break the law,

Yeah, about that ...

https://metro.co.uk/2016/07/03/rapist-struck-again-after-dep...

> A Somalian rapist who had his deportation overturned went on to rape two more women after he was freed.

> But he had his deportation overturned after serving his time because he didn’t know it was unacceptable in the UK.