the_af 8 hours ago
They cannot even claim they weren't aware of the danger. LLM hallucinations have been a widely discussed topic, not some obscure failure mode; almost every article on problems with AI mentions them. So the judge was lazy, incompetent, or both.
ghywertelling 5 hours ago
Or she was conniving, like Skyler in Breaking Bad when she convinced the investigator that she only got hired because she seduced the owner.
nerdjon 6 hours ago
I do think that for this particular situation we need to step outside of our tech bubble a little bit. I am still having regular conversations with people who either don't know about hallucinations or think they are not a big problem. There is a ton of money in these companies pushing the idea that their tools are reliable, and it's working for the average user. I mean, there are people who legitimately think these tools are conscious, or that we already have AGI. So I am not sure I would be too quick to attack the judge, given the marketing we are up against.
lukan 8 hours ago
Not just discussed, but explicitly noted under every chat interface: "This tool can make mistakes." (Sure, more honest would be "this tool makes stuff up in a convincing way.")
| |||||||||||||||||