crop_rotation 4 hours ago

HN is full of people saying ABCD should know better, and honestly I thought the same, but when I look at almost all of my friends working in critical domains (judges, engineers, lawyers, even doctors), they seem to trust ChatGPT more or less blindly. People get defensive when I point out to them that ChatGPT will make things up and that this is widely known; some even tell me it is the fault of "tech people" for not fixing it, and that they can't be expected to double-check every ChatGPT conversation. So I am very sure this problem is more prevalent than what we see, and that it is going to keep growing.

WarmWash 3 hours ago | parent | next [-]

Every single person, every one of them, that I have watched google something since AI Overviews launched will instantly reference the AI overview. And that model is some bottom-rung, high-volume model, not even Gemini.

jacquesm an hour ago | parent [-]

The best way to deal with that is to kick the AI overview off using your browser.
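One concrete way to do this (assuming Google still honors the `udm=14` "Web results" URL parameter, which is an observation about current behavior, not anything from the comment above, and not guaranteed to last) is to set your browser's default search engine to a URL like:

    https://www.google.com/search?q=%s&udm=14

Here `%s` is the placeholder most browsers use for the typed query, and `udm=14` requests the plain "Web" tab, which skips the AI overview. Content-blocker filter rules can achieve the same effect cosmetically.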

jacquesm an hour ago | parent | prev | next [-]

Yes, this is the problem. If you give people something that has an oracular interface, they will treat it like an oracle.

andrewflnr 4 hours ago | parent | prev | next [-]

Your friends should know better. That their behavior is prevalent does not contradict that.

coffeefirst an hour ago | parent | next [-]

This answer really isn’t good enough. The providers can’t simultaneously aim to replace search, claim PhD-level intelligence that will do all the jobs, and then hide behind “it makes mistakes” in small print.

andrewflnr 26 minutes ago | parent [-]

I'm not making excuses for the providers either. But seeing through the inflated claims of commercial service providers is not a new skill.

crop_rotation 3 hours ago | parent | prev [-]

Yes, and the world should be a utopia, and everyone should be happy, and we all wish for world peace, and yada yada yada. What you are saying is a vision of an ideal world as it should be, but it doesn't help anyone understand the real-world problem.

andrewflnr 3 hours ago | parent | next [-]

You can't seriously compare the problem of world peace with the problem of exercising the most basic level of critical thinking w.r.t. LLM output after it has already proven itself unreliable. That's not a utopian dream, it's a level of prudence on par with not sticking a fork in an electrical socket.

ffsm8 3 hours ago | parent | next [-]

You're seriously overestimating the average person's ability to understand what LLMs are.

Look at all the influencers, streamers, and podcasters constantly asking them things and taking the answers as fact, live.

Isn't The Joe Rogan Experience the most-watched podcast or something? In every episode I've stumbled upon, he "fact checks" multiple things via their sponsor, which is just an LLM provider specializing in news.

People aren't good at statistics. If something is close enough to the truth enough times, and talks authoritatively on everything in good English... guess what, they're going to trust it.

andrewflnr 23 minutes ago | parent [-]

You don't need to know how an LLM works to realize "sometimes the magic ChatGPT box tells me wrong things". Even if you fully fall for the anthropomorphism, this only requires the same level of awareness as realizing, after the third or fourth thing your weird uncle tells you turns out not to be true, that maybe you shouldn't take him at his word.

jacquesm an hour ago | parent | prev | next [-]

I would happily bet that you too have fallen for this at least once. Unless you cut AI out of your life completely and do not interact with others.

AI output is like that COVID-era video of contamination spreading: you almost can't avoid it unless you scrupulously check each and every thing presented to you as fact. And absolutely nobody does that.

andrewflnr 19 minutes ago | parent [-]

> Unless you cut AI out of your life completely

Pretty close. I only touched ChatGPT a couple of times a few years ago and haven't used the others (on purpose, at least; Google forces its Gemini summaries on me, but I mostly avoid them, because, um, see above).

> and do not interact with others.

Most people I interact with are on the same page about AI. But I try to keep my critical thinking online anyway, like I always have. If someone tried to feed me AI slop, I would consider that person to have betrayed my trust and would, to put it gently, try to interact with them less.

philipov 2 hours ago | parent | prev | next [-]

You may demand that of yourself, but for others we must design around the fact that they are stupid. You do not have the power to change their stupidity, only your response to it.

andrewflnr 9 minutes ago | parent [-]

Indeed. I'm not sure why you think that's responsive to my post. I'm mostly pointing out just how deeply stupid they are.

Though if you have a useful response besides "weather the storm while everyone else learns the hard way", I'm listening.

bryanrasmussen 3 hours ago | parent | prev [-]

yes, but the electrical socket in question is a fairly new-fangled one, and who doesn't want to fork-test it a bit?


friedtofu 3 hours ago | parent | prev | next [-]

I think this is an issue with anyone who relies on any LLM. But yeah, I agree, and I have had similar issues where someone will get defensive because they just don't want to admit that they (or rather, the LLM's response) were wrong. It's hard to tell someone in a "nice/nonchalant" way:

"It's fine, the LLM just lied to you, but hallucinations and making claims based off of assumptions is just something they do and always have done!"

People don't like to feel dumb, and they don't want to feel betrayed by the same tool that gave them incredible, factually correct results that one time, only to give them complete and utter bullshit (that sounded legitimate) another time.

Also, yeah, it feels like it's everywhere these days and isn't showing any signs of slowing down (visited my parents, and my dad's using Siri to ask ChatGPT stuff now. URGHHHH), and I really hope we're both wrong.

joe_mamba 4 hours ago | parent | prev | next [-]

>but when I look at almost all of my friends working in critical domains like as a judge or engineer or lawyer or even doctor, they seem to trust ChatGPT more or less blindly

That's why I lost trust and faith in people who end up in positions like doctor, lawyer, or judge. When I was young I used to think they must be the smartest, highest-IQ people in society, having read the most books and having the highest levels of critical thinking and debate skills ever. When in fact they were only good at memorizing and regurgitating the information the school required to pass the exam that gave them that prestigious title, and that's it.

Now, in my mid-30s, when I talk to people from these professions over a beer, at a barbecue, or at any other casual gathering, I realize they're really not that sharp or well read or immune to propaganda and misinformation, and that anyone could be in their place if they had put in the grind at the right time. It's a miracle our society functions at all.

pessimizer 2 hours ago | parent | prev | next [-]

> almost all of my friends working in critical domains like as a judge or engineer or lawyer or even doctor, they seem to trust ChatGPT more or less blindly.

We do not live in a meritocracy, because society has no means to judge merit. We live in a society ruled by people who crammed before the tests and who wrote their papers to agree with and flatter the teacher. Now they are the teachers (and bosses), and

1) expect to be flattered (and LLMs have been built as the ultimate flatterers),

2) feel that a good, ambitious student (or subordinate) will not question them and their work, but instead learn to conform to it, and

3) are not particularly interested in the quality of their work as such, but rather in the acceptance of their work. In certain professions (judges, doctors, high-level lawyers and engineers, politicians), they feel, with good reason, that they can demand acceptance of their work and punish those who don't accept it.

This position is what they worked so hard for as young people. They were not working to become the best at their jobs; they were working to get the most secure jobs. The most secure jobs are the ones that bad or lazy work doesn't endanger.

doctorpangloss 4 hours ago | parent | prev [-]

On the flip side, so much ChatGPT usage, full of flaws, doesn't seem to really matter in various "critical domains." You can't generalize "critical."