rvnx 4 hours ago

These safeguards are a very bad habit. These "safety" filters are counter-productive and can even be dangerous.

Where I live, for example, a lot of doctors use ChatGPT both to search for diagnoses and to communicate with non-English-speaking patients.

The same goes for you, when you want to learn about a disease, about real-world threats, statistics, self-defense techniques, etc.

Otherwise it's like blocking Wikipedia on the grounds that someone could use that knowledge to do harmful things, or read something that might change their mind.

Freedom to read about things is good.

NicuCalcea 4 hours ago | parent | next [-]

> a lot of doctors are using ChatGPT both to search diagnosis and communicate with non-English speaking patients

I think that's the problem. Who's going to claim responsibility when ChatGPT hallucinates or mistranslates a patient's diagnosis and they die? For OpenAI, this would at best be a PR nightmare, so that's why they have safeguards.

rvnx 3 hours ago | parent | next [-]

Adults bear responsibility for choices about their own lives. In fact, the more educated they are, the better the choices they can make.

A doctor who gets refused by ChatGPT doesn't stop needing to communicate with the patient; they fall back to a worse option (Google Translate, a family member interpreting, guessing). Refusal isn't safety, it's liability-shifting dressed up as safety.

If there's no doctor, no interpreter, no pharmacist, just a person with a sick kid and a phone, then "refuse and redirect to a professional" is advice from a world that doesn't exist for them. The refusal doesn't send them to a better option; there is no better option. And that describes a large majority of people on this planet.

The road to hell is paved with good intentions, but open education and unlimited access to knowledge are very good things.

It doesn't change human nature either: bad people stay bad, good people stay good.

About PR, they're optimizing for not being the named defendant in a lawsuit or the subject of a bad news cycle, it's self-interest wearing benevolence as a costume.

This is because harms from answering are punishable (bad PR; unhappy advertisers, investors, politicians and dictators, lobbies, armies, etc.), while harms from refusing are invisible and unpunished.

NicuCalcea 2 hours ago | parent | prev [-]

> A doctor who gets refused by ChatGPT doesn't stop needing to communicate with the patient; they fall back to a worse option

I think AI proves the contrary. There are plenty of examples of things getting worse because of technological advancement, particularly AI. Software quality, writing, and online discourse have all deteriorated over the last few years, and misinformation has spread. I truly believe the internet is a worse place than it was 5 years ago, and I can't imagine bringing that to medicine would work out differently.

The medical system shouldn't rely on falling back to crappy workarounds, it should aspire to build the best system it reasonably can.

hellohello2 4 hours ago | parent | prev [-]

The doctor would be responsible.

If I had a choice between a doctor that used AI and one that didn't, I would much prefer the one that did...

NicuCalcea 3 hours ago | parent [-]

The doctor would be responsible for the accuracy of their translation tool, something they can't verify but you expect them to use?

lacunary an hour ago | parent | next [-]

"What you see is all there is." It's generally much easier to verify something you've been made aware of than it is to know of it in the first place (and still verify it).

rvnx an hour ago | parent [-]

The irony is that licensed interpreters/translators usually perform worse than AI.

Only the liability shifts from OpenAI to them.

Furthermore, where the alternative to a licensed professional is nothing, a random untrained person, or a weak professional, the refusal harms the user on the pretext of protecting them (as in the other contexts mentioned).

rvnx 3 hours ago | parent | prev [-]

What's the alternative, then?

-> You are in China, you go to the emergency room, and nobody speaks your language.

Gesture with your hands? DeepSeek is better than hand gestures, and so is Baidu Translate, ChatGPT, or whatever else you can find.

Other solutions sound nice on paper but are almost delusional in practice.

An imperfect solution is better than no solution.

==

Similarly, a deaf person is theoretically better off with a certified sign-language interpreter, but they may prefer voice-recognition software or AI tools.

(Or perhaps signing is more confusing, annoying, or less understandable for them.)

Of course ChatGPT transcription can have issues, but that's the difference between the real world and the disconnected world of Silicon Valley's lawyers.

==

If ChatGPT says: "Sorry, I won't be able to help, please go see a licensed interpreter, good luck!", then it's just OpenAI covering itself, at your risk and expense.

If you have a choice, you can make the choice, and you can double-check what is said. In other cases, you have no choice, nothing to check, only problems and no hint of a solution.

This is why openness is important.

NicuCalcea 2 hours ago | parent | next [-]

When I registered with my GP in the UK, they asked me whether I would need an interpreter and what language. They then provide professional interpreters.

https://www.england.nhs.uk/interpreting/

duchef 3 hours ago | parent | prev [-]

We generally use telephone translation services. There is an entire industry for this; I used 'BigWord' today, for example.

timedude 4 hours ago | parent | prev [-]

Yup, deliberately crippling the model.