aerhardt 12 hours ago

I'm confused. The article opens with:

> OpenAI is changing its policies so that its AI chatbot, ChatGPT, won’t dole out tailored medical or legal advice to users.

This already seems to contradict what you're saying.

But then:

> The AI research company updated its usage policies on Oct. 29 to clarify that users of ChatGPT can’t use the service for “tailored advice that requires a license, such as legal or medical advice, without appropriate involvement by a licensed professional.”

> The change is clearer from the company’s last update to its usage policies on Jan. 29, 2025. It required users not “perform or facilitate” activities that could significantly impact the “safety, wellbeing, or rights of others,” which included “providing tailored legal, medical/health, or financial advice.”

This seems to suggest that under the Jan 2025 policy, using ChatGPT to offer legal and medical advice to other people was already disallowed, but that with the Oct 2025 update the LLM will stop doling out legal and medical advice entirely.

layer8 12 hours ago | parent | next [-]

https://xcancel.com/thekaransinghal/status/19854160578054965...

This is from Karan Singhal, Health AI team lead at OpenAI.

Quote: “Despite speculation, this is not a new change to our terms. Model behavior remains unchanged. ChatGPT has never been a substitute for professional advice, but it will continue to be a great resource to help people understand legal and health information.”

siva7 12 hours ago | parent [-]

I doubt his claims, as I use ChatGPT heavily every day for medical advice (my profession) and it's responding differently now than before.

layer8 11 hours ago | parent | next [-]

Maybe the usage policies are part of the system prompt, and ChatGPT is misreading the new wording as well. ;)

tiahura 8 hours ago | parent | prev [-]

Lawyer here. Not noticing a change.

A4ET8a8uTh0_v2 12 hours ago | parent | prev | next [-]

The article itself notes:

'An earlier version of this story suggested OpenAI had ended medical and legal advice. However the company said "the model behaviour has also not changed."'

gcr 12 hours ago | parent | prev [-]

I think this is wrong. Others in this thread are noticing a change in ChatGPT's behavior for first-party medical advice.

simonw 11 hours ago | parent [-]

But OpenAI's head of Health AI says that ChatGPT's behavior has not changed: https://xcancel.com/thekaransinghal/status/19854160578054965... and https://x.com/thekaransinghal/status/1985416057805496524

I trust what he says over general vibes.

(If you think he's lying, what's your theory on WHY he would lie about a change like this?)

degamad 10 hours ago | parent | next [-]

Also possible: he's unaware of a change implemented elsewhere that (intentionally or unintentionally) has resulted in a change of behaviour in this circumstance.

(e.g. are the terms of service, or excerpts of them, available in the system prompt or search results for health questions? If so, a response under the new ToS could produce different outputs without any intentional change in the "behaviour" of the model.)

nh43215rgb 10 hours ago | parent | prev [-]

My theory is that he believes 1) people will trust him over what general public say, and 2) this kind of claim is hard to verify to prove him wrong.

simonw 9 hours ago | parent [-]

That doesn't answer why he would lie about this, just why he thinks he would get away with it. What's his motive?