| ▲ | spullara 12 hours ago |
| This title is inaccurate. What they are disallowing is using ChatGPT to offer legal and medical advice to other people. First parties can still use ChatGPT for medical and legal advice for themselves. |
|
| ▲ | Johnny555 11 hours ago | parent | next [-] |
| While they aren't stopping users from getting medical advice, the new terms (which they say are pretty much the same as the old terms) seem to prohibit users from seeking medical advice even for themselves if that advice would otherwise come from a licensed health professional: https://openai.com/en-GB/policies/usage-policies/

> Your use of OpenAI services must follow these Usage Policies:
> Protect people. Everyone has a right to safety and security. So you cannot use our services for:
> provision of tailored advice that requires a license, such as legal or medical advice, without appropriate involvement by a licensed professional
|
| |
| ▲ | thw_9a83c 9 hours ago | parent | next [-] | | It sounds like you should never trust any medical advice you receive from ChatGPT and should seek proper medical help instead. That makes sense. The OpenAI company doesn't want to be held responsible for any medical advice that goes wrong. One obvious caveat: even if LLMs were the best health professionals, they would only have the information that users voluntarily provide through text/speech input. This is not how real health services work. Medical science now relies on blood (or whatever) tests that LLMs do not (yet) have access to. Therefore, the output from an LLM can be incorrect due to a lack of information from those tests. For this reason, it makes sense to never trust an LLM with specific health advice. | | |
| ▲ | Johnny555 9 hours ago | parent | next [-] | | >It sounds like you should never trust any medical advice you receive from ChatGPT and should seek proper medical help instead. That makes sense. The OpenAI company doesn't want to be held responsible for any medical advice that goes wrong. While what you're saying is good advice, that's not what they are saying. They want people to be able to ask ChatGPT for medical advice, and they want it to give answers that sound authoritative and well grounded in medical science, but then they disavow any liability if someone follows that advice because "Hey, we told you not to act on our medical advice!" If ChatGPT is so smart, why can't it stop itself from giving out advice that should not be trusted? | | |
| ▲ | navigate8310 8 hours ago | parent [-] | | At times the advice is genuinely helpful. However, it's practically impossible to measure under what exact situations the advice would be accurate. | | |
| ▲ | the_af 7 hours ago | parent [-] | | I think ChatGPT is capable of giving reasonable medical advice, but given that we know it will hallucinate the most outlandish things, and given its propensity to agree with whatever the user is saying, I think it's simply too dangerous to follow its advice. |
|
| |
| ▲ | sarchertech 9 hours ago | parent | prev | next [-] | | And it’s not just lab tests and bloodwork. Physicians use all their senses. They poke, they prod, they manipulate, they look, listen, and smell. They’re also good at extracting information in a way that (at least currently) sycophantic LLMs don’t replicate. | | |
| ▲ | caturopath 8 hours ago | parent | next [-] | | > Physicians use all their senses. They poke, they prod, they manipulate, they look, listen, and smell. Sometimes. Sometimes they practice by text or phone. > They’re also good at extracting information in a way that (at least currently) sycophantic LLMs don’t replicate. If I had to guess, I'd guess that mainstream LLM chatbots are better at getting honest and applicable medical histories than most doctors. People are less likely to lie/hide/prevaricate, and more time is spent with the person. | |
| ▲ | sarchertech 7 hours ago | parent [-] | | > Sometimes. Sometimes they practice by text or phone. For very simple issues. For anything even remotely complicated, they’re going to have you come in. > If I had to guess, I think I'd guess that mainstream LLM chatbots are better at getting honest and applicable medical histories than most doctors. People are less likely to lie/hide/prevaricate and get more time with the person. It’s not just about being intentionally deceptive. It’s very easy to get chat bots to tell you what you want to hear. |
| |
| ▲ | dimitri-vs 9 hours ago | parent | prev | next [-] | | Agreed, but I'm sure you can see why people prefer the infinite patience and availability of ChatGPT vs. having to wait weeks to see your doctor, see them for 15 minutes only to be referred to another specialist who's available weeks away and has an arduous hour-long intake process, all so you can get 15 minutes of their time. | |
| ▲ | sarchertech 4 hours ago | parent [-] | | ChatGPT is effectively an unlimited resource. Whether doctor’s appointments take weeks or hours to secure, ChatGPT is always going to be more convenient. That says nothing about whether it is an appropriate substitute. People prefer doctors who prescribe antibiotics for viral infections, so I have no doubt that many people would love to use a service that they can manipulate to give them whatever diagnosis they desire. |
| |
| ▲ | ekianjo 9 hours ago | parent | prev [-] | | > They poke, they prod, they manipulate, they look, listen, and smell. Rarely. Most visits are done in 5 minutes. The physician who takes their time to check everything, as you describe, almost doesn't exist anymore. | |
| ▲ | whatsupdog 9 hours ago | parent | next [-] | | Here in Canada, ever since COVID, most "visits" are a telephone call now. So the doctor just listens to your words (same as text input to an LLM) and orders tests (the results of which can be uploaded to an LLM) if needed. | |
| ▲ | zamadatix 9 hours ago | parent [-] | | For a good 90% of typical visits to doctors this is probably fine. The difference is that a telehealth doctor is much better at recognizing "I can't give an accurate answer for this over the phone, you'll need to have some tests done" or at casting doubt on the accuracy of the patient's claims. Before someone points out telehealth doctors aren't perfect at this: correct, but that should make you more scared of how bad sycophantic LLMs are at the same thing - not willing to call it even. | |
| ▲ | caturopath 8 hours ago | parent [-] | | > a telehealth doctor is much better at recognizing "I can't give an accurate answer for this over the phone, you'll need to have some tests done" I'm not sure this is true. |
|
| |
| ▲ | sarchertech 7 hours ago | parent | prev [-] | | That depends entirely on what the problem is. You might not get a long examination on your first visit for a common complaint with no red flags. But even then, just because you don’t think they are using most of their senses doesn’t mean they aren’t. | |
| ▲ | lukan 7 hours ago | parent [-] | | It depends entirely on the local health care system and your health insurance. In Germany, for example, it comes in two tiers: premium or standard. Standard comes with no time for the patient (or not even being able to get an appointment). | |
| ▲ | sarchertech 4 hours ago | parent [-] | | I don’t know anything about German healthcare. In the US people on Medicaid frequently use emergency rooms as primary care because they are open 24/7 and they don’t have any copays like people with private insurance do. These patients then get far more tests than they’d get at a PCP. |
|
|
|
| |
| ▲ | fragmede 9 hours ago | parent | prev [-] | | So ask it what blood tests you should get, pay for them out of pocket, and upload the PDF of your labwork? Like it or not, there are people out there who really want to use WebMD 2.0. They're not going to let something silly like blood work get in their way. | |
| ▲ | whatsupdog 9 hours ago | parent [-] | | Exactly. One of my children lives in a country where you can just walk into a lab and get any test. Recently they were diagnosed by a professional with a disease that ChatGPT had already diagnosed before they visited the doctor, so we were prepared to ask more questions when the visit happened. I would say ChatGPT really did help us. |
|
| |
| ▲ | thorum 10 hours ago | parent | prev | next [-] | | IANAL but I read that as forbidding you to provision legal/medical advice (to others) rather than forbidding you to ask the AI to provision legal/medical advice (to you). | | |
| ▲ | Johnny555 10 hours ago | parent [-] | | IANAL either, but I read it as using the service to provision medical advice, since they only mentioned the service and not anyone else. I asked the "expert" itself (ChatGPT), and apparently you can ask for medical advice, you just can't use the medical advice without consulting a medical professional:

Here are relevant excerpts from OpenAI’s terms and policies regarding medical advice and similar high-stakes usage:

From the Usage Policies (effective October 29, 2025): “You may not use our services for … provision of tailored advice that requires a license, such as legal or medical advice, without appropriate involvement by a licensed professional.” Also: “You must not use any Output relating to a person for any purpose that could have a legal or material impact on that person, such as making … medical … decisions about them.”

From the Service Terms: “Our Services are not intended for use in the diagnosis or treatment of any health condition. You are responsible for complying with applicable laws for any use of our Services in a medical or healthcare context.”

In plain terms, yes: the Terms of Use permit you to ask questions about medical topics, but they clearly state that the service cannot be used for personalized, licensed medical advice or treatment decisions without a qualified professional involved. | | |
| ▲ | silisili 10 hours ago | parent [-] | | > you can ask for medical advice, you just can't use the medical advice without consulting a medical professional Ah drats. First they ban us from cutting the tags off our mattress, and now this. When will it all end... |
|
| |
| ▲ | caturopath 8 hours ago | parent | prev | next [-] | | Would be interested to hear a legal expert weigh in on what 'advice' is. I'm not clear that discussing medical and legal issues with you is necessarily providing advice. One of the things I respected OpenAI for at the release of ChatGPT was not trying to prevent these topics. My employer at the time had a cutting-edge internal LLM chatbot which was post-trained to avoid them, something I think OpenAI was forced to be braver about in their public release because of the competitive landscape. | |
| ▲ | maroonblazer 7 hours ago | parent | prev | next [-] | | The important terms here are "provision" and "without appropriate involvement by a licensed professional". Both of these, separately and taken together, indicate that the terms govern how the output of ChatGPT is used, rather than representing a change to the output itself. | |
| ▲ | GuB-42 10 hours ago | parent | prev | next [-] | | Is there anything special regarding ChatGPT here? I am not a doctor, I can't give medical advice no matter what my sources are, except maybe if I am just relaying the information an actual doctor has given me, but that would fall under the "appropriate involvement" part. | |
| ▲ | ghostly_s 10 hours ago | parent | prev | next [-] | | I don't think giving someone "medical advice" in the US requires a license per se; legal entities use "this is not medical advice" type disclaimers just to avoid liability. | | |
| ▲ | sarchertech 9 hours ago | parent [-] | | What’s illegal is practicing medicine. Giving medical advice can be “practicing medicine” depending on how specific it is and whether a reasonable person receiving the advice thinks you have medical training. Disclaimers like “I am not a doctor and this is not medical advice” aren’t just for avoiding civil liability, they’re to make it clear that you aren’t representing yourself as a doctor. |
| |
| ▲ | bitwize 8 hours ago | parent | prev | next [-] | | CYA move. If some bright spark decides to consult Dr. ChatGPT without input from a human M.D., and fucks their shit up as a result, OpenAI can say "not our responsibility, as that's actually against our ToS." | |
| ▲ | fragmede 10 hours ago | parent | prev [-] | | > such as legal or medical advice, without appropriate involvement by a licensed professional Am I allowed to get haircutting advice (in places where there's a license for that)? How about driving directions? Taxi drivers require licensing. Pet grooming? |
|
|
| ▲ | lambda 7 hours ago | parent | prev | next [-] |
Please, when commenting on the title of a story on HN, include the title that you are commenting on. The admins regularly change the title based on complaints, which can be really confusing when the top, heavily commented thread is based on the original title. According to the Wayback Machine, the title was "OpenAI ends legal and medical advice on ChatGPT", while now, as I write this, the title is "ChatGPT terms disallow its use in providing legal and medical advice to others." |
| |
| ▲ | spullara 7 hours ago | parent | next [-] | | If you click through to the article, you can see the original title. Since it matched, I didn't expect them to change it. | |
| ▲ | bongodongobob 7 hours ago | parent | prev [-] | | Tf are you yapping about |
|
|
| ▲ | BUFU 12 hours ago | parent | prev | next [-] |
Thanks for the clarification. I think if they disallow first parties from getting medical and legal advice, it will do more harm than good.
|
| ▲ | aerhardt 12 hours ago | parent | prev | next [-] |
I'm confused. The article opens with:

> OpenAI is changing its policies so that its AI chatbot, ChatGPT, won’t dole out tailored medical or legal advice to users.

This already seems to contradict what you're saying. But then:

> The AI research company updated its usage policies on Oct. 29 to clarify that users of ChatGPT can’t use the service for “tailored advice that requires a license, such as legal or medical advice, without appropriate involvement by a licensed professional.”

> The change is clearer from the company’s last update to its usage policies on Jan. 29, 2025. It required users not “perform or facilitate” activities that could significantly impact the “safety, wellbeing, or rights of others,” which included “providing tailored legal, medical/health, or financial advice.”

This seems to suggest that under the Jan 2025 policy, using it to offer legal and medical advice to other people was already disallowed, but that with the Oct 2025 update the LLM will stop doling out legal and medical advice completely.
| |
| ▲ | layer8 12 hours ago | parent | next [-] | | https://xcancel.com/thekaransinghal/status/19854160578054965... This is from Karan Singhal, Health AI team lead at OpenAI. Quote: “Despite speculation, this is not a new change to our terms. Model behavior remains unchanged. ChatGPT has never been a substitute for professional advice, but it will continue to be a great resource to help people understand legal and health information.” | | |
| ▲ | siva7 12 hours ago | parent [-] | | I doubt his claims, as I use ChatGPT heavily every day for medical advice (my profession) and it's responding differently now than before. | |
| ▲ | layer8 11 hours ago | parent | next [-] | | Maybe the usage policies are part of the system prompt, and ChatGPT is misreading the new wording as well. ;) | |
| ▲ | tiahura 8 hours ago | parent | prev [-] | | Lawyer here. Not noticing a change. |
|
| |
| ▲ | A4ET8a8uTh0_v2 12 hours ago | parent | prev | next [-] | | The article itself notes: 'An earlier version of this story suggested OpenAI had ended medical and legal advice. However the company said "the model behaviour has also not changed."' | |
| ▲ | gcr 12 hours ago | parent | prev [-] | | I think this is wrong. Others in this thread are noticing a change in ChatGPT's behavior for first-party medical advice. | | |
| ▲ | simonw 11 hours ago | parent [-] | | But OpenAI's head of Health AI says that ChatGPT's behavior has not changed: https://xcancel.com/thekaransinghal/status/19854160578054965... and https://x.com/thekaransinghal/status/1985416057805496524 I trust what he says over general vibes. (If you think he's lying, what's your theory on WHY he would lie about a change like this?) | | |
| ▲ | degamad 10 hours ago | parent | next [-] | | Also possible: he's unaware of a change implemented elsewhere that (intentionally or unintentionally) has resulted in a change of behaviour in this circumstance. (e.g. are the terms of service, or excerpts of them, available in the system prompt or in search results for health questions? If so, a response under the new ToS could produce different outputs without any intentional change in the "behaviour" of the model.) | |
| ▲ | nh43215rgb 10 hours ago | parent | prev [-] | | My theory is that he believes 1) people will trust him over what the general public says, and 2) this kind of claim is hard enough to verify that it's difficult to prove him wrong. | |
| ▲ | simonw 9 hours ago | parent [-] | | That doesn't answer why he would lie about this, just why he thinks he would get away with it. What's his motive? |
|
|
|
|
|
| ▲ | Spooky23 9 hours ago | parent | prev | next [-] |
| It’s a big issue. I went to an urgent care, and the provider basically went off somewhere and memorized the ChatGPT assessment for my symptoms. Like word for word. All you need are a few patients recording their visits and connecting the dots and OpenAI gets sued into oblivion. |
|
| ▲ | johaugum 9 hours ago | parent | prev | next [-] |
| Isn’t that exactly what the title says? |
| |
|
| ▲ | siva7 12 hours ago | parent | prev | next [-] |
There are millions of medical doctors and lawyers using ChatGPT for work every day - good news that from now on only those licensed professionals are allowed to use ChatGPT for law and medicine. It's already the case that only licensed developers are allowed to vibe code and use ChatGPT to develop software. Everything else would be totally irresponsible.
|
| ▲ | ctoth 12 hours ago | parent | prev | next [-] |
| I keep seeing this problem more and more with humans. What should we call it? Maybe Hallucinations? Where there is an accurate true thing and then it just gets altered by these guys who call themselves journalists and reporters and the like until it is just ... completely unrecognizable? I'm pretty sure it's a fundamental issue with the architecture. |
| |
| ▲ | sethhochberg 12 hours ago | parent | next [-] | | I know this is written to be tongue-in-cheek, but it's really almost the exact same problem playing out on both sides. LLMs hallucinate because training on source material is a lossy process, and bigger, heavier LLM-integrated systems that can research and cite primary sources are slow and expensive, so few people use those techniques by default. Lowest time to a good-enough response is the primary metric. Journalists oversimplify and fail to ask follow-up questions because, while they can research and cite primary sources, it's slow and expensive in an infinitesimally short news cycle, so nobody does that by default. Whoever publishes something that someone will click on first gets the ad impressions, so that's the primary metric. In either case, we've got pretty decent tools and techniques for better accuracy and education - whether via humans or LLMs and co - but most people, most of the time, don't value them. | |
| ▲ | mbesto 8 hours ago | parent | next [-] | | > LLMs hallucinate because training on source material is a lossy process

LLMs hallucinate because they are probabilistic by nature, not because the source material is lossy or too big. They are literally designed to create some level of "randomness": https://thinkingmachines.ai/blog/defeating-nondeterminism-in... | |
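For concreteness, here is a minimal sketch of how that randomness is typically introduced at decoding time: temperature-scaled softmax sampling over the model's next-token logits. This is illustrative only; real inference stacks add top-k/top-p filtering and many other details.

    import math, random

    def sample_next_token(logits, temperature=1.0):
        # Scale logits by temperature, turn them into probabilities with a softmax,
        # then draw the next token at random according to those probabilities.
        # Note: temperature=0 would divide by zero here, which is why real
        # implementations special-case it as greedy (argmax) decoding.
        scaled = [l / temperature for l in logits]
        m = max(scaled)                              # subtract the max for numerical stability
        weights = [math.exp(s - m) for s in scaled]
        total = sum(weights)
        probs = [w / total for w in weights]
        return random.choices(range(len(probs)), weights=probs, k=1)[0]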
| ▲ | ChadNauseam 6 hours ago | parent [-] | | So if you set temperature=0 and run the LLM serially (making it deterministic) it would stop hallucinating? I don't think so. I would guess that the nondeterminism issues mentioned in the article are not at all a primary cause of hallucinations. | | |
| ▲ | joquarky 6 hours ago | parent [-] | | I thought that temperature can never actually be zero or it creates a division problem or something similar. I'm no ML or math expert, just repeating what I've heard. | | |
| ▲ | ChadNauseam 5 hours ago | parent [-] | | That's an implementation detail, I believe. But what I meant was just greedy decoding (picking the token with the highest logit in the LLM output), which can be implemented very easily; see the sketch below. |
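A minimal sketch of that greedy loop, for illustration only (score_next_token is a hypothetical stand-in for a real model's forward pass; tokenization is omitted):

    def greedy_decode(score_next_token, prompt_tokens, eos_id, max_new_tokens=100):
        tokens = list(prompt_tokens)
        for _ in range(max_new_tokens):
            logits = score_next_token(tokens)  # one logit per vocabulary entry
            # Greedy decoding: always take the argmax; no sampling, no temperature.
            next_id = max(range(len(logits)), key=lambda i: logits[i])
            tokens.append(next_id)
            if next_id == eos_id:              # stop at end-of-sequence
                break
        return tokens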
|
|
| |
| ▲ | andy99 11 hours ago | parent | prev [-] | | Classical LLM hallucination happens because AI doesn’t have a world model. It can’t compare what it’s saying to anything. You’re right that LLMs favor helpfulness, so they may just make things up when they don’t know something, but this alone doesn’t capture the crux of hallucination imo; it’s deeper than just being overconfident. OTOH, there was an interesting article recently, which I’ll try to find, arguing that humans don’t really have a world model either. While I take the point, we can have one when we want to. Edit: see https://www.astralcodexten.com/p/in-search-of-ai-psychosis re humans not having world models | |
| ▲ | naniwaduni 9 hours ago | parent [-] | | You're right, "journalists don't have a world model and can't compare what they're saying to anything" explains a lot. |
|
| |
| ▲ | observationist 11 hours ago | parent | prev | next [-] | | These writers are no different than bloggers or shitposters on bluesky or here on hackernews. "Journalism" as a rigorous, principled approach to writing, research, investigation, and ethical publishing is exceedingly rare. These people are shitposting for clicks in pursuit of a paycheck. Organizationally, they're intensely against AI because AI effectively replaces the entire talking heads class - AI is already superhuman at the shitposting-level takes these people churn out. There are still a few journalistic institutions out there, but most people are no better than a mad libs exercise with regard to the content they produce, and they're in direct competition with ChatGPT and Grok and the rest. I'd rather argue with a bot and do searches and research and investigation than read a neatly packaged, trite little article about nearly any subject, and I guarantee, hallucinations or no, I'm going to come to a better understanding and closer approximation of reality than any content a so-called "news" outlet is putting together. It's trivial to get a thorough spectrum of reliable sources using AI w/ web search tooling, and over the course of a principled conversation, you can find out exactly what you want to know. It's really not bashing - this article isn't too bad - but the bulk of this site's coverage of AI topics skews negative, as do the many, many platforms and outlets owned by Bell Media, with a negative skew on AI in general and positive reinforcement of regulatory-capture-related topics. Which only makes sense - they're making money, and want to continue making money, and AI threatens that - they can no longer claim they provide value if they're not providing direct, relevant, novel content instead of zergnet clickbait journo-slop. Just like Carlin said, there doesn't have to be a conspiracy with a bunch of villains in a smoky room plotting evil, there's just a bunch of people in a club who know what's good for them, and legacy media outlets are all therefore universally incentivized to make AI look as bad and flawed and useless as possible, right up until they get what they consider to be their "fair share", as middlemen. | |
| ▲ | pksebben 11 hours ago | parent | prev | next [-] | | Whenever I hear arguments about LLM hallucination, this is my first thought. Like, I already can't trust the lion's share of information in news, social media, or (insert human-created content here). Sometimes because of abject disinformation, frequently just because humans are experts at being wrong. At least with the LLM (for now) I know it's not trying to sell me bunkum or convince me to vote a particular way. Mostly. I do expect this state of affairs to last at least until next Wednesday. | |
| ▲ | lazide 7 hours ago | parent [-] | | LLMs are trained on material doing all these things though. | | |
| |
| ▲ | terminalshort 12 hours ago | parent | prev | next [-] | | Also these guys who call themselves doctors. I have narcolepsy and the first 10 or so doctors I went to hallucinated the wrong diagnosis. | | |
| ▲ | Terr_ 6 hours ago | parent [-] | | LLMs aren't described as hallucinators (just) because they sometimes give results we don't find useful, but because their method is flawed. For example, the simple algorithm is_it_lupus(){return false;} could have an extremely competitive success rate for medical diagnostics... But it's also obviously the wrong way to go about things. |
| |
| ▲ | sans_souse 11 hours ago | parent | prev | next [-] | | "Telephone", basically | |
| ▲ | awakeasleep 12 hours ago | parent | prev | next [-] | | issue with the funding mechanism | |
| ▲ | busymom0 11 hours ago | parent | prev [-] | | Isn't every single response from an LLM a hallucination, and we just accept a few and ignore the others? |
|
|
| ▲ | qustrolabe 10 hours ago | parent | prev | next [-] |
Yeah, but it started being really annoying when you upload something like an X-ray photo. It starts chanting "sorry, human, as an LLM I can't answer questions about that", and then after a few gaslighting prompts it does it anyway. But now I have to take into account that my gaslighting inputs seriously affect the answers, so there's a much higher chance it hallucinates...
|
| ▲ | ants_everywhere 12 hours ago | parent | prev [-] |
I don't think I understand the change re: licensed professionals. Is it also disallowing licensed professionals from using ChatGPT in informal, undisclosed ways, as in this article? https://www.technologyreview.com/2025/09/02/1122871/therapis... E.g. is it only allowed for medical use through an official medical portal or offering?