sfblah 12 hours ago

Things like this really favor models from countries with fewer legal restrictions. I just don't think it's realistic to expect people not to have access to these capabilities.

It would be reasonable to add a disclaimer. But as things stand, I think it's fair to consider talking to ChatGPT to be the same as talking to a random person on the street, meaning normal free-speech protections would apply.

sarchertech 4 hours ago | parent | next [-]

>I think it's fair to consider talking to ChatGPT to be the same as talking to a random person on the street

That’s not how companies market AI though. And the models themselves tend to present their answers in a highly confident manner.

Without explicit disclaimers, a reasonable person could easily believe that ChatGPT is an authority on law or medicine. That's what moves the needle over to practicing law or medicine without a license.

thinkingtoilet 12 hours ago | parent | prev [-]

What capabilities? The article says the study found it was entirely correct 31% of the time.

Jweb_Guru 3 hours ago | parent | next [-]

This is what really scares me about people using AI. It will confidently hallucinate studies and quotes that have absolutely no basis in reality, and even in your own field you're not going to know whether what it's saying is real or not without following up on absolutely every assertion. But people are happy to completely buy its diagnoses of rare medical conditions based on what, exactly?

simianwords 24 minutes ago | parent [-]

Give a single example using gpt-5 thinking.

matwood an hour ago | parent | prev | next [-]

The study is more positive than the 31% conveys.

https://www.ctvnews.ca/health/article/self-diagnosing-with-a...

The researchers compared ChatGPT-4 with its earlier 3.5 version and found significant improvements, but not enough.

In one example, the chatbot confidently diagnosed a patient’s rash as a reaction to laundry detergent. In reality, it was caused by latex gloves — a key detail missed by the AI, which had been told the patient studied mortuary science and used gloves.

...

While the researchers note ChatGPT did not get any of the answers spectacularly wrong, they have some simple advice.

“When you do get a response be sure to validate that response,” said Zada.

Which should be standard advice in most situations.

ipaddr 3 hours ago | parent | prev [-]

Does it say how often doctors are correct as a baseline?