nomel 9 hours ago

> It is irresponsible for these companies

I would claim that ignoring the "ChatGPT is AI and can make mistakes. Check important info." text, right under the query box in the client, is clearly more irresponsible.

I think that a disclaimer like that is the most useful and reasonable approach for AI.

"Here's a tool, and it's sometimes wrong." means the public can have access to LLMs and AI. The alternative, that you seem to be suggesting (correct me if I'm wrong), means the public can't have access to an LLM until they are near perfect, which means the public can't ever have access to an LLM, or any AI.

What do you see as a reasonable approach to letting the public access these imperfect models? Training? Popups/agreement after every question "I understand this might be BS"? What's the threshold for quality of information where it's no longer considered "broken"? Is that threshold as good as or better than humans/news orgs/doctors/etc?

ytoawwhra92 7 hours ago

Why are you assuming that the general public ought to have access to imperfect tools?

I live in a place where getting a blood test requires a referral from a doctor, who is also required to discuss the results with you.

nomel 6 hours ago

> Why are you assuming that the general public ought to have access to imperfect tools?

Could you tell me which source of information you see as "perfect" (or acceptable), as a good example of the threshold for what you think the public should and should not have access to?

Also, what if a tool provides value to some users, in some contexts, but not to others, in different contexts (for example, when the tool is used wrong)?

From the "tool" perspective, I've personally never seen a perfect tool. Do you have an example?

> I live in a place where getting a blood test requires a referral from a doctor, who is also required to discuss the results with you.

I don't see how this is relevant. In the above article, the user went to their doctor for advice and a referral. But in the US (and many European countries) blood tests aren't restricted, and can be had from private labs out of pocket, since they're just measurements of things that exist in your blood, and not allowing you to know what's inside of you would be considered government overreach/a privacy violation. Medical interpretations/advice from the measurements is what's restricted, in most places.

ytoawwhra92 6 hours ago

> Could you tell me which source of information you see as "perfect" (or acceptable), as a good example of the threshold for what you think the public should and should not have access to?

I know it when I see it.

> I don't see how this is relevant.

It's relevant because blood testing is an imperfect tool. Laypeople lack the knowledge/experience to identify imperfections and are likely to take results at face value. Like the author of the article did when ChatGPT gave them an F for their cardiac health.

> Medical interpretations/advice from the measurements is what's restricted, in most places.

Do you agree with that restriction?

nomel 6 hours ago

> I know it when I see it.

This isn't a reasonable answer. No action can be taken and no conclusion can be drawn from it.

> Do you agree with that restriction?

People should be able to get their own blood measurements, be informed about them, and possibly bring something up with their doctors outside of routine exams (which they may not even be insured for in the US). I think the restriction on medical advice/conclusions that result in treatment is very good; otherwise you end up with "Wow, look at these results! You'll have to buy my snake oil or you'll die!"

I don't believe in reducing society to a level that completely protects the most stupid of us.

ytoawwhra92 5 hours ago

> This isn't a reasonable answer.

Sure it is. The world runs on human judgement. If you want me to rephrase, I could say that the threshold for imperfection should reflect contemporary community standards, but Stewart's words are catchier.

> I think the restriction on medical advice/conclusions that result in treatment is very good; otherwise you end up with "Wow, look at these results! You'll have to buy my snake oil or you'll die!"

Some people would describe this as an infringement on their free speech and bodily autonomy.

Which is to say that I think you and I agree that people in general need the government to apply some degree of restriction to medicine, we just disagree about where the line is.

But I think if I asked you to describe to me exactly where the line is you'd ultimately end up at some incarnation of "I know it when I see it".

Which is fine. Even good, I think.

> I don't believe in reducing society to a level that completely protects the most stupid of us.

This seems at odds with what you said above. A non-stupid person would seek multiple consistent opinions before accepting medical treatment, after all.

nomel an hour ago

> I know it when I see it.

What's the most complex (in an information-rich way) tool that you have seen?

kolinko an hour ago

> I live in a place where getting a blood test requires a referral from a doctor, who is also required to discuss the results with you.

You’re saying it like it’s a good thing.

coffeefirst 4 hours ago

Oh I have a plan for this.

Allow it to answer general questions about health, medicine and science.

It can’t practice medicine; it can only be a talking encyclopedia that tells you how the heart works and how certain biomarkers are used. Analyzing your specific case or data is off limits.

And then when the author asks his question, it says it’s not designed to do that.

zdragnar 9 hours ago

> Popups/agreement after every question "I understand this might be BS"?

Considering the number of people who take LLM responses as authoritative Truth, that wouldn't be the worst thing in the world.

throwaway290 2 hours ago

> "ChatGPT is AI and can make mistakes. Check important info."

It's the same kind of thing that can be said about any human:

> "A doctor is human and can make mistakes."

Therefore it's really not sufficient to make it clear that the AI is wrong in different ways from, and worse than, a human.

anon7000 an hour ago

The problem is that AI companies are selling, advertising, and shipping AI as a tool that works most of the time for what you ask it to do. That’s deeply misleading.

The product itself is telling you in plain English that it’s ABSOLUTELY CERTAIN about its answer… even when you challenge it and try to rebut it. And the product’s own text is much more prominent than the little asterisk saying, in effect, “oh no, it’s actually lying because the LLM can never be that certain.” That’s clearly not a responsible product.

I opened the ChatGPT app right now and there is literally nothing about double checking results. It just says “ask anything,” in no uncertain terms, with no fine print.

Here’s a recent ad from OpenAI: https://youtu.be/uZ_BMwB647A, and I quote “Using ChatGPT allowed us to really feel like we have the facts and our doctor is giving us his expertise, his experience, his gut instinct” related to a severe health question.

And another recent ad related to analyzing medical scans: “What’s wonderful about ChatGPT is that it can be that cumulative source of information, so that we can make the best choices.” (https://youtu.be/rXuKh4e6gw4)

And yet another recent ad, where lots of users are using ChatGPT to get authoritative answers to health questions. They even say you can take a picture of a meal before you eat and after you eat, and have it generate the amount of calories you ate! Just based on the difference between the pictures! How has that been tested and verified? (https://youtu.be/305lqu-fmbg)

Now, some of the ads have users talking to their doctors, which is great.

But they are clearly marketing ChatGPT as the tool to use if you want to arrive at the truth. No asterisks. No “but sometimes it’s wrong and you won’t be able to tell.” There’s nothing to misunderstand about these ads: OpenAI is telling you that ChatGPT is trustworthy.

So I reject the premise that it’s the user’s fault for not using enough caution with these tools. OpenAI is practically begging you to jump in and use it for personal, life-or-death decisions, and does very little to help you understand when it may be wrong.