| ▲ | nomel 9 hours ago |
> It is irresponsible for these companies

I would claim that ignoring the "ChatGPT is AI and can make mistakes. Check important info." text, right under the query they type into the client, is clearly more irresponsible. I think a disclaimer like that is the most useful and reasonable approach for AI. "Here's a tool, and it's sometimes wrong" means the public can have access to LLMs and AI.

The alternative, which you seem to be suggesting (correct me if I'm wrong), means the public can't have access to an LLM until it is near perfect, which means the public can't ever have access to an LLM, or any AI.

What do you see as a reasonable approach to letting the public access these imperfect models? Training? A popup/agreement after every question: "I understand this might be BS"? What's the threshold of information quality below which it's considered "broken"? Is that threshold as good as, or better than, humans/news orgs/doctors/etc.?
| ▲ | ytoawwhra92 7 hours ago |
Why are you assuming that the general public ought to have access to imperfect tools? I live in a place where getting a blood test requires a referral from a doctor, who is also required to discuss the results with you.
| ▲ | coffeefirst 4 hours ago |
Oh, I have a plan for this. Allow it to answer general questions about health, medicine, and science. It can't practice medicine; it can only be a talking encyclopedia that tells you how the heart works and how certain biomarkers are used. Analyzing your specific case or data is off limits. And then, when the author asks his question, it says it's not designed to do that.
| ▲ | zdragnar 9 hours ago |
> Popups/agreement after every question "I understand this might be BS"?

Considering the number of people who take LLM responses as authoritative Truth, that wouldn't be the worst thing in the world.
| ▲ | throwaway290 2 hours ago |
> "ChatGPT is AI and can make mistakes. Check important info." Is the same thing that can be said about any human > "Doctor is human and can make mistakes" Therefore it's really not sufficient to make it clear that it is wrong in different ways and worse than human. | ||||||||||||||||||||||||||||||||||||||||||||||||||
| ▲ | anon7000 an hour ago |
The problem is that AI companies are selling, advertising, and shipping AI as a tool that works most of the time for what you ask it to do. That's deeply misleading. The product itself is telling you in plain English that it's ABSOLUTELY CERTAIN about its answer… even when you challenge it and try to rebut it. And the text of the product itself is much more prominent than the little asterisk: "oh no, it's actually lying, because the LLM can never be that certain." That's clearly not a responsible product.

I opened the ChatGPT app right now and there is literally nothing about double-checking results. It just says "ask anything," in no uncertain terms, with no fine print.

Here's a recent ad from OpenAI (https://youtu.be/uZ_BMwB647A), and I quote: "Using ChatGPT allowed us to really feel like we have the facts and our doctor is giving us his expertise, his experience, his gut instinct," in relation to a severe health question.

And another recent ad, related to analyzing medical scans (https://youtu.be/rXuKh4e6gw4): "What's wonderful about ChatGPT is that it can be that cumulative source of information, so that we can make the best choices."

And yet another recent ad (https://youtu.be/305lqu-fmbg), where lots of users use ChatGPT to get authoritative answers to health questions. They even say you can take a picture of a meal before and after you eat, and have it generate the number of calories you ate, just from the difference between the pictures! How has that been tested and verified?

Now, some of the ads have users talking to their doctors, which is great. But they are clearly marketing ChatGPT as the tool to use if you want to arrive at the truth. No asterisks. No "but sometimes it's wrong and you won't be able to tell." There's nothing to misunderstand about these ads: OpenAI is telling you that ChatGPT is trustworthy.

So I reject the premise that it's the user's fault for not using enough caution with these tools. OpenAI is practically begging you to jump in and use it for personal, life-or-death decisions, and does very little to help you understand when it may be wrong.