Zr01 14 hours ago

The cynic in me thinks this is just a means to eventually make more money by offering paid unrestricted versions to medical and legal professionals. I'm well aware that it's not a truth machine, and any output it provides should be verified, checked for references, and treated with due diligence. Yet the same goes for just about any internet search. I don't think some people not knowing how to use it warrants restricting its functionality for the rest of us.

segmondy 14 hours ago | parent | next [-]

Nah, this is to avoid litigation. Who needs lawsuits when you are seeking profit? One loss in a major lawsuit is horrible; there's the case of people suing them because their loved ones committed suicide after chatting with ChatGPT. They are doing everything to avoid getting dragged to court.

miltonlost 14 hours ago | parent | prev | next [-]

> I'm well aware that it's not a truth machine, and any output it provides should be verified, checked for references, and treated with due diligence.

You are, but that's not how AI is being marketed by OpenAI, Google, etc. Their ads never mention how much the output needs to be double- and triple-checked. They say "AI can do what you want! It knows all! It's smarter than PhDs!" Search engines don't present their results as the truth; that's exactly what LLM hypers do.

Zr01 13 hours ago | parent [-]

I appreciate how the newer versions provide more links and references. It makes the task of verifying the output (or at least seeing where it got its results from) that much easier. What you're describing sounds more like an advertising problem, not a product problem. No matter how many locks and restrictions they put on it, someone, somewhere, will still find a way to get hurt by its advice. A hammer that's hard enough to drive nails is hard enough to bruise your fingers.

gitremote 12 hours ago | parent [-]

> What you're describing sounds more like an advertising problem, not a product problem.

It's called "false advertising".

https://en.wikipedia.org/wiki/False_advertising

degamad 9 hours ago | parent [-]

Also known as "lying".

watwut 14 hours ago | parent | prev | next [-]

If they do that, they will be subject to regulations on medical devices. As they should be, and it means the end result will be less likely to promote complete crap than it is now.

scarmig 14 hours ago | parent [-]

And then users balk at the hefty fee and start getting their medical information from utopiacancercenter.com and the like.

fluidcruft 14 hours ago | parent | prev [-]

I think one of the challenges is attribution. For example, if you use Google search to create a fraudulent legal filing, there aren't any of Google's fingerprints on the document; it gets reported as malpractice. Whereas with these things, the reporting is that OpenAI or whatever AI is responsible. So even from the perspective of protecting a brand, it's unavoidable. Suppose (no idea if true) the Louvre robbers wore Nike shoes, and the reporting were that Nike shoes were used to rob the Louvre, and all anyone talked about was Nike and how people need to be careful about what they do while wearing Nike shoes.

It's like newsrooms took the advice that passive voice is bad form, so they inject OpenAI as the subject instead.

benrapscallion 13 hours ago | parent [-]

This (attribution) is exactly the issue that was mentioned by the LexisNexis CEO in a recent interview with The Verge.

https://www.theverge.com/podcast/807136/lexisnexis-ceo-sean-...