miltonlost 14 hours ago

> I'm well-aware that it's not a truth machine, and any output it provides should be verified, checked for references, and treated with due diligence.

You are, but that's not how AI is being marketed by OpenAI, Google, etc. They never mention, in their ads, how much the output needs to be double- and triple-checked. They say "AI can do what you want! It knows all! It's smarter than PhDs!". Search engines don't claim "this is the truth" about their results, but that's exactly what LLM hypers do.

Zr01 13 hours ago | parent [-]

I appreciate how the newer versions provide more links and references. That makes it much easier to verify the output (or at least to see where it got its results from). What you're describing seems more like an advertising problem, not a product problem. No matter how many locks and restrictions they put on it, someone, somewhere, will still find a way to get hurt by its advice. A hammer that's hard enough to drive nails is hard enough to bruise your fingers.

gitremote 12 hours ago | parent [-]

> What you're describing seems more like an advertising problem, not a product problem.

It's called "false advertising".

https://en.wikipedia.org/wiki/False_advertising

degamad 9 hours ago | parent [-]

Also known as "lying".