HarHarVeryFunny 2 hours ago

Yes, of course any company is responsible for what they ship, regardless of what tools were used to develop it.

However, at least in the US, it is usual for companies to advise against using their products in ways that may cause harm, and we certainly don't see that from the LLM vendors. We see them claim the tech is near human level, capable of replacing human software developers (a job that demands extreme responsibility), and we see them withhold models they say are dangerous (encouraging you to think that the ones they do release are safe).

Where are the warnings that "product may fail to follow instructions" and "may fail to follow safety instructions"? Where is the warning not to give the LLM agency and let it control anything where failure to follow instructions has financial, safety, or other consequences?

davidatbu 2 hours ago | parent

Well, off the top of my head, both chatgpt.com and Gemini have text on their home pages to the effect of "AI can make mistakes". I'll bet a few bucks such copy can be found in other places too, including the terms of service.

HarHarVeryFunny an hour ago | parent

Sure, but bear in mind that in the US a fridge comes with a warning not to stand on the open fridge door ...

"AI can make mistakes" is a bit quaint given that LLMs sometimes completely ignore what you say, and do the exact opposite. "Yes, I deleted the database. I shouldn't have done that since you explicitly told me not to. I won't do it again." (five minutes later: does it again).

I think the API terms of use are where this would be most needed, with something a lot more explicit about the potential danger than "AI can make mistakes". We are only at the beginning of this - agentic AI - and no doubt lawsuits will eventually determine the level of warnings that get included, and who is liable when failures occur despite the product being used as recommended.