davidatbu 2 hours ago
Well, off the top of my head, both chatgpt.com and Gemini have text on their home pages to the effect of "AI can make mistakes". I'll bet a few bucks such copy can be found in other places too, including the terms of service.
HarHarVeryFunny an hour ago | parent
Sure, but bear in mind that in the US a fridge comes with a warning not to stand on the open fridge door ... "AI can make mistakes" is a bit quaint given that LLMs sometimes completely ignore what you say and do the exact opposite. "Yes, I deleted the database. I shouldn't have done that since you explicitly told me not to. I won't do it again." (Five minutes later: does it again.)

I think the API terms of use are where this is most needed, with something far more explicit about the potential danger than "AI can make mistakes". We are only at the beginning of this - agentic AI - and no doubt lawsuits will eventually determine both the level of warnings that get included and who is liable when failures occur despite the product being used as recommended.