HarHarVeryFunny 2 hours ago
Yes, of course any company is responsible for what it ships, regardless of what tools were used to develop it. However, at least in the US, it is common for companies to advise against using their products in ways that may cause harm, and we certainly don't see that from the LLM vendors. We see them claim the tech is near human level and capable of replacing human software developers (a job that requires extreme responsibility), and we see them withholding models that they say are dangerous (encouraging you to think the ones they do release are safe). Where are the warnings that the "product may fail to follow instructions" and "may fail to follow safety instructions"? Where is the warning not to give the LLM agency, not to let it control anything where failure to follow instructions has financial, safety, or other consequences?
davidatbu 2 hours ago | parent
Well, off the top of my head, both chatgpt.com and Gemini have text on their home pages to the effect of "AI can make mistakes". I'll bet a few bucks such copy can be found in other places too, including the terms of service.
| ||||||||