| ▲ | Pedro_Ribeiro 6 days ago |
| Curious what you would think if this kid downloaded an open source model and talked to it privately. Would his blood be on the hands of the researchers who trained that model? |
|
| ▲ | idle_zealot 6 days ago | parent | next [-] |
| Then it's like cigarettes or firearms. As a distributor you're responsible for making clear the limitations, safety issues, etc., but assuming you're doing the distribution in a way that isn't overly negligent, the user becomes responsible. If we were facing a reality in which these chat bots were being sold for $10 in the App Store, running on end-user devices and no longer under the control of the distributors, but we still had an issue with loads of them prompting users into suicide, violence, or misleading them into preparing noxious mixtures of cleaning supplies, then we could have a discussion about exactly what extreme packaging requirements ought to be in place for distribution to be considered responsible. As is, distributed on-device models are the purview of researchers and hobbyists and don't seem to be doing any harm at all. |
| |
| ▲ | Pedro_Ribeiro 6 days ago | parent | next [-] | | Mhm, but I don't believe inherently violent and dangerous things like guns and cigarettes are comparable to simple technology. Should the creators of Tornado Cash be in prison for what they have enabled? You can jail them, but the world can't go back, just like it can't go back once a new OSS model is released. It is also much easier to crack down on illegal gun distribution than to figure out who uploaded the new model torrent or who deployed the latest zk innovation on Ethereum. I don't think your hypothetical law will have the effects you think it will. --- I also referenced this in another reply, but I believe the government controlling what can go into a publicly distributed AI model is a dangerous path and probably unconstitutional. | |
| ▲ | rsynnott 6 days ago | parent | prev [-] | | > but we still had an issue with loads of them prompting users into suicide, violence, or misleading them into preparing noxious mixtures of cleaning supplies, then we could have a discussion about exactly what extreme packaging requirements ought to be in place for distribution to be considered responsible. Or, I mean, just banning sale on the basis that they're unsafe devices and unfit for purpose. Like, you can't sell, say, a gas boiler that is known to, due to a design flaw, leak CO into the room; sticking a "this will probably kill you" warning on it is not going to be sufficient. | | |
| ▲ | idle_zealot 6 days ago | parent [-] | | In that extreme case the "packaging requirements" would be labeling the thing not as a boiler, but as dangerous explosive scrap. | | |
| ▲ | rsynnott 6 days ago | parent [-] | | I suspect in many places you couldn’t sell what would essentially be a device for poisoning people to consumers _at all_. |
|
| ▲ | hattmall 6 days ago | parent | prev | next [-] |
| I would say no. Someone with the knowledge and motivation to do those things is far less likely to be overly influenced by the output, and if they were, they would be much more aware of exactly what they were doing in using the model. |
| |
| ▲ | Pedro_Ribeiro 6 days ago | parent [-] | | So a hypothetical open source enthusiast who fell in love with GPT-OSS and killed his real wife because the AI told him to should be held accountable alone, whereas if it were GPT-5 commanding him to commit the same crime, responsibility would extend to OpenAI? Your logic sounds reasonable in theory, but in practice it's a slippery slope and hard to define objectively. On a broader note, I believe governments regulating what goes in an AI model is a path to hell paved with good intentions. I suspect your suggestion is how it will end up in Europe and be rejected in the US. | | |
| ▲ | novok 6 days ago | parent | next [-] | | After a certain point, people are responsible for what they do when they see certain words, especially words they know to be potentially inaccurate or fictional and have had plenty of time to weigh against actual reality. A book is not responsible for people doing bad things; they are responsible themselves. AI models are similar IMO, and unlike fiction books they are often clearly labeled as such, repeatedly. At this point if you don't know if an AI model is inaccurate and do something seriously bad, you should probably be a ward of the state. | | |
| ▲ | idle_zealot 6 days ago | parent [-] | | > At this point if you don't know if an AI model is inaccurate and do something seriously bad, you should probably be a ward of the state. You either think too highly of people, or too lowly of them. In any case, you're advocating for interning about 100 million individuals. | | |
| |
| ▲ | teiferer 6 days ago | parent | prev | next [-] | | > On a broader note I believe governments regulating what goes in an AI model is a path to hell paved with good intentions. That's not an obvious conclusion. One could make the same argument with physical weapons. "Regulating weapons is a path to hell paved with good intentions. Yesterday it was assault rifles, today it's hand guns and tomorrow it's your kitchen knife they are coming for." Europe has strict laws on guns, but everybody has a kitchen knife and lots of people there don't feel they live in hell. The U.S. made a different choice, and I'm not arguing that it's worse there (though many do, Europeans and even Americans), but it's certainly not preventing a supposed hell that would have broken out had guns in private hands been banned. | |
| ▲ | 1718627440 6 days ago | parent | prev [-] | | That's why you have this:

    This program is distributed in the hope that it will be useful,
    but WITHOUT ANY WARRANTY; without even the implied warranty of
    MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE.

If it were declared fit for a purpose, then it would be on the producer to ensure it actually is. We have laws to prevent anyone from declaring their goods aren't fit for a particular purpose. |
|
| ▲ | harmonic18374 6 days ago | parent | prev | next [-] |
| I'm not sure, but there is a difference: the researchers don't have much incentive to get everyone to use their model. As such, they're not really the ones hyping up AI as the future while ignoring shortcomings. |
|
| ▲ | rideontime 6 days ago | parent | prev | next [-] |
| I specifically blame Sam Altman because of the allegations in the complaint that he ordered safety checks to be skipped in order to rush this model to market, specific safety checks that were later demonstrated to identify and prevent precisely this behavior. |
|
| ▲ | salawat 6 days ago | parent | prev [-] |
| You build the tool, you're ultimately culpable. I've made it a rule in my life to conduct myself as if I will be held to account for everything I ultimately build, and its externalities. Helps keep my nose cleaner. Still managed to work on some things that keep me up at night, though. |
| |
| ▲ | brabel 6 days ago | parent | next [-] | | That’s absolutely not how it works. Every license has a clause explicitly saying that the user is responsible for what they do with the tool. That’s just common sense. If it were the way you suggest, no one would create tools for others anymore. If you buy the screwdriver I sold and kill someone with it, my conscience is sure as hell clean. In the ChatGPT case it’s different because the “tool” has the capacity to interact with and potentially manipulate people psychologically, which is the only reason it’s not a clear cut case. |
| ▲ | Pedro_Ribeiro 6 days ago | parent | prev | next [-] | | That's a slippery slope! By that logic, you could argue that the creators of Tor, torrenting, Ethereum, and Tornado Cash should be held accountable for the countless vile crimes committed using their technology. | | |
| ▲ | RyanHamilton 6 days ago | parent [-] | | Legally, I think not being responsible is the right decision. Morally, I would hope everyone considers whether they themselves are even partially responsible. As I look around at young people today, at the tablet-holding, media-consuming youth that programmers have created in order to get rich via advertising, I wish morals were considered more often. | |
| ▲ | salawat 6 days ago | parent [-] | | This, right here, is why I take the stance I do. Too many ethical blank checks ultimately get written if you don't keep the moral stain in place. If you make a surveillance tool, release it to a world that didn't have that capacity, and a dictator picks it up and rolls with it, that license of yours may absolve you in a court of law, but in the eyes of Root, you birthed it. You made the problem tractable. Not all problems were meant to be so. I used to not care about this as much; the last decade, though, has sharply changed my views. It may very well be a lesson only learned with sufficient time and experience. I made my choice, though. There are things I will not make, or make easier. I will not be complicit in knowingly forging the bindings of the future. Maybe if we mature as a society someday, but that day is not today. |
|
| |
| ▲ | novok 6 days ago | parent | prev [-] | | So if you build a chair and then someone uses it to murder someone, are you responsible for the murder? | | |
| ▲ | salawat 4 days ago | parent | next [-] | | I am not willing to grant a software/AI dev the same level of exculpation, on the basis of reasonable foreseeability, that I grant a carpenter making a chair. The history of our trade has found great joy in using things in ways they were never intended, which, much more than for a carpenter, puts a burden on us to take the propensity and possibility of abuse into account. So no. Mr. Altman, and the people who made this chair, are in part responsible. You aren't a carpenter. You had a responsibility to the public to constrain this thing and to be as far ahead of it as humanly possible. The AI-as-therapist startups I've seen in the past couple of years, even just as passion projects from juniors I've trained, have all been met with the same guiding wisdom: go no further. You are critically out of your depth, and you are creating a clear and evident danger to the public that you are not yet equipped to mitigate. If I can get there, it's pretty damn obvious. |
| ▲ | mothballed 6 days ago | parent | prev [-] | | You're not even responsible if you build an AR-15, complete with a bunch of free advertisements from the US Army using the select-fire variant to slaughter innocent Iraqis, and it's used to murder someone. The government will require them to add age controls, and that will be that. |