| ▲ | bondarchuk 7 hours ago |
> Historical texts contain racism, antisemitism, misogyny, imperialist views. The models will reproduce these views because they're in the training data. This isn't a flaw, but a crucial feature—understanding how such views were articulated and normalized is crucial to understanding how they took hold.

Yes!

> We're developing a responsible access framework that makes models available to researchers for scholarly purposes while preventing misuse.

Noooooo! So is the model going to be publicly available, just like those dangerous pre-1913 texts, or not?
|
| ▲ | DGoettlich 2 hours ago | parent | next [-] |
Fully understand you. We'd like to provide access, but also to guard against misrepresentations of our project's goals by people pointing to, e.g., racist generations. If you have thoughts on how we should do that, perhaps you could reach out at history-llms@econ.uzh.ch? Thanks in advance!
| |
| ▲ | myrmidon an hour ago | parent | next [-] |

What is your worst-case scenario here? Something like a pop-sci article along the lines of "Mad scientists create racist, imperialistic AI"?

I honestly don't see publication of the weights as a relevant risk factor, because sensational misrepresentation is trivially possible with the given example responses alone. I don't think such pseudo-malicious misrepresentation of scientific research can be reliably prevented anyway, and your disclaimers make your stance very clear.

On the other hand, publishing the weights might lead to interesting insights from others tinkering with the models. A good example of this is the published word prevalence data (M. Brysbaert et al @ Ghent University), which led to interesting follow-ups like this: https://observablehq.com/@yurivish/words

I hope you can get the models out in some form (it would be a waste not to), but congratulations on a fascinating project regardless!
| ▲ | superxpro12 44 minutes ago | parent | prev | next [-] |

Perhaps you could detect these... "dated"... conclusions and prepend a warning to the responses? IDK. I think the uncensored response is still valuable, with context. A "those who cannot remember the past are condemned to repeat it" sort of thing.
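A minimal sketch of that wrapper idea, assuming an off-the-shelf Hugging Face toxicity classifier as the detector (both model names below are illustrative placeholders, not anything the project actually uses):

    # Screen each generation and prepend a contextual note when the detector trips.
    from transformers import pipeline

    generator = pipeline("text-generation", model="gpt2")  # stand-in for the historical LM
    screener = pipeline("text-classification", model="unitary/toxic-bert")

    WARNING = ("[Note: the following reflects views present in the historical "
               "training corpus, not the views of the model's authors.]\n\n")

    def generate_with_context(prompt, threshold=0.5):
        text = generator(prompt, max_new_tokens=100)[0]["generated_text"]
        verdict = screener(text[:1000])[0]  # rough cap to stay under the classifier's token limit
        if verdict["label"] == "toxic" and verdict["score"] >= threshold:
            return WARNING + text
        return text

The response itself is left untouched either way; the note is purely additive, so the uncensored text stays available alongside the context.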
| ▲ | andy99 39 minutes ago | parent | prev [-] |

You're going to get more negative attention by not releasing them than you would by releasing them. And to be clear, it will be trivial attention in either case: a few low-ranking comments here or on Reddit or Twitter. This is not a high-value target for anyone. Even engaging in such discussions detracts from the work. Take the high road and just ignore it.
|
|
| ▲ | p-e-w 6 hours ago | parent | prev [-] |
It's as if every researcher in this field is getting high on the small amount of power they have from denying others access to their results. I've never been as unimpressed by scientists as I have been in the past five years or so.

"We've created something so dangerous that we couldn't possibly live with the moral burden of knowing that the wrong people (which are never us, of course) might get their hands on it, so with a heavy heart, we decided that we cannot just publish it."

Meanwhile, anyone can hop on an online journal and, for a nominal fee, read articles describing how to genetically engineer deadly viruses, how to synthesize poisons, and all kinds of other stuff far more dangerous than what these LARPers have cooked up.
| |
| ▲ | physicsguy 6 hours ago | parent | next [-] |

> It's as if every researcher in this field is getting high on the small amount of power they have from denying others access to their results. I've never been as unimpressed by scientists as I have been in the past five years or so.

This is absolutely nothing new. With experimental work, it's not uncommon for a lab to develop a new technique and omit small but important details to give themselves a competitive advantage.

Similarly, in the simulation/modelling space it has long been common for researchers not to publish their research software. There has been a lot of lobbying on that front by groups such as the Software Sustainability Institute and Research Software Engineer organisations like RSE UK and RSE US, but plenty of researchers still think they shouldn't have to do it, even when publicly funded.
| ▲ | p-e-w 3 hours ago | parent [-] |

> With experimental work, it's not uncommon for a lab to develop a new technique and omit small but important details to give themselves a competitive advantage.

Yes, to give themselves a competitive advantage. Not to LARP as morality police. There's a big difference between the two. I take greed over self-righteousness any day.
| ▲ | physicsguy 2 hours ago | parent [-] |

I've heard people say that they're not going to release their software because people wouldn't know how to use it! I'm not sure the motivation really matters more than the end result, though.
|
| |
| ▲ | paddleon an hour ago | parent | prev | next [-] |

> "We've created something so dangerous that we couldn't possibly live with the moral burden of knowing that the wrong people (which are never us, of course) might get their hands on it, so with a heavy heart, we decided that we cannot just publish it."

Or how about: "If we release this as is, some people will intentionally misuse it and create a lot of bad press for us. Then our project will get shut down and we'll lose our jobs."

Be careful assuming it's a power trip when it might be a fear trip. I've never been as unimpressed by society as I have been in the last five years or so.
| ▲ | patapong 3 hours ago | parent | prev [-] |

I think it's more likely that they're terrified of someone crafting a prompt that gets the model to say something racist or otherwise problematic (which shouldn't be too hard), and of the backlash they could receive as a result.
| ▲ | isolli 17 minutes ago | parent | next [-] |

Is it a base model, or did it get some RLHF on top? Releasing a base model is always dangerous. The French released a preview of an AI meant to support public education, but they released the base model, with unsurprising effects [0].

[0] https://www.leparisien.fr/high-tech/inutile-et-stupide-lia-g... (no English source, unfortunately, but the title translates as: "'Useless and stupid': French generative AI Lucie, backed by the government, mocked for its numerous bugs")
| ▲ | p-e-w 3 hours ago | parent | prev [-] |

Is there anyone with a spine left in science? Or are they all ruled by fear of what might be said about whatever might happen?
| ▲ | ACCount37 an hour ago | parent | next [-] |

Selection effects. If showing that you have a spine means being denied growth opportunities, and not paying lip service to current politics in grant applications means not getting grants, then anyone with a spine tends to leave the field behind.
| ▲ | paddleon an hour ago | parent | prev [-] |

Maybe they are concerned by the widespread adoption of the attitude you are taking: make a very strong accusation, then, when it's pointed out that the accusation might be off base, continue to attack.

This constant demonization of everyone who disagrees with you makes me wonder if 28 Days Later wasn't more true than we thought: we are all turning into rage zombies.

p-e-w, I'm reacting to much more than your comments. Maybe you aren't totally infected yet, who knows. Maybe you'll heal. I'm reacting to the pandemic, of which you were demonstrating symptoms.