txrx0000 2 hours ago
> I have strong signal that Dario, Jared, and Sam would genuinely burn at the stake before acceding to something that's a) against their values, and b) they think is a net negative in the long term. (Many others, too, they're just well-known.)

I very much doubt it judging by their actions, but let's assume that's cognitive dissonance and engage for a minute. What are those values that you're defending?

Which one of the following scenarios do you think results in higher X-risk, misuse risk, (...) risk?

- 10 AIs running on 10 machines, each with 10 million GPUs

OR

- 10 million AIs running on 10 million machines, each with 10 GPUs

All of the serious risk scenarios brought up in AI safety discussions can be ameliorated by doing all of the research in the open. Make your orgs 100% transparent. Open-source absolutely everything: papers, code, weights, financial records. Start a movement to make this the worldwide social norm, where any org that doesn't cooperate is immediately boycotted and then shut down. And stop the datacenter build-up race.

There are no meaningful AI risks in such a world, yet very few are working towards this. So what are your values, really? Have you examined your own motivations beneath the surface?
TOMDM an hour ago
Anthropic doesn't get to make that call, though. If they tried, the result would actually be:

8 AIs running on 8 machines, each with 10 million GPUs, AND 2 million AIs running on 2 million machines, each with 10 GPUs

If every lab joined them, we could get to a distributed scenario. But it's a coordination problem: if you take a principled stance without actually forcing the coordination, you end up in the worst of both worlds, not closer to the better one.
lebovic 38 minutes ago
> What are those values that you're defending?

I think they're driven by values more than many folks on HN assume. The goal of my comment was to explain this, not to defend individual values.

Actions like this carry substantial personal risk. It's heartening to see a group of people make a decision like this in that context.

> Which one of the following scenarios do you think results in higher X-risk [...] There are no meaningful AI risks in such a world

I think there's high existential risk in any of these scenarios once the AI is sufficiently powerful.
ChadNauseam an hour ago
> - 10 AIs running on 10 machines, each with 10 million GPUs
>
> OR
>
> - 10 million AIs running on 10 million machines, each with 10 GPUs

If we dramatically reduced the number of GPUs per AI instance, that would be great. But I think the difference in real life is not as extreme as you're making it out to be. In your telling, the GPUs-per-AI count is reduced by a factor of one million. I'm not sure that (or anything even close to it) is within the realm of possibility for Anthropic. The only reason anyone cares about them at all is that they have a frontier AI system. If they stopped, the AI frontier would sit a bit farther back, maybe delayed by a few years, but Google and OpenAI would certainly not slow down 1000x, 100x, or probably even 10x.
thelock85 an hour ago
I think the path to the values you allude to includes affirming flawed leaders when they take a stance. Otherwise it's a race to the whataboutism bottom, where we all, when forced to grapple with the consequences of our self-interest, choose ignorance and the safety of feeling like we're doing what's best for ourselves (while inching closer to collective danger).
SecretDreams an hour ago
How do you figure open-sourcing everything eliminates risk? It makes visibility better for honest actors, but if a nefarious actor forks something privately and has resources, you can end up back in hell.

I don't think we can bank on all of humanity acting in humanity's best interests right now.