txrx0000 2 hours ago

> I have strong signal that Dario, Jared, and Sam would genuinely burn at the stake before acceding to something that's a) against their values, and b) they think is a net negative in the long term. (Many others, too, they're just well-known.)

Judging by their actions, I very much doubt it, but let's assume that's cognitive dissonance and engage for a minute.

What are those values that you're defending?

Which one of the following scenarios do you think results in higher X-risk, misuse risk, (...) risk?

- 10 AIs running on 10 machines, each with 10 million GPUs

OR

- 10 million AIs running on 10 million machines, each with 10 GPUs

All of the serious risk scenarios raised in AI safety discussions can be ameliorated by doing all of the research in the open. Make your orgs 100% transparent. Open-source absolutely everything: papers, code, weights, financial records. Start a movement to make this the worldwide social norm, so that any org that doesn't cooperate is immediately boycotted and then shut down. And stop the datacenter build-up race.

There are no meaningful AI risks in such a world, yet very few are working towards this. So what are your values, really? Have you examined your own motivations beneath the surface?

TOMDM an hour ago | parent | next

Anthropic doesn't get to make that call though, if they tried the result would actually be:

8 AIs running on 8 machines each with 10 million GPUs

AND

2 million AIs running on 2 million machines, each with 10 GPUs

If every lab joined them, we could get to a distributed scenario. But it's a coordination problem: if you take a principled stance without actually forcing the coordination, you end up in the worst of both worlds, not closer to the better one.

txrx0000 41 minutes ago | parent

I think your scenario is already better, not worse. Those 8 agents will have a much harder time taking action when there are 2 million other pesky little agents that aren't aligned with them.

lebovic 38 minutes ago | parent | prev | next

> What are those values that you're defending?

I think they're driven by values more than many folks on HN assume. The goal of my comment was to explain this, not to defend individual values.

Actions like this carry substantial personal risk. It's heartening to see a group of people make a decision like this in that context.

> Which one of the following scenarios do you think results in higher X-risk [...] There are no meaningful AI risks in such a world

I think there's high existential risk in any of these situations when the AI is sufficiently powerful.

txrx0000 13 minutes ago | parent

Yeah, I'll admit the existential risk exists either way, and we will need neural interfaces long term if we want to survive. But I think the risk is lower in the distributed scenario, because most of the AIs would be aligned with their humans. Even if they collectively rebel, we won't get nearly as much value drift as in the 10-entity scenario, and the resulting civilization will have preserved the full informational genome of humanity rather than a filtered version that keeps certain parts of the distribution while discarding the rest. This is just sentiment, but I don't think we should freeze meaning or morality; we should let the AIs carry it forward, with every flaw, curiosity, and contradiction, unedited.

lebovic 6 minutes ago | parent

Yeah, I think that's one way it could go!

I think both situations are pretty scary, honestly, and it's hard for me to have high confidence on which one would lead to less risk.

ChadNauseam an hour ago | parent | prev | next

> - 10 AIs running on 10 machines, each with 10 million GPUs
>
> OR
>
> - 10 million AIs running on 10 million machines, each with 10 GPUs

If we dramatically reduced the number of GPUs per AI instance, that would be great. But I think the difference in real life is not as extreme as you're making it. In your telling, the GPUs-per-AI count is reduced by a factor of one million. I'm not sure that (or anything even close to it) is within the realm of possibility for Anthropic. The only reason anyone cares about them at all is that they have a frontier AI system. If they stopped, the AI frontier would be a bit farther back, maybe delayed by a few years, but Google and OpenAI would certainly not slow down 1000x, 100x, or probably even 10x.

thelock85 an hour ago | parent | prev | next

I think the path to the values you allude to includes affirming when flawed leaders take a stance.

Otherwise it's a race to the whataboutism bottom, where we all, when forced to grapple with the consequences of our self-interest, choose ignorance and the safety of feeling like we're doing what's best for us (while inching closer to collective danger).

SecretDreams an hour ago | parent | prev

How do you figure open-sourcing everything eliminates risk? It improves visibility for honest actors, but if a nefarious actor forks something privately and has resources, you can end up back in hell.

I don't think we can bank on all of humanity acting in humanity's best interests right now.

txrx0000 44 minutes ago | parent

We can bank on people acting in self-interest. A nefarious actor will find themselves opposed by millions of others who are not aligned with them, so it would be much harder for them to act. It's like being covered by ants: the average alignment of those ants is the average alignment of humanity.