blibble a day ago

> We’re navigating a tightrope as Superintelligence nears. If the West slows down unilaterally, China could dominate the 21st century.

I never understood this argument

as a non-USian: I'd prefer to be under the Chinese boot rather than having all of humanity under the boot of an AI

and it is certainly no reason to do everything we possibly can to summon a machine god

socalgal2 a day ago | parent | next [-]

> I'd rather be under the Chinese boot than having all of humanity under the boot of an AI

Those are not the options on offer. The options are the boot of a Western AI or the boot of a Chinese AI. Maybe you'd prefer the Chinese AI boot to the Western one?

> certainly no reason to try to increase the chance of summoning a machine god

The argument is that this is inevitable. If it's possible to make AGI someone will eventually do it. Does it matter who does it first? I don't know. Yes, making it happen faster might be bad. Waiting until someone else does it first might be worse.

blibble a day ago | parent | next [-]

> The options are under the boot of a Western AI or a Chinese AI. Maybe you'd prefer the Chinese AI boot to the Western AI boot?

given Elon's AI is already roleplaying as Hitler, and constructing scenarios on how to rape people, how much worse could the Chinese one be?

> The argument is that this is inevitable.

which is just stupid

we have the agency to simply stop

and certainly the agency to not try and do it as fast as we possibly can

mattnewton a day ago | parent | next [-]

> we have the agency to simply stop

This is worse than the prisoner's dilemma: the "we get there, they don't" outcome is the highest payout for the decision makers who believe they will control the resulting superintelligence.
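A minimal sketch of that payoff structure. The numbers here are purely illustrative assumptions, not from the thread; the point is only that if "we race, they pause" is the single highest payoff, racing is each side's best response no matter what the other does, so the cooperative pause never holds:

```python
# Illustrative (made-up) payoffs for the race dynamic described above.
# Two blocs each choose to "race" or "pause".
payoffs = {
    # (our move, their move): (our payoff, their payoff)
    ("race",  "pause"): (10, -5),   # unilateral win: highest individual payout
    ("pause", "race"):  (-5, 10),
    ("race",  "race"):  (-2, -2),   # shared accident risk
    ("pause", "pause"): (3, 3),     # cooperative outcome, but unstable
}

def best_response(their_move: str) -> str:
    """Return the move that maximizes our payoff, given theirs."""
    return max(("race", "pause"),
               key=lambda ours: payoffs[(ours, their_move)][0])

# Racing dominates regardless of the other side's choice,
# so "pause/pause" is not an equilibrium under these assumptions.
print(best_response("pause"))  # race
print(best_response("race"))   # race
```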

socalgal2 a day ago | parent | prev [-]

"We" do not, as you cannot control 8 billion people

blibble a day ago | parent [-]

it's certainly not that difficult to imagine international controls on fab/DC construction, enforced by the UN security council

there's even a previous example of controls of this sort at the nation state level: those for nuclear enrichment

(the cost to perform uranium enrichment is now less than building a state of the art fab...!)

as a nation state (not facebook): you're entitled to enrich, but only under the watchful eye of the IAEA

and if you violate, then the US tends to bunker-bust you

this paper has some ideas on how it might work: https://cdn.governance.ai/International_Governance_of_Civili...

hiAndrewQuinn a day ago | parent | prev | next [-]

If you financially penalize AI researchers, either with a large lump sum or in a way that scales with their expected future earnings (take your pick), and pay the proceeds to the people who put together the very cases that lead to the fines being levied, you can very effectively freeze AGI development.

If you don't think you can organize international cooperation around this, you can simply put such people on some equivalent of an FBI-style Most Wanted list and pay anyone who comes forward with information, or who gets them within your borders, as well. If a government chooses to wave its dick around like this, it could easily cause other nations to copy the same law, thus instilling a new global Nash equilibrium where this kind of scientific frontier research is verboten.

There's nothing inevitable at all about that. I hesitate to even call such a system extreme, because we already employ systems like this to intercept, e.g., high-level financial conspiracies via things like the False Claims Act.

socalgal2 a day ago | parent [-]

In my world there are multiple countries who each have an incentive to win this race. I know of no world where you can penalize AI researchers across international boundaries, nor any reason to believe your scenario could ever play out. You're dreaming if you think you could actually get all the players to cooperate on this. It's like expecting the world to come together on climate change. It's not happening and it's not going to happen.

Further, it doesn't take a huge lab to do it. You can do it at home. It might take longer, but there's a 1.4 kg blob in everyone's head as proof of concept, and it does not take a data center.

blibble a day ago | parent | next [-]

> I know of no world where you can penalize AI researchers across international boundaries, nor any reason to believe your scenario could ever play out.

mossad could certainly do it

hiAndrewQuinn 16 hours ago | parent | prev [-]

>I know of no world where you can penalize AI researchers across international boundaries, nor any reason to believe your scenario could ever play out.

This world where wars and stuff happen? You can just use actual force, you know.

>You're dreaming if you think you could actually get all the players to co-operate on this.

Let me play it out more explicitly since you lack the eyes to see. Say China implements such a rule. Top researchers in the US start mysteriously turning up in China and getting turned in for the cash fine. They aren't allowed to leave China until they empty their bank accounts, because they have broken a very serious Chinese law.

Now the US has to seriously ask itself whether it's actually willing to unleash World War 3 for the sake of a couple thousand eggheads. Not gonna happen.

Then someone takes a poll and finds this actually has mass support _in_ the US, because the average person is terrified of where AI is going. If you can't beat 'em, join 'em. The US implements its own law similar to the one in China so that this madness can stop. All of the top researchers bitch and moan and move to Portugal.

Top researchers in Portugal start mysteriously turning up in the US... So on and so forth. Rinse and repeat until everyone is on board with the new Nash equilibrium.

>[I]t doesn't take a huge lab to do it. You can do it at home. It might take longer, but there's a 1.4 kg blob in everyone's head as proof of concept.

You are thinking so unbelievably small. So, so unbelievably small.

You decide you want to push the state of AI forward and create an enormous new model. Okay, you're going to need to buy or source a bunch of GPUs. You try to order a bunch of GPUs from Nvidia. Nvidia flags you as a high-risk client, turns on the tracking firmware, gathers info on your training runs, hires some people in your local area to peek through your curtains to confirm, and turns you in themselves. Nvidia walks away with roughly half; the other half goes to its contractors. You are out of the game.

Alright, so maybe you need to buy a bunch of GPUs on the secondhand market to get started. And maybe you need to source them one or two at a time from several dozen, or hundreds of different players to get away with it. A few of these players take an interest in watching you, notice you're buying GPUs from several secondhand sources, realize this would be extremely weird if you weren't planning on doing something illegal with them, and put together a small posse to turn you in. Your life savings are emptied out and your condo repossessed. You are out of the game.

> It might take longer

Thousands of years longer by my reckoning, at minimum. It's very hard to design an AGI entirely inside your own head when the outside world is so inherently adversarial.

MangoToupe a day ago | parent | prev [-]

> The options are under the boot of a Western AI or a Chinese AI.

This seems more like fear-mongering than based on any kind of reasoning I've been able to follow. China tends to keep control of its industry, unlike the US, where industry tends to control the state. I emphatically trust the Chinese state more than our own industry.
