ben_w 17 hours ago

There are a few levels to this:

• Because it's software, any given model can easily be ordered nationalised, or whatever.

• Everyone quickly copying OpenAI (and, more recently, DeepSeek) showed that once people know what kinds of things actually work, they're not too hard to replicate.

• We've only got a handful of ideas about how to align* AI with any specific goal or value, and plenty of ways it can go wrong. So even if every model were put into public ownership, it's not going to help, not yet.

That said, if the goal is to give everyone access to an AI that demands 375 W/capita 24/7, that means the new servers roughly double global demand for electricity, with all that entails.
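A quick back-of-envelope sketch (in Python) of where that "double" comes from; the ~8 billion population and ~27,000 TWh/yr global generation figures are my own rough assumptions, not from the comment:

    # Rough figures, assumed for illustration:
    watts_per_capita = 375            # the load quoted above, per person, 24/7
    population = 8e9                  # ~8 billion people
    generation_twh_per_year = 27_000  # ~current global electricity generation

    ai_load_tw = watts_per_capita * population / 1e12  # 3.0 TW
    current_avg_tw = generation_twh_per_year / 8766    # TWh/yr / h/yr = ~3.1 TW
    print(f"AI load ~{ai_load_tw:.1f} TW vs ~{current_avg_tw:.1f} TW today")
    # ~3.0 TW on top of ~3.1 TW: roughly doubling global electricity demand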

* Last I heard (a while back now, so this may have changed): given two models, there isn't even a way to rank them as more or less aligned with anything. Despite all the active research in this area, we're all just vibing alignment, corporate interests included.

ijk 7 hours ago

Public control over AI models is a distinct thing from everyone having access to an AI server (not that national AI would need a 1:1 ratio of servers to people, either).

It's pretty obvious that the play right now is to lock down the AI as much as possible and use that to facilitate control over every system it gets integrated with. Right now there are too many active players to shut out random developers, but there's an ongoing trend of companies backing away from releasing open-weight models.

ben_w 6 hours ago

> It's pretty obvious that the play right now is to lock down the AI as much as possible and use that to facilitate control over every system it gets integrated with. Right now there are too many active players to shut out random developers, but there's an ongoing trend of companies backing away from releasing open-weight models.

More the opposite, despite the obvious incentive to do as you say in order to have any hope of a return on investment. OpenAI *tried* to make that a trend with GPT-2, on the grounds that it's irresponsible to hand out a power tool when nobody even knows what "safety tests" would mean in that context, but lots of people mocked them for it, and it looks like only they and Anthropic take such risks seriously. Or possibly just Anthropic, depending on how cynical you are about Altman.