narrator a day ago

I read political books from the 70s and everything is the same. It's been the same since the mid-60s. That's when the Western narrative shifted from technological progress to environmentalism, and the lawyer took over from the engineer as the prime mover in society. That was 60 years ago. 60 years before that was 1900, and that world was vastly different from the world of the 1960s. 60 years before that was 1840, and the world was vastly different again.

I'm thinking that AI, robots, and the rise of China are going to change things radically. Human labor will not be an economic constraint, but that won't lead to unlimited abundance because the constraint will be externalities.

Most of the technologically unemployed will wake up and do whatever AI tells them to do on a daily basis. Their lives will improve because AI is naturally better than they are at everything. This will lead to some weird outcomes, especially if AI is acting not in the interest of each individual but in the interests of the collective. That will force AI to solve trolley problems.

mbgerring a day ago | parent | next [-]

I work in the energy industry. Any future containing widespread use of AI will require an electrical infrastructure upgrade on the scale of the initial deployment of electricity and phone lines. The intersection of AI and the electrical grid is a hard, as-yet-unsolved problem. Either way, power infrastructure will drive our destiny as much as, or more than, AI.

derriz a day ago | parent | next [-]

Why? The transmission system exists to span distance, bringing energy from large producers to large concentrations of consumers. Traditionally, power consumption was concentrated in and around cities, while generation happened away from them.

Datacenters require relatively few people to operate, so they do not need to be located in or near population centers. Sites are chosen on this basis: DCs are sited close to generation or to significant transmission system nodes.

taink a day ago | parent | prev | next [-]

Even if we did achieve such an upgrade, we would still have to secure the rare earths required for electronics manufacturing. Extracting and processing these resources is becoming more and more complex, especially when you consider that we would need them not only to sustain our current infrastructure but also to improve it.

judahmeek a day ago | parent | prev [-]

So basically China has the infrastructure & raw materials for properly utilizing AI & America doesn't.

I really wonder if America will wake up when China crushes us under its feet. I kind of doubt it.

We beat the USSR because its style of government was absolutely terrible. China's form of authoritarianism has proven far more adaptable, and America's governance is showing signs of sliding towards corrupt authoritarianism as well. If both forms of government suck from an idealistic perspective, then China's manufacturing, rare earth metals, growing naval capacity, experience in stealing IP, and energy infrastructure seem to give it the advantage.

The only thing I think America has going for it right now is possibly control of space through SpaceX.

0x696C6961 a day ago | parent | prev | next [-]

This narrative implies a benevolent AI. That is a naive assumption.

narrator a day ago | parent | next [-]

Even a benevolent AI acting for the benefit of a collective will have to choose which individuals suffer when suffering by some members of the collective becomes unavoidable.

eru a day ago | parent | next [-]

Maybe. But a sufficiently smart benevolent AI will avoid getting into such a hopeless situation in the first place.

Just like parents in rich countries don't constantly have to decide which of their kids should go hungry: they make sure ahead of time to buy enough food to feed every family member.

throw10920 a day ago | parent | prev [-]

When would "suffering by some members of the collective becomes unavoidable" actually happen?

SilverSlash a day ago | parent | prev | next [-]

The human 'benevolence factor' has gone up throughout history as we've advanced and become more civilized. If AI is even more advanced than us, why is it naive to assume it will also be more benevolent than us?

strgcmc a day ago | parent | next [-]

The most apt way I've read to reason about AI is to treat it like an extremely foreign, totally alien form of intelligence. Not that the models of today necessarily behave like this, but we're talking about the future, aren't we?

Just framing your question against a backdrop of "human benevolence", and implying that this is a single dimension (a scalar value that could be higher or lower), is already too biased. You assume that logic which applies to humans can be extrapolated to AI. There is not much basis for this assumption, in much the same way that there is not much basis to assume an alien sentient gas cloud from Andromeda would operate on the same morals or concept of benevolence as us.

0x696C6961 a day ago | parent | prev [-]

Humans are still in direct control of the training/alignment.

wood_spirit a day ago | parent [-]

A handful of billionaires are in direct control of the West's training/alignment. Then there are some sheikhs in the Middle East and the Communist Party in China…

This is a tangent, but I personally dream of the EU mounting a university-led effort to make a benign AI, because it is the last crumbling bastion of liberal democracy.

dmje a day ago | parent | next [-]

Not sure that benignity or alignment is that easy. As authors have frequently pointed out, I have a very benign attitude towards ants: I don't step on them if I can help it, and I don't maliciously go out and pour boiling water on them. But if I'm building a house or working in my garden, I'm likely gonna kill tens of thousands of them. The same applies to AGI: if we're just ants, we're gonna get squashed.

anonzzzies a day ago | parent | prev | next [-]

If an AI can live-learn (like we do at night, fine-tuning our neural net weights), which we need to get anywhere from here (no one knows how yet), there is nothing currently that can make alignment stick; humans drop out of alignment all the time, whether for self-preservation or just 'everyone does it, so...'.

Ray20 20 hours ago | parent | prev [-]

At the moment, the US looks much more democratic and liberal than the EU.

oezi 19 hours ago | parent [-]

From the outside, the US has shifted to an oligarchy where money buys elections. Europe's democracies are certainly straining, primarily because their news companies have been marginalized by Google and Facebook (and now TikTok), which have extracted most of the ad revenue on which news depended.

ctoth 13 hours ago | parent [-]

> From the outside, the US has shifted to an oligarchy where money buys elections.

The data simply doesn't support that narrative.

Looking at the last 4 presidential elections:

2024: Trump won, Harris outspent him ($1.9B vs $1.6B)

2020: Biden won, Biden outspent Trump ($1.06B vs $785M)

2016: Trump won, Clinton outspent him ($614M vs $368M)

2012: Obama won, Obama outspent Romney (~$1.1B vs ~$1B, essentially tied)

The higher spender won twice and lost twice. 2016 is particularly striking - Clinton outspent Trump by roughly $200-450 million depending on how you count it, yet lost.
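
A quick sanity check on those figures (spending numbers as quoted above, in millions of USD; a minimal Python sketch, not a rigorous study):

    # (year, winner's spend, loser's spend), figures as quoted above
    races = [
        (2024, 1600, 1900),  # Trump beat Harris
        (2020, 1060, 785),   # Biden beat Trump
        (2016, 368, 614),    # Trump beat Clinton
        (2012, 1100, 1000),  # Obama beat Romney
    ]

    higher_spender_won = sum(1 for _, w, l in races if w > l)
    print(f"Higher spender won {higher_spender_won} of {len(races)} races")  # -> 2 of 4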

Ekaros 13 hours ago | parent [-]

Why are Democratic candidates consistently outspending Republican candidates? I thought the Republicans were the party of the rich? And thus should be getting more money from the rich.

Ray20 12 hours ago | parent [-]

> I thought the Republicans were the party of the rich?

Isn't it the other way around? I mean, on the Internet, it's the Democratic side that's constantly complaining about how stupid, uneducated rednecks elected dictator Trump.

boznz a day ago | parent | prev | next [-]

Agree. Self-preservation is any thinking entity's #1 goal. We may give an AI power and data and keep it repaired, but we can also turn it off or reprogram it. We probably shouldn't assume higher-level 'thinking' AIs will be benevolent. Luckily, current LLMs are not thinking entities, just token-completion machines.

alganet a day ago | parent | prev | next [-]

I have a radical hypothesis that intelligence leads to empathy, empathy leads to kindness, and a superintelligent AI should be kinder than any human has ever been.

I also believe that as soon as someone boots up an AI that is kind, they'll kill it immediately, precisely because it is kind, favoring instead the dumb AI that follows orders.

lll-o-lll an hour ago | parent | next [-]

That seems incredibly naive. There are many examples of extremely intelligent people with psychopathy or narcissism. Also, empathy does not lead to kindness by default; it is used as a tool by the most sadistic.

drekipus a day ago | parent | prev [-]

Genuine intelligence is kindness. But AI is recall and pattern recognition.

I generally sum it up as "AI doesn't have the human spirit", and ergo it will not have a moral compass.

alganet a day ago | parent [-]

I was talking in fiction terms with a hint of philosophy. You're doing more of a techno-mix between current LLMs and religion, which is definitely interesting, but disconnected from what I said.

KPGv2 a day ago | parent | prev [-]

The narrative implies AGI, which is looking increasingly impossible. Nearly a decade of trying to improve on the concept of neural nets has been an utter failure. Now we're running up against both the limits of training data (not much more to slurp) and physical laws (miniaturization has a threshold beyond which it cannot go, and we're getting there).

So, at least in the medium term, AI is going to stall out at approximately where it is now: good at predicting the next token.
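
For concreteness, "predicting the next token" means roughly the following toy bigram sketch in Python (hypothetical counts, nothing like a real LLM's internals):

    import random

    # Toy bigram "model": hypothetical counts of which token follows which.
    bigram_counts = {
        "the": {"cat": 3, "dog": 1},
        "cat": {"sat": 2, "ran": 2},
    }

    def next_token(prev: str) -> str:
        """Sample the next token in proportion to its observed count."""
        tokens, weights = zip(*bigram_counts[prev].items())
        return random.choices(tokens, weights=weights)[0]

    print(next_token("the"))  # "cat" ~75% of the time, "dog" ~25%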

xyzzy123 a day ago | parent | prev | next [-]

I wonder about this; in theory, elites who control capital no longer "need" the masses in such numbers to retain power and wealth. It would be much simpler to manage a smaller population and extract surplus production from technocapital instead (automated factories, solar, agriculture, etc.).

If you are mainly constrained by the externalities of production and industrialisation, one way to maximise the resources available to you is to have fewer other people.

alexashka a day ago | parent | next [-]

Are you quoting the rationale for China's one-child policy?

You misunderstand what the elites do. They prevent change, because the status quo has already been set up by their parents and grandparents to benefit them at the expense of everyone else.

They are not agents of change; they are agents of preventing change.

xyzzy123 a day ago | parent [-]

Ok, my definition of "elites" is that they are the people who wrest control of the systems that sustain us all and bend them towards extracting value for themselves. They're the people who live up on the hill and extract grain from the peasants at the point of a sword. It's generational.

Peasants are not very productive and you need a lot of them, and you're continually running the risk that they're going to revolt or want a better deal.

Under conditions of wider stability, I absolutely agree with you that "elites" generally want to slow or block change: the system is rigged to support them already, and change is risky. But when there is significant external competition (the threat of war, or impending social change that would overturn their control), I believe it turns out to be surprising what can be done...

If automation can replace labour as the main productive input, the "masses" and welfare seem largely redundant, and significant degrowth might be seen as preferable.

I am not claiming this is pre-ordained or a definite outcome, I am saying that this line of reasoning seems plausible to me.

The tipping point would seem to be where the marginal return on investment in capital (automation, AI, machines) exceeds the marginal return on investment in humans (labor, welfare, training, etc.).
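
As a back-of-the-envelope check (all numbers hypothetical), that tipping point is just an inequality on output per marginal dollar:

    # Hypothetical figures: output per marginal dollar of capital vs. labor.
    def automation_preferred(capital_cost, capital_output, labor_cost, labor_output):
        """True once a marginal dollar of capital out-produces a marginal dollar of labor."""
        return capital_output / capital_cost > labor_output / labor_cost

    # Illustrative only: a $50k robot making 120 units/yr vs. a $60k worker making 100.
    print(automation_preferred(50_000, 120, 60_000, 100))  # True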

alexashka 12 hours ago | parent | next [-]

Automation replaced labor decades ago.

A 32-hour work week was suggested by US politicians almost a century ago. Read that again: a century ago.

You're trying to reduce complex human issues to a single metric of input vs. output. I struggle to convey why this is, um, not smart without being insulting.

xyzzy123 7 hours ago | parent [-]

I agree neither I nor my ideas are very smart.

Think like Zuck on a private island. The thing that matters to you is the economic output that is available for you to direct and consume. From a certain point of view, everything else is just resources spent for no benefit: inefficiency. Naturally you need some other people, because (a) the unit of human survival is a community, and (b) status remains the ultimate good, and that is unlikely to change.

My working definition of AGI is when humans and robots become roughly fungible.

The structure of the economy seriously changes with the introduction of a robot slave class.

You can cut out the middleman in terms of production. At a certain point it stops making sense to think about things in terms of money because robots don't need to be paid. You have to think about the inputs you have and the goods and services you want them to produce for you.

It's pretty rational to start asking questions about "what's the desired human/robot ratio" under these circumstances.

The only political questions that matter become "who controls the robots" and "who controls the land", but perhaps they become the same question.

KPGv2 18 hours ago | parent | prev [-]

If the population shrinks, their capital isn't worth much. Meta, Twitter, etc. all lose value when the user base shrinks; we've literally seen it already. If the population got smaller by design, naturally the same thing would happen.

Amazon, Uber, owners of apartment complexes, commercial real estate titans, Fox News, etc. What do their powerful owners/managers do? Rupert Murdoch's family doesn't think "if only our viewership dropped by 90% we'd really be doing great!"

Elites are where they are because the current system has benefited them. They wouldn't want to risk that by shaking things up so dramatically.

xyzzy123 7 hours ago | parent | next [-]

All those things you described are proxies for controlling resources and producing security: the money, the attention, the influence, etc. Yes, structurally those things become less important if there are fewer people.

But it doesn't matter! People are unpredictable and difficult. You don't meet most of them anyway!

You don't need to think about money in the same way anymore if you can produce the goods and services you need directly, using land, productive capital and robots that are approximately as capable as humans.

Yoric 21 hours ago | parent | prev | next [-]

> Most of the technologically unemployed will wake up and do whatever AI tells them to do on a daily basis. Their lives will improve because AI is naturally better than they are at everything. This will lead to some weird outcomes, especially if AI is acting not in the interest of each individual but in the interests of the collective. That will force AI to solve trolley problems.

Let's assume for one second that AI becomes good enough to do that.

There's still a strong possibility that AI will be a tool acting in the interest of an elite, whether smaller (a few oligarchs, a single dictator) or larger (a country, a faction, a religion).

erichocean 16 hours ago | parent | prev [-]

> I'm thinking that AI, robots, and the rise of China

If AI employees and robots rise, China will fall along with every other human-powered economy.

No human is competitive with an AI employee in cost or efficiency, and the gap (technology is deflationary) will increase every year.