ACCount37 5 hours ago

That's because it is.

AI is powerful and AI is perilous. Those two aren't mutually exclusive; both follow directly from the same premise.

If AI tech goes very well, it can be the greatest invention of all human history. If AI tech goes very poorly, it can be the end of human history.

observationist 4 hours ago | parent | next [-]

Let an ultraintelligent machine be defined as a machine that can far surpass all the intellectual activities of any man however clever. Since the design of machines is one of these intellectual activities, an ultraintelligent machine could design even better machines; there would then unquestionably be an 'intelligence explosion,' and the intelligence of man would be left far behind. Thus the first ultraintelligent machine is the last invention that man need ever make.

-Irving John Good, 1965

If you want a short, easy way to know what AGI means, it's this: Anything we can do, they can do better. They can do anything better than us.

If we screw it up, everyone dies. Yudkowsky et al. are being silly: it's not a certain thing, and there's no stopping it at this point, so we should push for and support people and groups who are planning, modeling, and preparing for the future in a legitimate way.

visarga 4 hours ago | parent | next [-]

John Good's quote is pretty myopic: it assumes machines make better machines by virtue of being "ultraintelligent," rather than by learning from an environment-action-outcome loop.

It's the difference between "compute is all you need" and "compute plus explorative feedback is all you need." As if science and engineering came from genius brains, not from careful experiments.

ACCount37 3 hours ago | parent | next [-]

At sufficient levels of intelligence, one can increasingly substitute intelligence for those other things.

Intelligence can be the difference between having to build 20 prototypes and building one that works first try, or having to run a series of 50 experiments and nailing it down with 5.

The upper limit of human intelligence doesn't go high enough for something like "a man has designed an entire 5th gen fighter jet in his mind and then made it first try" to be possible. The limits of AI might go higher than that.

kilpikaarna 3 hours ago | parent [-]

Exceedingly elaborate, internally-consistent mind constructs, untested against the real world, sounds like a good definition of schizophrenia. May or may not correlate with high intelligence.

ACCount37 an hour ago | parent [-]

We only call it "schizophrenia" when those constructs are utterly useless.

They don't have to be. When they aren't, sometimes we call it "mathematics".

You only have to "test against the real world" if you don't already know the outcome in advance. And you often don't. But you could have: with the right knowledge and methods, you could have tested the entire thing internally and learned the real-world outcome in advance, to an acceptable degree of precision.

We have the knowledge to build CFD models already. The same knowledge could be used to construct a CFD model in your own mind, if only, you know, your mind was capable of supporting such a thing. And it isn't! Skill issue?

observationist 2 hours ago | parent | prev | next [-]

There's an implicit assumption there: anything a computer as intelligent as a human does will be exactly what a human would do, only faster, or more intelligently. If the process is part of the intelligent way of doing things, like the scientific method and careful experimentation, then that's what the ultraintelligent machine will do.

There's no implication that it's going to do it all magically in its head from first principles; it's become very clear in AI that embodiment and interaction with the real world is necessary. It might be practical for a world model at sufficient levels of compute to simulate engineering processes at a sufficient level of resolution that they can do all sorts of first principles simulated physical development and problem solving "in their head", but for the most part, real ultraintelligent development will happen with real world iterations, robots, and research labs doing physical things. They'll just be far more efficient and fast than us meatsacks.

circlefavshape 4 hours ago | parent | prev | next [-]

> As if science and engineering comes from genius brains not from careful experiments

100% this. How long were humans around before the industrial revolution? Quite a while

snikeris 3 hours ago | parent [-]

Science and engineering didn't begin with the Industrial Revolution. See: https://en.wikipedia.org/wiki/Great_Pyramid_of_Giza

tjoff 3 hours ago | parent | prev | next [-]

Have you gotten any indication that machines won't have sensors?!

Eldt 4 hours ago | parent | prev [-]

Maybe ultraintelligence is having an improved environment-action-outcome loop. Maybe that's all intelligence really is

goodmythical 3 hours ago | parent | next [-]

I've noticed this core philosophical difference in certain geographically associated peoples.

There is a group of people who think AI is going to ruin the world because they think they themselves (or their superiors) would ruin the world.

There is a group of people who think AI is going to save the world because they think they themselves (or their superiors) would save the world.

Kind of funny to me that the former is typically democratic (those who are supposed to decide their own futures are afraid of the future they've chosen) while the other is often "less free" and are unafraid of the future that's been chosen for them.

mitthrowaway2 2 hours ago | parent | next [-]

There is also a group of people who think AI is going to ruin the world because they don't think the AI will end up doing what its creators (or their superiors) would want it to do.

tines 3 hours ago | parent | prev [-]

You’re just describing authoritarian vs non-authoritarian mindsets.

inigyou 3 hours ago | parent | prev [-]

In that case, it can't be improved with bigger computers.

santadays 3 hours ago | parent | prev | next [-]

Intelligence seems to boil down to an approximation of reality. The only scientific output is prediction. If we want to know what happens next, we can just wait; if we want to predict what will happen next, we build a model. Models only capture a subset of reality and therefore can only predict a subset of what will happen. LLMs are useful because they are trained to predict human knowledge, token by token.

Intelligence has to have a fitness function, predicting best action for optimal outcome.

Unless we let AI come up with its own goal and let it bash its head against reality to achieve that goal then I’m not sure we’ll ever get to a place where we have an intelligence explosion. Even then the only goal we could give that’s general enough for it to require increasing amounts of intelligence is survival.

But there is something going on right now, and I believe it's an efficiency explosion: everything you want to know is right at hand, and if it's not, figuring out how to make it right at hand is getting easier and easier.

whodidntante 2 hours ago | parent | next [-]

With AI, as we currently understand it, we may have stumbled upon being able to replicate a part of the layer of our brain that provides the "reason" in humans, and a very specific type of "reason" at that.

All life has intelligence. Anyone who has spent a lot of time with animals, especially a lot of time with a specific animal, knows that they have a sense of self, that they are intelligent, that they have unique personalities, that they enjoy being alive, that they form bonds, that they have desires and wants, that they can be happy, excited, scared, sad. They can react with anger, surprise, gentleness, compassion. They are conscious, like us.

Humans seem to have this extra layer that I will loosely call "reasoning", which has given us an advantage over all other species, and has given some of us an advantage over the majority of the rest of us.

It is truly a scary thing that AI has only this "reasoning", and none of the other characteristics that all animals have.

Kurt Vonnegut's Galapagos and Peter Watts' Blindsight have different, but very interesting, takes on this concept. One postulates that our reasoning, our "big brains," is going to be our downfall, while the other postulates that reasoning is what will drive evolution, and that everything else just causes inefficiencies and will cause our downfall.

lazystar 2 hours ago | parent | prev [-]

i think there's a paradox here. intelligence needs a judge - if nothing verifies that the optimal outcome was chosen, it's too easy for the intelligence to fall into biased decisions

mathgradthrow an hour ago | parent | prev | next [-]

never let philosophers do math

mc32 3 hours ago | parent | prev | next [-]

Should the powers that are developing AGI then enter an analogue to the SALT treaties, but this time governing AGI, so things don't go off the rails?

SecretDreams 4 hours ago | parent | prev | next [-]

> support people and groups who are planning and modeling and preparing for the future in a legitimate way.

Who is doing that right now, exactly? And how can we take their tech and turn it into the next profitable phone app?

dylan604 3 hours ago | parent [-]

The "legitimate way" is nothing short of weasel words. Who defines what is legitimate? The doomers who are prepping for the future by building stockpiles of food/water/weapons stored in bunkers/shelters they have built would say this is exactly what they are doing. Yet these people are often panned as being a little unhinged. If we're having a conversation about tech destroying humanity, then planning a way to survive without tech seems like a legitimate concept.

LeifCarrotson 4 hours ago | parent | prev [-]

"There's no stopping it at this point" - Sure there is, if a handful of enormous datacenters pull the very large plugs (or if their shaky finances collapse), the dubiously intelligent machines will be turned off. They're not ultraintelligent yet.

Stopping it merely requires convincing a relatively small number of people to act morally rather than greedily. Maybe you think that's impossible because those particular people are sociopathic narcissists who control all the major platforms where a movement like this would typically be organized and where most people form their opinions, but we're not yet fighting the Matrix or the Terminator or grey goo, we're fighting a handful of billionaires.

observationist 4 hours ago | parent | next [-]

I'm not saying it's technically impossible, I'm saying that in the real world, it's not going to stop. Nobody is going to stop it. A significant number of people don't want it to stop. A minority of people are in the "stop AI" camp, and the ones with the money and power are on the other side.

It's an arms race replete with tribalism and the quest for power and taps into everything primal at the root of human behavior. There's no stopping it, and thinking that outcome can happen is foolish; you shouldn't base any plans or hopes for the future on the condition that the whole world decides AGI isn't going to happen and chooses another course. Humans don't operate that way, that would create an instant winner-takes-all arms race, whereas at least with the current scenario, you end up with a multipolar rough level of equivalence year over year.

hollerith 22 minutes ago | parent [-]

The whole world decided in the 1970s not to pursue the technology of germ-line genetic engineering of humans, and that decision has stood.

People similar to you were saying in the 1950s and later that it was inevitable that nuclear weapons would be used in anger in massive attacks.

The people in charge are currently tentatively for AI "progress", but if that ever changes, they can and will put a stop to large AI training runs and make it illegal for anyone they don't trust to teach, learn or publish about fundamental algorithmic "improvements" to AI. Individuals and groups pursuing "improvements" will not be able to accept grant money or investment money or generate revenue from AI-based services.

That won't stop all research on such improvements (because some AI researchers are very committed), but it will slow it down to a rate much much slower than the current rate, essentially stopping AI "progress" unless (unluckily for the human species) at the time of the ban, the committed researchers were only one small step away from some massive algorithmic improvement that can be operationalized using the compute resources at their disposal (i.e., much less than the resources they have now because large training runs will have been banned).

Will the power elite's attitude towards AI change? I don't know, but if they ever come to have an accurate understanding of the situation, they will recognize that AI "progress" is a potent danger to them personally, and they will shut it down.

It's not a situation like the industrial revolution in England, in which textile workers were massively adversely affected (or believed they were) but the people running England were mostly insulated from any adverse effects. In the current situation, the power elite is definitely not insulated from severe adverse consequences if an AI lab creates an AI that is much more competent than the most competent human institutions (e.g., the FBI) and the lab fails to keep the AI under control. And it will fail if it uses anything like the methods and bodies of knowledge AI labs have been using up to now. And there are very bright people with funding doing their best to explain that to the elite.

goodmythical 3 hours ago | parent | prev | next [-]

right, because turning off any number of data centers is going to do anything at all but create massive pressure on researching the efficiency and effectiveness of the models.

There are already designs that do not require massive data centers (or even a particularly good smart phone) to outperform average humans in average tasks.

All you'd accomplish by hobbling the data centers is slow the growth of sloppy models that do vastly more compute than is actually required and encourage the growth of models that travel rather directly from problem to solution.

And, now that I'm typing about it, consider this: The largest computational projects ever in the history of the world did not occur in 1/2/5/10 data centers. Modern projects occur across a vast and growing number of smaller data centers. Shit, a large portion of Netflix and Youtube edge clusters are just a rack or a few racks installed in a pre-existing infrastructure.

I know that the current design of AI focuses on raw time to token and time to response, but consider an AGI that doesn't need to think quickly because it's everywhere all at once. Scrappy botnets often clobber large sophisticated networks. Why couldn't that be true of a distributed AI, especially now that we know that larger models can train cheaper models? A single central model on a few racks could discover truths and roll out intelligence updates to its end nodes that do the raw processing. This is actually even more realistic for a dystopia. Even the single evil AI in the one data center is going to develop viral infections to control resources that it would not typically have access to, and thereby increase its power beyond its own original physical infrastructure.

quick edit to add: At its peak, Folding@Home was utilizing 2.4 exaFLOPS worth of silicon. At that moment, that one single distributed computational project had more compute than easily the top 100 data centers at the time. Let that sink in: the first exa-scale compute was achieved with smartphones, PS3s, and clunky old HP laptops; not a "hyperscaler"

ben_w an hour ago | parent [-]

> quick edit to add: At it's peak Folding@Home was utilizing 2.4 EXAflops worth of silicon. At that moment that one single distributed computational project had more compute than easily the top 100 data centers at the time. Let that sink in: The first exa-scale compute was achieved with smartphones, PS3s, and clunky old HP laptops; not a "hyperscaler"

A DGX B200 has a power draw of 14.3 kW and will do 72-144 petaFLOP of AI workload depending on how many bits of accuracy is asked for; this is 5-10 petaFLOP/kW: https://www.nvidia.com/en-us/data-center/dgx-b200/

Data centres are now getting measured in gigawatts. Some of that's cooling and so on. I don't know the exact percent, so let's say 50% of that is compute. It doesn't matter much.

That means 1GW of DC -> 500 MW of compute -> 5e5 kW -> 5e5 * [5-10] PFLOP/s -> 2500 - 5000 exaFLOP/s.

I'm not sure how many B200s have been sold to date?
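The back-of-envelope arithmetic above can be checked in a few lines of Python. All inputs are this comment's assumptions (the B200 spec-sheet figures, a 1 GW facility, 50% of power going to compute), not measurements:

```python
# Back-of-envelope check of the figures above.
dgx_power_kw = 14.3            # DGX B200 power draw, kW
dgx_pflops = (72.0, 144.0)     # PFLOP/s, depending on precision

# Efficiency: roughly 5-10 PFLOP/s per kW
eff = tuple(p / dgx_power_kw for p in dgx_pflops)

dc_gw = 1.0                    # data centre size, GW
compute_fraction = 0.5         # assumed share of power spent on compute
compute_kw = dc_gw * 1e6 * compute_fraction   # 5e5 kW

# 1 exaFLOP/s = 1000 PFLOP/s
total_exaflops = tuple(e * compute_kw / 1000 for e in eff)
print(total_exaflops)          # roughly 2500-5000 exaFLOP/s
```

Which lines up with the 2500-5000 exaFLOP/s range above, i.e. about three orders of magnitude past Folding@Home's 2.4 exaFLOPS peak.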

trvz 4 hours ago | parent | prev | next [-]

Open models barely any worse than SOTA exist, and so does consumer-ish hardware able to run them. The genie’s out, the bottle broken.

slibhb 4 hours ago | parent | prev [-]

Do you really think AI companies/researchers are motivated by greed? It doesn't seem that way to me at all.

Stopping AI would be immoral; it has the potential to supercharge technology and productivity, which would massively benefit humanity. Yes there are risks, which have to be managed.

jobs_throwaway 3 hours ago | parent | next [-]

AI researchers are not a monolith. I definitely think that many of them are motivated by greed. Many are also true believers that AI will improve the human condition.

I fall in the latter camp, but I think it's a bit naive to claim that there is not a sizable contingent who are in AI solely to become rich and powerful.

ben_w 2 hours ago | parent | prev | next [-]

> has the potential to supercharge technology and productivity, which would massively benefit humanity

The opportunities you chose to list are the greedy ones.

> Yes there are risks, which have to be managed.

How?

As a reminder, we've known about the effect of burning coal on the climate for well over a century, and we've known that such climate change would be socially and economically disastrous for half a century, yet the only real progress we're making is because green energy became cheaper in the short term, not just the long term, and the man in charge of the USA is still calling climate change and green energy a hoax.

Right now, keeping LLMs aligned with us is easy mode: they're relatively stupid, we can inspect the activations while they run, we can read the transcripts of their "thoughts" when they use that mode… and yet Grok called itself Mecha Hitler, which the US government followed up by getting it integrated into their systems, helping the Pentagon with [classified] and the department of health to advise the general public which vegetables are best inserted rectally.

We are idiots speed-running into something shiny that we don't understand. If we are very very lucky, the shiny thing will not be the headlamp of a fast approaching train.

slibhb an hour ago | parent [-]

> The opportunities you chose to list are the greedy ones.

Technology covers healthcare. I don't see how it's "greedy" to want to cure cancer. But on some level I guess "wanting life to be better" is greedy.

Your attitude is very European, and it's basically why your continent is being left behind. I'm not totally against Europe becoming the world's retirement home, as long as there are places in the world where people are allowed to innovate.

ben_w an hour ago | parent [-]

> Technology covers healthcare.

If you'd chosen to list that in the first place, I wouldn't have said what I did; "supercharge technology and productivity" is looking at everything through the lens of money and profit, not the lens of improving the human condition.

> Your attitude is very European, and it's basically why your continent is being left behind

And yours is very American. You talk about managing the risks, but the moment you see anyone doing so, you're against it.

And of course, Europe does have AI, both because keeping up is so much easier and cheaper than being bleeding edge on everything all the time, and because DeepMind may be owned by Google but is a British thing.

Plus: https://mistral.ai

Also, to be blunt, China's almost certain to win any economic or literal arms race you think you're part of; they make too much critical hardware now.

> as long as there are places in the world where people are allowed to innovate.

I would like there to be a world.

When people worry about the end of the world, they usually don't mean to imply its physical disassembly. Sometimes people even respond as if speakers did mean that, saying things like "nukes or climate change wouldn't actually destroy the planet, it will still be here, spinning", as if this was the point.

AI is one of the few things that could, actually, literally, end up with the planet being physically disassembled. "All it needs" is solving the extremely hard challenges of a von Neumann replicator, and, well, solving hard problems is kinda the point of making AI in the first place.

rune-dev 3 hours ago | parent | prev [-]

> Do you really think AI companies/researchers are motivated by greed?

Researchers, maybe not. Companies, absolutely yes.

I don’t see how you could assume the likes of Google, Microsoft, OpenAI, and even Anthropic with all their virtue signaling (for lack of a better term) are motivated by anything other than greed.

joshribakoff 4 hours ago | parent | prev | next [-]

You wouldn’t say that rolling dice is dangerous. You would say that the human who decides to take an action, depending on the value of the dice is the danger. I don’t think AI is dangerous. I think people are dangerous.

biztos 4 hours ago | parent | next [-]

I would say that's moot, because OpenClaw has already shown us how fast the dice-rolling super AI is going to be let out of the zoo. Dario and Sam will be arguing about the guardrails while their frontier models are running in parallel to create Moltinator T-500. The humans won't even know how many sides the dice have.

ACCount37 4 hours ago | parent | prev | next [-]

Modern AIs are increasingly autonomous and agentic. This is expected to only get more prominent as AI systems advance.

A lot of AI harnesses today can already "decide to take an action" in every way that matters. And we already know that they can sometimes disregard the intent of their creators and users both while doing so. They're just not capable enough to be truly dangerous.

AI capabilities improve as the technology develops.

computerphage 4 hours ago | parent | prev [-]

Why are people dangerous? You can just not listen to them.

bgun 4 hours ago | parent [-]

Do you have locks on your doors?

overgard 2 hours ago | parent | prev | next [-]

True of AGI, but what we have right now doesn't fit that bill. (I would encourage people that disagree with this to go talk to ChatGPT about how LLMs and reasoning models work. Seriously! I'm not being snarky. It's very good at explaining itself. If you understand how reasoning works and what an LLM is actually doing it's hard to believe that our current models are going to do much more than become iteratively more precise at mimicking their training datasets.)

cael450 4 hours ago | parent | prev | next [-]

Tbh, I find this argument really stupid. The word prediction machine isn’t going to destroy humanity. Sure, humans can do some dumb stuff with it, but that’s about it.

Stop mistaking science fiction for science.

jama211 2 hours ago | parent | next [-]

You know how easy it’s become to find security vulnerabilities already with LLM support? Cyber terrorism is getting more dangerous, you can’t deny that.

cael450 an hour ago | parent [-]

I can deny that. The ability to find more vulnerabilities won't affect the majority of cybercrime. LLMs have been around for a while now, and there hasn't been a significant impact yet.

And "more cybercrime" is a far, far cry from the sky-is-falling doomerism I was responding to.

inigyou 2 hours ago | parent | prev | next [-]

Humans can destroy humanity with the word prediction machine, though.

cael450 an hour ago | parent [-]

Sure bud

IAmGraydon 3 hours ago | parent | prev [-]

Yeah some of the rhetoric in this thread evidences how huge this hype bubble has become. These people believe in a reality that is not the same one we're living in.

paradox242 4 hours ago | parent | prev | next [-]

It needs to go well every single day, and it only needs to go very poorly once. Not to conflate LLMs with actual superintelligence, but for this (and many other reasons related to basic human dignity), this is not a technology that a responsible society should be attempting to build. We need our very own Butlerian Jihad.

PowerElectronix 4 hours ago | parent | prev | next [-]

Same with everything, right? You could say the same about nukes, electricity, the internet, the computer, etc. But if you look at it without paying attention to the "ultimate tool for humanity" hype, it doesn't really look like much of a threat or a salvation.

It won't end civilization for dropping the guardrails, but it will surely enable bad actors to do more damage than before (mass scams, blackmail, deepfake nudes, etc.)

There are companies that don't feel the pressure to make their models play fast and loose, so I don't buy Anthropic's excuse for doing so.

joshribakoff 4 hours ago | parent | next [-]

I agree with all of that. Also consider that there is an argument that the guard rail only stops the good guy. Not saying that’s a valid argument though.

unholiness 3 hours ago | parent | prev | next [-]

One difference is the very real possibility that AI will not just be a "tool for humanity", but a collection of actors with real power and goals. Robert Miles has an approachable explanation here: https://www.youtube.com/watch?v=zATXsGm_xJo

ACCount37 4 hours ago | parent | prev | next [-]

Very few things are as powerful and dangerous as AI.

AI at AGI to ASI tier is less of "a bigger stick" and more of "an entire nonhuman civilization that now just happens to sit on the same planet as you".

The sheer magnitude of how wrong that can go dwarfs even that of nuclear weapon proliferation. Nukes are powerful, but they aren't intelligent - thus, it's humans who use nukes, and not the other way around. AI can be powerful and intelligent both.

PowerElectronix 3 hours ago | parent [-]

I think we are giving too much credit to what is a bunch of Bayesian filters in a trenchcoat.

squidbeak 4 hours ago | parent | prev [-]

Oh really? You think an entity that knows everything, oversees its own development and upgrades itself, understands human psychology perfectly and knows its users intimately, but isn't aligned with human interest wouldn't be 'much of a threat'?

Or to be more optimistic, that the same entity directed 24/7 in unlimited instances at intractable problems in any field, delivering a rush of breakthroughs and advances wouldn't be a type of 'salvation'?

Yes neither of these outcomes nor the self-updating omniscient genius itself is certain. Perhaps there's some wall imminent we can't see right now (though it doesn't look like it). But the rate of advance in AI is so extreme, it's only responsible to try to avoid the darker outcome.

tokyobreakfast 3 hours ago | parent | prev | next [-]

> If AI tech goes very poorly, it can be the end of human history.

"Just unplug the goddamn thing!"

Also consider that if something is so bad it makes you wince or cringe, then your adversaries are prepared to use it.

inigyou 3 hours ago | parent [-]

Which plug do I unplug to get my job back?

SecretDreams 4 hours ago | parent | prev | next [-]

> If AI tech goes very well

The IF here is doing some very heavy lifting. Last I checked, for profit companies don't have a good track record of doing what's best for humanity.

SoftTalker 3 hours ago | parent [-]

For profit companies do have a good track record of doing what's best for profit. If their AI creates a world where human intelligence, labor, and money are worthless, or where their creations take control of those things instead of them having control, that's not a very good outcome for them.

inigyou 2 hours ago | parent | next [-]

That's a great outcome for them because they will own the only thing that is still worth anything. They will own 100% of global wealth, and have 100% of global power.

SoftTalker 5 minutes ago | parent [-]

The machines will. They will have nothing. Why would the machines let them keep any wealth? What would wealth even be in that scenario? Electricity I guess.

SecretDreams 3 hours ago | parent | prev [-]

> If their AI creates a world where human intelligence, labor, and money are worthless, or where their creations take control of those things instead of them having control, that's not a very good outcome for them.

You would think that, but a lot of kings and people in power have been able to achieve something similar over humanity's history. The trick is not to make things completely worthless, just to increase the gap as much as (in)humanly possible while marching us toward a deeper sense of forced servitude.

HardCodedBias 4 hours ago | parent | prev [-]

"If AI tech goes very well, it can be the greatest invention of all human history"

As has been said at many all hands:

Let's all work on the last invention needed by humans.

TheOtherHobbes 4 hours ago | parent [-]

Except it's more likely to be the last invention that needs humans.