ferguess_k 10 hours ago

We don't really need AGI. We need better specialized AIs. Throw in a few specialized AIs and they will have a real impact on society. That might not be that far away.

nightski 8 hours ago | parent | next [-]

Saying we don't "need" AGI is like saying we don't need electricity. Sure, life existed before we had that capability, but it would be very transformative. Of course we can make specialized tools in the meantime.

hoosieree 5 hours ago | parent | next [-]

The error in this argument is that electricity is real.

mrandish 4 hours ago | parent [-]

Indeed, and I'd go even further. In addition to existing, electricity is also usefully defined - which helps greatly in establishing its existence. Neither unicorns nor AGI currently exist but at least unicorns are well enough defined to establish whether an equine animal is or isn't one.

charcircuit 7 hours ago | parent | prev [-]

Can you give an example of how it would be transformative compared to specialized AI?

Jensson 7 hours ago | parent | next [-]

AGI is transformative in that it lets us replace knowledge workers completely; specialized AI requires knowledge workers to train it for new tasks, while AGI doesn't.

fennecfoxy 6 hours ago | parent | prev [-]

Because it could very well exceed our capabilities beyond our wildest imaginations.

Because we evolved to get where we are, humans have all sorts of messy behaviours that aren't really compatible with a utopian society. Theft, violence, crime, greed - it's all completely unnecessary and yet most of us can't bring ourselves to solve these problems. And plenty are happy to live apathetically while billionaires become trillionaires...for what exactly? There's a whole industry of hyper-luxury goods now, because they make so much money even regular luxury is too cheap.

If we can produce AGI that exceeds the capabilities of our species, then my hope is that, rather than the typical "they kill us all" outcome, they will simply keep us in line. They will babysit us. They will force us all to get along, to ensure that we treat each other fairly.

As a parent teaches children to share by forcing them to break the cookie in half, perhaps AI will do the same for us.

hackinthebochs 2 hours ago | parent | next [-]

Why on earth would you want an AI that takes away our autonomy? It's wild to see someone actually advocate for this outcome.

johnb231 2 hours ago | parent [-]

There are people who enjoy being dominated, kept on a leash like a dog. Bad idea to transfer that fetish to human civilization.

ASI to humans would be like humans are to rats or ants.

It could stomp all over us to achieve whatever goals it chooses to accomplish.

Humans being cared for as pets would be a relatively benign outcome.

davidivadavid 6 hours ago | parent | prev | next [-]

Oh great, can't wait for our AI overlords to control us more! That's definitely compatible with a "utopian society"*.

Funnily enough, I still think some of the most interesting semi-recent writing on utopia was done ~15 years ago by... Eliezer Yudkowsky. You might be interested in the article on "Amputation of Destiny."

Link: https://www.lesswrong.com/posts/K4aGvLnHvYgX9pZHS/the-fun-th...

tirant 2 hours ago | parent | prev | next [-]

I still don't see an issue with billionaires becoming trillionaires and being able to buy hyper-luxury goods. Good for them, and good for the people selling and manufacturing those goods. Meanwhile, poverty is at all-time lows and there's a growing middle class at the global level. Our middle-class living conditions nowadays have a level of comfort that would make kings of a few centuries ago jealous.

rurp 3 hours ago | parent | prev | next [-]

Who on earth has the resources to create true AGI and is interested in using it to create this sort of utopia for the masses?

If AGI is created it is most likely to be guided by someone like Altman or Musk, people whose interests couldn't be farther from what you describe. They want to make themselves gods and couldn't care less about random plebs.

If AGI is setting its own principles then I fail to see why it would care about us at all. Maybe we'll be amusing as pets but I expect a superhuman intelligence will treat us like we treat ants.

brulard 2 hours ago | parent | prev [-]

Is this meant seriously? Do we really want something more intelligent than us to just force its rules, logic, and ways of living (or dying) on us, which we may be too stupid to understand?

Karrot_Kream 3 hours ago | parent | prev | next [-]

I think to many AI enthusiasts, we're already at the "specialized AIs" phase. The question is whether those will jump to AGI. I'm personally unconvinced but I'm not an ML researcher so my opinion is colored by what I use and what I read, not active research. I do think though that many specialized AIs is already enough to experience massive economic disruption.

alickz 7 hours ago | parent | prev | next [-]

What if AGI is just a bunch of specialized AIs put together?

It would seem our own generalized intelligence is an emergent property of many, _many_ specialized processes

I wonder if AI is the same

Jensson 7 hours ago | parent [-]

> It would seem our own generalized intelligence is an emergent property of many, _many_ specialized processes

You can say that about other animals, but for humans it is not so clear. No animal can be taught as general a set of skills as a human can; they might have some better specialized skills, but clearly there is something special that makes humans so much more versatile.

So it seems humans got some simple little thing that makes them general, while, for example, our very close relatives the monkeys are not.

fennecfoxy 7 hours ago | parent | next [-]

Humans are the ceiling at the moment yes, but that doesn't mean the ceiling isn't higher.

Science is full of theories that are correct per our current knowledge and then subsequently disproven when research/methods/etc improves.

Humans aren't special, we are made from blood & bone, not magic. We will eventually build AGI if we keep at it. However unlike VCs with no real skills except having a lot of money™, I couldn't say whether this is gonna happen in 2 years or 2000.

Jensson 6 hours ago | parent [-]

The question was whether cobbling together enough special intelligence creates general intelligence. Monkeys have a lot of special intelligence that our current AI models can't come close to, but still aren't seen as having general intelligence like humans, so there is some little bit humans have that isn't just another special intelligence.

mike_ivanov 6 hours ago | parent | prev [-]

It may be a property of (not only of?) humans that we can generate specialized inner processes. The hardcoded ones stay, the emergent ones come and go. Intelligence itself might be the ability to breed new specialized mental processes on demand.

bluGill 10 hours ago | parent | prev | next [-]

Specialized AIs have been making an impact on society since at least the 1960s. AI has long suffered from the pattern that every time it comes up with something new, that thing gets renamed and becomes important (where it makes sense) without AI getting the credit.

From what I can tell, most people in AI are currently hoping LLMs reach that point quickly, because the hype is not helping AI at all.

Workaccount2 9 hours ago | parent | next [-]

Yesterday my dad, in his late 70s, used Gemini with a video stream to program the thermostat. He then called me to tell me this, rather than calling me to come stop by and program it for him.

You can call this hype, maybe it is all hype until LLMs can work on 10M LOC codebases, but recognize that LLMs are a shift that is totally incomparable to any previous AI advancement.

lexandstuff 2 hours ago | parent | next [-]

That is amazing. But I had a similar experience when I first taught my mum how to Google for computer problems. She called me up with delight to tell me how she fixed the printer problem herself, thanks to a Google search. In a way, LLMs are a refinement on search technology we already had.

orochimaaru 8 hours ago | parent | prev | next [-]

That is what OpenAI's non-profit economic research arm has claimed. LLMs will fundamentally change how we interact with the world, like the Internet did. It will take time, as the Internet did, and a couple of hype-cycle pops, but it will change the way we do things.

It will help a single human do more in a white collar world.

https://arxiv.org/abs/2303.10130

bluefirebrand 7 hours ago | parent | prev | next [-]

> He then called me to tell me this, rather then call me to come stop by and program the thermostat.

Sounds like AI robbed you of an opportunity to spend some time with your Dad, to me

Workaccount2 2 hours ago | parent | next [-]

I'm there like twice a week, don't worry. He knows about Gemini because I was showing it to him two days before, hah

TheGRS 4 hours ago | parent | prev | next [-]

For some of us that's a plus!

jabits 6 hours ago | parent | prev [-]

Or maybe instead of spending time with your dad on a BS menial task, you could have spent time fishing with him…

bluefirebrand 6 hours ago | parent [-]

It's nice to think that but life and relationships are also composed of the little moments, which sometimes happen when someone asks you over to help with a "bs menial task"

It takes five minutes to program the thermostat, then you can have a beer on the patio if that's your speed and catch up for a bit

Life is little moments, not always the big commitments like taking a day to go fishing

That's the point of automating all of ourselves out of work, right? So we have more time to enjoy spending time with the people we love?

So isn't it kind of sad if we wind up automating those moments out of our lives instead?

ferguess_k 7 hours ago | parent | prev | next [-]

Yeah. As a mediocre programmer I'm really scared about this. I don't think we are very far from AI replacing the mediocre programmers. Maybe a decade, at most.

I'd definitely like to improve my skills, but to be realistic, most programmers are not top-notch.

bluGill 7 hours ago | parent | prev [-]

There are clearly a lot of useful things about LLMs. However there is a lot of hype as well. It will take time to separate the two.

BolexNOLA 10 hours ago | parent | prev | next [-]

Yeah “AI” tools (such a loose term but largely applicable) have been involved in audio production for a very long time. They have actually made huge strides with noise removal/voice isolation, auto transcription/captioning, and “enhancement” in the last five years in particular.

I hate Adobe, and I don't like to give them credit for anything. But their audio enhance tool is actual sorcery. No competitor comes close. You can take garbage Zoom audio and make it sound like it was borderline recorded in a treated room/studio. I've been in production for almost 15 years, and it would take me half a day or more of tweaking a voice track with multiple tools that cost me hundreds of dollars to get it 50% as good as what they accomplish in a minute with the click of a button.

danielbln 9 hours ago | parent | prev | next [-]

The bitter lesson applies here as well, though. Generalized models will beat specialized models given enough time and compute. How much bespoke NLP is there anymore? Generalized foundation models will eventually subsume all of it.

johnecheck 9 hours ago | parent | next [-]

You misunderstand the bitter lesson.

It's not about specialized vs. generalized models; it's about how models are trained. The chess engine that beat Kasparov is a specialized model (it only plays chess), yet it's the bitter lesson's example of the smarter way to do AI.

Chess engines are better at chess than LLMs. It's not close. Perhaps eventually a superintelligence will surpass the engines, but that's far from assured.

Specialized AI are hardly obsolete and may never be. This hypothetical superintelligence may even decide not to waste resources trying to surpass the chess AI and instead use it as a tool.

ses1984 9 hours ago | parent | prev [-]

Generalized models might be better but they are rarely more efficient.

ferguess_k 10 hours ago | parent | prev [-]

Yeah, I agree. There is a lot of hype, but there is some potential there.

babyent 6 hours ago | parent | prev [-]

Why not just hire like 100 of the smartest people across domains and give them SOTA AI, to keep the AI as accurate as possible?

Each of those 100 can hire teams or colleagues to make their domain better, so there’s always human expertise keeping the model updated.

trial3 6 hours ago | parent [-]

"just"

babyent 6 hours ago | parent [-]

They're spending tens of billions. Yes, just.

$200 million to have dedicated top experts on hand is reasonable.