| ▲ | hn_throwaway_99 a day ago |
| > Will people finally wake up that the AGI X-Risk people have been right and we’re rapidly approaching a really fucking big deal? OK, say I totally believe this. What, pray tell, are we supposed to do about it? Don't you see the irony of quoting Sama's dire warnings about the development of AI without at least mentioning that he is at the absolute forefront of the push to build this technology that could destroy all of humanity? It's like he's saying "This potion can destroy all of humanity if we make it" as he works faster and faster to figure out how to make it. I mean, I get it, "if we don't build it, someone else will", but all of the discussion around "alignment" seems just blatantly laughable to me. If on one hand your goal is to build "super intelligence", i.e. something way smarter than any human or group of humans, how do you expect to control that super intelligence when you're acting at the middling level of human intelligence? While I'm skeptical of the timeline, if we do ever end up building super intelligence, the idea that we can control it is a pipe dream. We may not be toast (I mean, we're smarter than dogs, and we keep them around), but we won't be in control. So if you truly believe super intelligent AI is coming, you may as well enjoy the view now, because there ain't nothing you or anyone else will be able to do to "save humanity" if or when it arrives. |
|
| ▲ | ctoth 9 hours ago | parent | next [-] |
| I love this pattern, the oldest pattern. There is nothing happening! The thing that is happening is not important! The thing that is happening is important, but it's too late to do anything about it! Well, maybe if you had done something when we first started warning about this... See also: Covid/Climate/Bird Flu/the news. |
|
| ▲ | reducesuffering 2 hours ago | parent | prev | next [-] |
| > If on one hand your goal is to build "super intelligence", i.e. way smarter than any human or group of humans, how do you expect to control that super intelligence when you're just acting at the middling level of human intelligence? That's exactly what the true AGI X-Riskers think! Sama acknowledges the intense risk but thinks the path forward is inevitable anyway so hoping that building intelligence will give them the intelligence to solve alignment. The other camp, a la Yudkowsky, believe it's futile to just hope it gets solved without AGI capabilities first becoming more intelligent, powerful, and disregarding any of our wishes. And then we've ceded any control of our future to an uncaring system that treats us as a means to achieve its original goals like how an ant is in the way of a Google datacenter. I don't see how anyone who thinks "maybe stock number go up as your only goal is not the best way to make people happy", can miss this. |
| |
| ▲ | hollerith 2 hours ago | parent [-] | | Slightly more detail: until about 2001 Yudkowsky was what we would now call an AI accelerationist. Then it dawned on him that creating an AI that is much "better at reality" than people are would probably kill all the people unless the AI had been carefully designed to stay aligned with human values (i.e., to want what we want), and that ensuring it stays aligned is a very thorny technical problem, but he remained hopeful that humankind would solve it. He worked full time on the alignment problem himself. In 2015 he came to believe that the alignment problem is so hard that it is very unlikely to be solved by the time it is needed (namely, when the first AI is deployed that is much "better at reality" than people are). He went public with his pessimism in Apr 2022, and his nonprofit (the Machine Intelligence Research Institute) fired most of its technical alignment researchers and changed its focus to lobbying governments to ban the dangerous kind of AI research. |
|
|
| ▲ | achierius a day ago | parent | prev [-] |
Political organization to force a stop to ongoing research? Protest outside OAI HQ? There are lots of things we could do, and many of us would, if more people were actually convinced their lives were in danger. |
| |
| ▲ | hn_throwaway_99 a day ago | parent [-] | | > Political organization to force a stop to ongoing research? Protest outside OAI HQ? Come on, be real. Do you honestly think that would make a lick of difference? Maybe, at best, delay things by a couple of months. But this is a worldwide phenomenon, and humans have shown time and time again that they are not able to self-organize globally. How successful do you think that political organization is going to be in slowing China's progress? | | |
| ▲ | achierius 19 hours ago | parent | next [-] | | Humans have shown time and time again that they are able to self-organize globally. Nuclear deterrence -- human cloning -- bioweapon proliferation -- Antarctic neutrality -- the list goes on. > How successful do you think that political organization is going to be in slowing China's progress? I wish people would stop with this tired war-mongering. China was not the one who opened up this can of worms. China has never been the one pushing the edge of capabilities. Before Sam Altman decided to give ChatGPT to the world, they were actively cracking down on software companies (in favor of hardware & "concrete" production). We, the US, are the ones who chose to do this. We started the race. We put the world, all of humanity, on this path. > Do you honestly think that would make a lick of difference? I don't know, it depends. Perhaps we're lucky and the timelines are slow enough that 20-30% of the population loses their jobs before things become unrecoverable. Tech companies used to warn people not to wear their badges in public in San Francisco -- and that was what, 2020? Would you really want to work at "Human Replacer, Inc." when that means walking out and about among a population who you know hates you, viscerally? Or if we make it to 2028 in the same condition. The Bonus Army was bad enough -- how confident are you that the government would stand their ground, keep letting these labs advance capabilities, when their electoral necks were on the line? This defeatism is a self-fulfilling prophecy. The people have the power to make things happen, and rhetoric like this is the most powerful thing holding them back. | | |
| ▲ | eagleislandsong 18 hours ago | parent [-] | | > China was not the one who opened up this can of worms Thank you. As someone who lives in Southeast Asia (and who also has lived in East Asia -- pardon the deliberate vagueness, for I do not wish to reveal too much potentially personally identifying information), this is how many of us in these regions view the current tensions between China and Taiwan as well. Don't get me wrong; we acknowledge that many Taiwanese people want independence, and that they are a people with their own aspirations and agency. But we can also see that the US -- and its European friends, which often blindly adopt its rhetoric and foreign policy -- is deliberately using Taiwan as a disposable pawn to attempt to provoke China into a conflict. The US will do what it has always done ever since the post-WW2 period -- destabilise entire regions of countries to further its own imperialistic goals, causing the deaths and suffering of millions, and then leaving the local populations to deal with the fallout for many decades after. Without the US intentionally stoking the flames of mutual antagonism between China and Taiwan, the two countries could have slowly (perhaps over the next decades) come to terms with each other, be it voluntary reunification or peaceful separation. If you know a bit of Chinese history, it is not far-fetched at all to think that the Chinese might eventually have agreed to recognising Taiwan as an independent nation, but this option has now been denied because the US has decided to use Taiwan as a pawn in a proxy conflict. To anticipate questions about China's military invasion of Taiwan by 2027: No, I do not believe it will happen. Don't believe everything the US authorities claim. |
| |
| ▲ | ctoth 9 hours ago | parent | prev [-] | | We're all gonna die but come on, who wants to stop that! |
|
|