| ▲ | johnb231 13 hours ago |
| From the notes: "A reality check for people that think full embodied AGI is right around the corner is to ask your dancing humanoid robot to pick up a joystick and learn how to play an obscure video game." |
|
| ▲ | AndrewKemendo 8 hours ago | parent | next [-] |
This debate is exhausting because there's no coherent definition of AGI that people agree on. I made a Google Form question for collecting AGI definitions because I don't see anyone else doing it, and I find the sheer range of definitions for this concept infinitely frustrating: https://docs.google.com/forms/d/e/1FAIpQLScDF5_CMSjHZDDexHkc... My concern is that people never get focused enough to care to define it - that seems like the most likely case. |
| |
| ▲ | johnb231 3 hours ago | parent | next [-] | | The Wikipedia article on AGI explains it well enough. Researchers at Google have proposed a classification scheme with multiple levels of AGI. There are different opinions in the research community. https://arxiv.org/abs/2311.02462 | |
| ▲ | bigyabai 8 hours ago | parent | prev | next [-] | | It is a marketing term. That's it. Trying to exhaustively define what AGI is or could be is like trying to explain what a Happy Meal is. At its core, the Happy Meal was not invented to revolutionize eating. It puts an attractive label on some mediocre food, a title that exists for the purpose of advertisement. There is no point collecting definitions of AGI; it was not conceived as a description of something novel or provably existent. It is "Happy Meal marketing", but aimed at adults. | | |
| ▲ | AndrewKemendo 4 hours ago | parent | next [-] | | That's historically inaccurate. My master's thesis advisor, Ben Goertzel, popularized the term and has been hosting the AGI conference since 2008: https://agi-conference.org/ https://goertzel.org/agiri06/%5B1%5D%20Introduction_Nov15_PW... I had lunch with Yoshua Bengio at AGI 2014, and this was most of the conversation that day | |
| ▲ | HarHarVeryFunny 4 hours ago | parent | prev | next [-] | | The name AGI (i.e. generalist AI) was originally intended to contrast with narrow AI, which is only capable of one, or a few, specific narrow skills. A narrow AI might be able to play chess, or distinguish 20 breeds of dog, but wouldn't be able to play tic-tac-toe because it wasn't built for that. AGI would be able to learn to do anything, within reason. The term AGI is obviously used very loosely, with little agreement on its precise definition, but I think a lot of people take it to mean not only generality, but specifically human-level generality, and human-level ability to learn from experience and solve problems. A large part of the problem with AGI being poorly defined is that intelligence itself is poorly defined. Even if we choose to define AGI as meaning human-level intelligence, what does THAT mean? I think there is a simple reductionist definition of intelligence (as the word is used to refer to human/animal intelligence), but ultimately the meanings of words are derived from their usage, and the word "intelligence" is used in 100 different ways ... | |
| ▲ | johnb231 3 hours ago | parent | prev [-] | | Generalization is a formal concept in machine learning and is measurable. |
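For instance, the standard way to measure it is the gap between performance on training data and on held-out data. A minimal sketch of that measurement, assuming scikit-learn is available (the dataset and model are arbitrary illustrative choices):

    # Generalization measured as the train/test accuracy gap.
    from sklearn.datasets import load_digits
    from sklearn.linear_model import LogisticRegression
    from sklearn.model_selection import train_test_split

    X, y = load_digits(return_X_y=True)
    X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.25, random_state=0)

    model = LogisticRegression(max_iter=1000).fit(X_tr, y_tr)
    train_acc = model.score(X_tr, y_tr)
    test_acc = model.score(X_te, y_te)   # accuracy on data the model never saw

    # A small gap means the model generalized; a large gap means it memorized.
    print(f"train={train_acc:.3f} test={test_acc:.3f} gap={train_acc - test_acc:.3f}")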
| |
| ▲ | mvkel 8 hours ago | parent | prev [-] | | It doesn't really seem like there's much utility in defining it. It's like defining "heaven." It's an ideal that some people believe in, and we're perpetually marching towards it | | |
| ▲ | theptip 7 hours ago | parent | next [-] | | No, it’s never going to be precise, but it’s important to have a good rough definition. Can we just use Morris et al. and move on with our lives? Position: Levels of AGI for Operationalizing Progress on the Path to AGI: https://arxiv.org/html/2311.02462v4 There are generational policy and societal shifts that need to be addressed somewhere around true Competent AGI (50% of knowledge work tasks automatable). Just as with climate change, we need a shared lexicon to refer to this continuum. You can argue for different values of X, but the crucial point is that if X% of knowledge work is automated within a decade, then there are obvious risks we need to think about. So much of the discourse is stuck at “we will never get to X=99” when we could agree to disagree on that and move on to considering the X=25 case. Or predict our timelines for X and then actually be held accountable for our falsifiable predictions, instead of the current vibe-based discussions. |
| ▲ | 7 hours ago | parent | prev [-] | | [deleted] |
|
|
|
| ▲ | ferguess_k 9 hours ago | parent | prev | next [-] |
We don't really need AGI. We need better specialized AIs. Throw in a few specialized AIs and they will have a real impact on society. That might not be that far away. |
| |
| ▲ | nightski 7 hours ago | parent | next [-] | | Saying we don't "need" AGI is like saying we don't need electricity. Sure, life existed before we had that capability, but it would be very transformative. Of course we can make specialized tools in the meantime. | | |
| ▲ | hoosieree 4 hours ago | parent | next [-] | | The error in this argument is that electricity is real. | | |
| ▲ | mrandish 3 hours ago | parent [-] | | Indeed, and I'd go even further. In addition to existing, electricity is also usefully defined - which helps greatly in establishing its existence. Neither unicorns nor AGI currently exist but at least unicorns are well enough defined to establish whether an equine animal is or isn't one. |
| |
| ▲ | charcircuit 7 hours ago | parent | prev [-] | | Can you give an example of how it would be transformative compared to specialized AI? | |
| ▲ | Jensson 6 hours ago | parent | next [-] | | AGI is transformative in that it lets us replace knowledge workers completely. Specialized AI requires knowledge workers to train it for new tasks, while AGI doesn't. |
| ▲ | fennecfoxy 6 hours ago | parent | prev [-] | | Because it could very well exceed our capabilities beyond our wildest imaginations. Because we evolved to get where we are, humans have all sorts of messy behaviours that aren't really compatible with a utopian society. Theft, violence, crime, greed - it's all completely unnecessary, and yet most of us can't bring ourselves to solve these problems. And plenty are happy to live apathetically while billionaires become trillionaires... for what, exactly? There's a whole industry of hyper-luxury goods now, because they make so much money that even regular luxury is too cheap. If we can produce AGI that exceeds the capabilities of our species, then my hope is that, rather than the typical outcome of "they kill us all", they will simply keep us in line. They will babysit us. They will force us all to get along, to ensure that we treat each other fairly. As a parent teaches children to share by forcing them to break the cookie in half, perhaps AI will do the same for us. | | |
| ▲ | 39 minutes ago | parent | next [-] | | [deleted] | |
| ▲ | hackinthebochs an hour ago | parent | prev | next [-] | | Why on earth would you want an AI that takes away our autonomy? It's wild to see someone actually advocate for this outcome. | | |
| ▲ | johnb231 40 minutes ago | parent [-] | | There are people who enjoy being dominated, kept on a leash like a dog. Bad idea to transfer that fetish to human civilization. ASI would be to humans what humans are to rats or ants. It could stomp all over us to achieve whatever goals it chooses to accomplish. Humans being cared for as pets would be a relatively benign outcome. |
| |
| ▲ | davidivadavid 5 hours ago | parent | prev | next [-] | | Oh great, can't wait for our AI overlords to control us more! That's definitely compatible with a "utopian society"*. Funnily enough, I still think some of the most interesting semi-recent writing on utopia was done ~15 years ago by... Eliezer Yudkowsky. You might be interested in the article on "Amputation of Destiny." Link: https://www.lesswrong.com/posts/K4aGvLnHvYgX9pZHS/the-fun-th... | |
| ▲ | tirant an hour ago | parent | prev | next [-] | | I still don’t see an issue with billionaires becoming trillionaires and being able to buy hyper-luxury goods. Good for them, and good for the people selling and manufacturing those goods. Meanwhile, poverty is at all-time lows and there’s a growing middle class at the global level. Our middle-class living conditions nowadays have a level of comfort that would make kings from a few centuries ago jealous. |
| ▲ | brulard 43 minutes ago | parent | prev | next [-] | | Is this meant seriously? Do we really want something more intelligent than us to just force on us its rules, logic, and ways of living (or dying), which we may be too stupid to understand? |
| ▲ | rurp 2 hours ago | parent | prev [-] | | Who on earth has the resources to create true AGI and is interested in using it to create this sort of utopia for the masses? If AGI is created it is most likely to be guided by someone like Altman or Musk, people whose interests couldn't be farther from what you describe. They want to make themselves gods and couldn't care less about random plebs. If AGI is setting its own principles then I fail to see why it would care about us at all. Maybe we'll be amusing as pets but I expect a superhuman intelligence will treat us like we treat ants. |
|
|
| |
| ▲ | Karrot_Kream 2 hours ago | parent | prev | next [-] | | I think to many AI enthusiasts, we're already at the "specialized AIs" phase. The question is whether those will jump to AGI. I'm personally unconvinced, but I'm not an ML researcher, so my opinion is colored by what I use and what I read, not active research. I do think, though, that many specialized AIs are already enough to cause massive economic disruption. |
| ▲ | alickz 6 hours ago | parent | prev | next [-] | | What if AGI is just a bunch of specialized AIs put together? It would seem our own generalized intelligence is an emergent property of many, _many_ specialized processes. I wonder if AI is the same. | | |
| ▲ | Jensson 6 hours ago | parent [-] | | > It would seem our own generalized intelligence is an emergent property of many, _many_ specialized processes You can say that about other animals, but for humans it is not so clear. No animal can be taught as general a set of skills as a human can; they might have some better specialized skills, but clearly there is something special that makes humans so much more versatile. So it seems there was some simple little thing humans got that makes them general, while, for example, our very close relatives the monkeys are not. | |
| ▲ | fennecfoxy 6 hours ago | parent | next [-] | | Humans are the ceiling at the moment, yes, but that doesn't mean the ceiling isn't higher. Science is full of theories that are correct per our current knowledge and then subsequently disproven when research/methods/etc. improve. Humans aren't special; we are made from blood and bone, not magic. We will eventually build AGI if we keep at it. However, unlike VCs with no real skills except having a lot of money™, I couldn't say whether this is gonna happen in 2 years or 2000. | |
| ▲ | Jensson 5 hours ago | parent [-] | | The question was whether cobbling together enough specialized intelligence creates general intelligence. Monkeys have a lot of specialized intelligence that our current AI models can't come close to, but they still aren't seen as generally intelligent the way humans are, so there is some little bit humans have that isn't just another specialized intelligence. |
| |
| ▲ | mike_ivanov 6 hours ago | parent | prev [-] | | It may be a property of (not only of?) humans that we can generate specialized inner processes. The hardcoded ones stay, the emergent ones come and go. Intelligence itself might be the ability to breed new specialized mental processes on demand. |
|
| |
| ▲ | bluGill 9 hours ago | parent | prev | next [-] | | Specialized AIs have been making an impact on society since at least the 1960s. AI has long suffered from the pattern that every time the field comes up with something new, it gets renamed and becomes important (where it makes sense) without AI getting the credit. From what I can tell, most in AI are currently hoping LLMs reach that point quickly, because the hype is not helping AI at all. | |
| ▲ | Workaccount2 8 hours ago | parent | next [-] | | Yesterday my dad, in his late 70s, used Gemini with a video stream to program the thermostat. He then called me to tell me this, rather than calling me to come stop by and program the thermostat. You can call this hype, and maybe it is all hype until LLMs can work on 10M LOC codebases, but recognize that LLMs are a shift that is totally incomparable to any previous AI advancement. | |
| ▲ | lexandstuff an hour ago | parent | next [-] | | That is amazing. But I had a similar experience when I first taught my mum how to Google for computer problems. She called me up with delight to tell me how she fixed the printer problem herself, thanks to a Google search. In a way, LLMs are a refinement on search technology we already had. | |
| ▲ | orochimaaru 8 hours ago | parent | prev | next [-] | | That is what OpenAI’s non-profit economic research arm has claimed. LLMs will fundamentally change how we interact with the world, like the Internet did. It will take time, like the Internet, and a couple of hype-cycle pops, but it will change the way we do things. It will help a single human do more in a white-collar world. https://arxiv.org/abs/2303.10130 |
| ▲ | bluefirebrand 6 hours ago | parent | prev | next [-] | | > He then called me to tell me this, rather then call me to come stop by and program the thermostat. Sounds like AI robbed you of an opportunity to spend some time with your Dad, to me | | |
| ▲ | Workaccount2 40 minutes ago | parent | next [-] | | I'm there like twice a week don't worry. He knows about Gemini because I was showing him it two days before hah | |
| ▲ | TheGRS 3 hours ago | parent | prev | next [-] | | For some of us that's a plus! | |
| ▲ | jabits 5 hours ago | parent | prev [-] | | Or maybe, instead of spending time with your dad on a bs menial task, you could spend time fishing with him… | |
| ▲ | bluefirebrand 5 hours ago | parent [-] | | It's nice to think that but life and relationships are also composed of the little moments, which sometimes happen when someone asks you over to help with a "bs menial task" It takes five minutes to program the thermostat, then you can have a beer on the patio if that's your speed and catch up for a bit Life is little moments, not always the big commitments like taking a day to go fishing That's the point of automating all of ourselves out of work, right? So we have more time to enjoy spending time with the people we love? So isn't it kind of sad if we wind up automating those moments out of our lives instead? |
|
| |
| ▲ | ferguess_k 6 hours ago | parent | prev | next [-] | | Yeah. As a mediocre programmer I'm really scared about this. I don't think we are very far from AI replacing mediocre programmers. Maybe a decade, at most. I'd definitely like to improve my skills, but realistically, most programmers are not top-notch. |
| ▲ | bluGill 7 hours ago | parent | prev [-] | | There are clearly a lot of useful things about LLMs. However there is a lot of hype as well. It will take time to separate the two. |
| |
| ▲ | BolexNOLA 9 hours ago | parent | prev | next [-] | | Yeah, “AI” tools (such a loose term, but largely applicable) have been involved in audio production for a very long time. They have actually made huge strides in noise removal/voice isolation, auto transcription/captioning, and “enhancement” in the last five years in particular. I hate Adobe and don’t like to give them credit for anything, but their audio enhance tool is actual sorcery. No competitor is even close. You can take garbage Zoom audio and make it sound like it was borderline recorded in a treated room/studio. I’ve been in production for almost 15 years, and it would take me half a day or more of tweaking a voice track with multiple tools that cost me hundreds of dollars to get it 50% as good as what they accomplish in a minute with the click of a button. |
| ▲ | danielbln 8 hours ago | parent | prev | next [-] | | Bitter lesson applies here as well though. Generalized models will beat specialized models given enough time and compute. How much bespoke NLP is there anymore? Generalized foundational models will subsume all of it eventually. | | |
| ▲ | johnecheck 8 hours ago | parent | next [-] | | You misunderstand the bitter lesson. It's not about specialized vs generalized models - it's about how models are trained. The chess engine that beat Kasparov is a specialized model (it only plays chess), yet it's the bitter lesson's example for the smarter way to do AI. Chess engines are better at chess than LLMs. It's not close. Perhaps eventually a superintelligence will surpass the engines, but that's far from assured. Specialized AI are hardly obsolete and may never be. This hypothetical superintelligence may even decide not to waste resources trying to surpass the chess AI and instead use it as a tool. | |
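For concreteness, the "search plus compute" recipe behind such engines, in its smallest form: a minimal negamax sketch on tic-tac-toe (a toy stand-in; real chess engines add alpha-beta pruning, handcrafted or learned evaluation, and enormous compute):

    # Exhaustive game-tree search: returns the value of the position for the
    # player to move (+1 win, 0 draw, -1 loss) under perfect play.
    from functools import lru_cache

    LINES = [(0,1,2),(3,4,5),(6,7,8),(0,3,6),(1,4,7),(2,5,8),(0,4,8),(2,4,6)]

    def winner(board):  # board: tuple of 9 cells in {'X', 'O', ' '}
        for a, b, c in LINES:
            if board[a] != ' ' and board[a] == board[b] == board[c]:
                return board[a]
        return None

    @lru_cache(maxsize=None)
    def negamax(board, player):
        if winner(board):          # the opponent's last move won the game
            return -1
        if ' ' not in board:       # board full: draw
            return 0
        other = 'O' if player == 'X' else 'X'
        return max(-negamax(board[:i] + (player,) + board[i+1:], other)
                   for i, cell in enumerate(board) if cell == ' ')

    print(negamax((' ',) * 9, 'X'))  # 0: perfect play from both sides is a draw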
| ▲ | ses1984 8 hours ago | parent | prev [-] | | Generalized models might be better but they are rarely more efficient. |
| |
| ▲ | ferguess_k 9 hours ago | parent | prev [-] | | Yeah, I agree. There is a lot of hype, but there is some potential there. |
| |
| ▲ | babyent 6 hours ago | parent | prev [-] | | Why not just hire like 100 of the smartest people across domains and give them SOTA AI, to keep the AI as accurate as possible? Each of those 100 can hire teams or colleagues to make their domain better, so there’s always human expertise keeping the model updated. | | |
|
|
| ▲ | vonneumannstan 7 hours ago | parent | prev | next [-] |
| Is this supposed to be a gotcha? We know these systems are typically trained using RL and they are exceedingly good at learning games... |
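For readers unfamiliar with the loop being referred to, a minimal tabular Q-learning sketch (the gymnasium package, environment, and hyperparameters are arbitrary toy choices; real game-playing systems use deep networks and self-play at vastly larger scale):

    # RL in miniature: the reward signal from the game drives the value updates.
    import gymnasium as gym
    import numpy as np

    env = gym.make("FrozenLake-v1", is_slippery=False)
    Q = np.zeros((env.observation_space.n, env.action_space.n))
    alpha, gamma, eps = 0.1, 0.99, 0.1  # learning rate, discount, exploration

    for episode in range(2000):
        state, _ = env.reset()
        done = False
        while not done:
            # Epsilon-greedy: mostly exploit the current estimate, sometimes explore.
            if np.random.rand() < eps:
                action = env.action_space.sample()
            else:
                action = int(np.argmax(Q[state]))
            next_state, reward, terminated, truncated, _ = env.step(action)
            done = terminated or truncated
            # Temporal-difference update toward reward plus discounted future value.
            Q[state, action] += alpha * (reward + gamma * np.max(Q[next_state]) - Q[state, action])
            state = next_state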
| |
| ▲ | johnb231 4 hours ago | parent [-] | | No, it is not a “gotcha”, and I don’t understand how you got that impression. Carmack believes AGI systems should be able to learn new tasks in real time, alongside humans, in the real world. |
|
|
| ▲ | throw_nbvc1234 11 hours ago | parent | prev [-] |
This sounds like a problem that could be solved around the corner, with a caveat. Games are generally solvable for AI because they have feedback loops and clear success or failure criteria. If the "picking up a joystick" part is the limiting factor, sure. But why would we want robots to use an interface (especially a modern controller) heavily optimized for human hands? That seems like the definition of a horseless carriage. I'm sure if you compared a monkey's and a dolphin's performance using a joystick, you'd get results that aren't really correlated with their intelligence. I would guess that if you gave robots an R2-D2-like port to jack into and play a game, that problem could be solved relatively quickly. |
| |
| ▲ | xnickb 11 hours ago | parent | next [-] | | Just like OpenAI early on promised us AGI and showed us how it "solved" Dota 2. They also claimed it "learned" to play through self-play alone; however, it was clear that most of the advanced techniques were borrowed from existing AI and from observing humans. No surprise they gave up on that project completely, and I doubt they'll ever engage in anything like that again. Money better spent on different marketing platforms. | |
| ▲ | jsheard 10 hours ago | parent | next [-] | | It also wasn't even remotely close to learning Dota 2 proper. They ran a massively simplified version of the game where the AI and humans alternated between playing one of two pre-defined team compositions, meaning >90% of the game's characters and >99.999999% of the possible compositions and matchups weren't even on the table, plus other standard mechanics were changed or disabled altogether for the sake of the AI team. Saying you've solved Dota after stripping out nearly all of its complexity is like saying you've solved Chess, but on a version where the back row is all Bishops. | |
| ▲ | xnickb 10 hours ago | parent | next [-] | | Exactly. What I find surprising in this story, though, is not OpenAI. It's the investors not seeing through these blatant... let's call them exaggerations of reality, and still trusting the company with their money. I know I wouldn't have. But then again, maybe that's why I'm poor. | |
| ▲ | ryandrake 9 hours ago | parent | next [-] | | In their hearts, startup investors are like Agent Mulder: they Want To Believe. Especially after they’ve already invested a little. They are willing to overlook obvious exaggerations, up to and including fraud, because the alternative is admitting their judgment is not sound. Look at how long Theranos went on! Miraculous product. Attractive young founder with all the right pedigree, credentials, and contacts, dressed in black turtlenecks. Hell, she even talked like Steve Jobs! Investors never had a chance. |
| ▲ | 10 hours ago | parent | prev | next [-] | | [deleted] | |
| ▲ | jdross 9 hours ago | parent | prev [-] | | They already have 400 million daily users and a billion people using the product, with billions in consumer subscription revenue, reached faster than any company ever. They are also aggregating R&D talent at a density never before seen in Silicon Valley. That is what investors see. You seem to treat this as a purity contest, where you define purity | |
| ▲ | zaphar 9 hours ago | parent | next [-] | | Also apparently still not making a profit. | | | |
| ▲ | xnickb 9 hours ago | parent | prev [-] | | I’m speaking about past events. Perhaps I didn’t make that clear enough. |
|
| |
| ▲ | rowanG077 8 hours ago | parent | prev | next [-] | | I agree that restricting the hero pool is a huge simplification. But they did play full 5v5 standard Dota, with just a restricted pool of 17 heroes and no illusions/control units, according to The Verge (https://www.theverge.com/2019/4/13/18309459/openai-five-dota...). It destroyed the professionals. As an ex Dota player, I don't think this is that far off from full, all-heroes Dota. Certainly not as far off as you are making it sound. And Dota is one of the most complex games; I expect, for example, that an AI would instantly solve CS, since aim is such a large part of the game. | |
| ▲ | Jensson 7 hours ago | parent | next [-] | | > It destroyed the professionals. Only the first time; when it later played better players, it always lost. The players learned the faults of the AI after some time in game, and the AI had a very bad late game, so the humans always won later. | |
| ▲ | mistercheph 7 hours ago | parent | prev [-] | | Another issue with the approach is that the model had direct access to game data, which is simply an unfair competitive advantage in Dota, and it is obvious why that advantage would be unfair in CS. It is certainly possible, but I won't be impressed by anything "playing CS" that isn't running a vision model on a display and moving a mouse, because that is the game. The game is not abstractly reacting to enemy positions and relocating the cursor; it's looking at a screen, seeing where the baddy is, and then using this interface (the mouse) to get the cursor there as quickly as possible. It would be like letting an AI plot its position on the field and the action it's taking during a football match and then saying "Look, the AI would have scored dozens of times in this simulation, it is the greatest soccer player in the world!" No, sorry, the game actually requires you to locomote; abstractly describing your position may be fun, but it's not the game. | |
| ▲ | rowanG077 7 hours ago | parent [-] | | Did you read the paper? It had access to the Dota 2 bot API, which is some game state but very far from all game state. It also had an artificially limited reaction time of something like 220 ms, worse than professional gamers. But then again, that is precisely the point. A chess bot also has access to gigabytes of perfect working memory, and I don't see people complaining about that. It's perfectly valid to judge the best an AI can do vs the best a human can do. It's not really fair to take away exactly what a computer is good at from an AI and then say: "Look, the AI is now worse." Else you would also have to do it the other way around: how well could a human play Dota with access only to the bot API? I don't think they would do well at all. | |
| ▲ | lukeschlather 2 hours ago | parent [-] | | > But then again, that is precisely the point. A chess bot also has access to gigabytes of perfect working memory. I don't see people complaining about that. There are ~86 billion neurons in the human brain. If we assume each neuron stores a single bit, a human also has access to gigabytes of working memory. If we assume each synapse is a bit, that's terabytes. Petabytes is not unreasonable assuming 1 kB of storage per synapse. (And more than 1 kB is also not unreasonable.) The whole point of the exercise is figuring out how much machine memory is comparable to a human brain. |
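The arithmetic behind those estimates, assuming ~86 billion neurons and a commonly cited ~100 trillion synapses (both rough order-of-magnitude figures):

    # Back-of-envelope brain-memory comparison; all inputs are rough estimates.
    neurons = 86e9    # ~86 billion neurons
    synapses = 1e14   # ~100 trillion synapses (assumed)

    print(f"{neurons / 8 / 1e9:.1f} GB")       # one bit per neuron   -> ~10.8 GB
    print(f"{synapses / 8 / 1e12:.1f} TB")     # one bit per synapse  -> ~12.5 TB
    print(f"{synapses * 1000 / 1e15:.0f} PB")  # 1 kB per synapse     -> ~100 PB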
|
|
| |
| ▲ | scotty79 10 hours ago | parent | prev [-] | | It was 6 years ago. I'm sure there'd be no contest now if OpenAI dedicated resources to it, which it won't, because it's busy solving the entirety of human language before others eat their lunch. | |
| ▲ | spektral23 9 hours ago | parent | next [-] | | Funnily enough, even dota2 has grown much more complex than it was 6 years ago, so it's a harder problem to solve today than it was back then | |
| ▲ | xnickb 10 hours ago | parent | prev [-] | | What do you base your certainty on? Were there any significant enough breakthroughs toward AGI? | |
| ▲ | scotty79 10 hours ago | parent [-] | | ARC-AGI, while imagined as super hard for AI, was beaten enough that they had to come up with ARC-AGI-2. | | |
| ▲ | hbsbsbsndk 9 hours ago | parent [-] | | "AI tend to be brittle and optimized for specific tasks, so we made a new specific task and then someone optimized for it" isn't some kind of gotcha. Once ARC puzzles became a benchmark they ceased to be meaningful WRT "AGI". | | |
| ▲ | scotty79 4 hours ago | parent [-] | | So if Dota became a benchmark the same way Chess or Go did earlier, it would be promptly beaten. It just didn't stick before people moved on to more useful "games". |
|
|
|
|
| |
| ▲ | fennecfoxy 5 hours ago | parent | prev [-] | | To be fair, humans have had quite a few million years, across a growing population, to gather all of the knowledge we have. As we're learning with LLMs, the dataset is what matters, and what's awesome is that you can see that in us as well! I've read that our evolution is slow compared to the rate of knowledge accumulation in the information age, which means you could essentially take a caveman, raise them in our modern environment, and they'd be just as intelligent as the average human today. But the core of our intelligence is logic/problem solving. We just have to solve higher-order problems today, like figuring out how to make that chart in Excel do the thing you want, where in days past it was figuring out how to keep the fire lit when it's raining. When you look at it, we've possessed the very core of that problem-solving ability for quite a while now. I think that is the key to why we are human, while our close relatives the monkeys are... still just monkeys. It's that problem-solving ability we need to figure out how to produce within ML models; then we'll be cooking with gas! |
| |
| ▲ | mellosouls 11 hours ago | parent | prev | next [-] | | The point isn't about learning video games its about learning tasks unrelated to its specific competency generally. | |
| ▲ | jandrese 6 hours ago | parent | prev | next [-] | | > But why would we want robots to use an interface (especially a modern controller) heavily optimized for human hands; that seems like the definition of a horseless carriage. Elon's response to this is that if we want these androids to replace human jobs, the lowest-friction alternative is for the android to be able to do anything a human can do, in a human amount of space. A specialized machine is faster and more efficient, but comes with engineering and integration costs that create a barrier to entry. Elon learned this lesson the hard way when he was building out the gigafactories and ended up having to hire a lot of people to do the work while they sorted out the issues with the robots. To someone like Elon, payroll is an ever-growing parasite on a company's bottom line; far better if the entire thing is automated. |
| ▲ | jappgar 10 hours ago | parent | prev | next [-] | | A human would learn it faster, and could immediately teach other humans. AI clearly isn't at human level and it's OK to admit it. | |
| ▲ | johnb231 11 hours ago | parent | prev [-] | | No, the joystick part is really not the limiting factor. They’ve already done this with a direct software interface. Physical interface is a new challenge. But overall you are missing the point. |
|