IanCal 6 days ago
Tbh I find this view odd, and I wonder what people consider AGI now. It used to be that we had extremely narrow pieces of AI. I remember being on a research project about architectures where even a very basic "what's going on?" was advanced. Understanding that someone had asked a question which could be solved by fetching a book, and then navigating to where that book was likely to be, was fancy. Most systems could solve literally one type of problem. They weren't just bad at other things; they were fundamentally incapable of anything but an extremely narrow use case.

Now I can throw wide-ranging problems at things like GPT-5 and get what seem like dramatically better answers than if I asked a random person. The amount of common sense is so far beyond what we had that it's hard to express. It always used to be pointed out that the things we had were below basic insect level. Now I have something that can research a charity, find grants and make coherent arguments for them, read Matrix specs and debug error messages, and understand sarcasm.

To me, it's clear that AGI is here. But then what I always pictured may be very different from what you pictured. What's your image of it?
whizzter 6 days ago
It's more that "random" people are dumb as bricks (but we've in the name of equality and historic measurement errors decided to forgo that), add to it that AI's have a phenomenal (internet sized) memory makes them far more capable than many people. However, even "dumb" people can often make judgements structures in a way that AI's cannot, it's just that many have such a bad knowledge-base that they cannot build the structures coherently whereas AI's succeed thanks to their knowledge. I wouldn't be surprised if the top AI firms today spend an inordinate amount of time to build "manual" appendages into the LLM systems to cater to tasks such as debugging to uphold the facade that the system is really smart, while in reality it's mostly papering up a leaky model to avoid losing the enormous investments they need to stay alive with a hope that someone on their staff comes up a real solution to self-learning. https://magazine.sebastianraschka.com/p/understanding-reason... | |||||||||||||||||||||||||||||
adwn 6 days ago
I think the discrepancy between different views on the matter mainly stems from the fact that state-of-the-art LLMs are better (sometimes extremely better) at some tasks, and worse (sometimes extremely worse) at other tasks, compared to average humans.

For example, they're better at retrieving information from huge amounts of unstructured data. But they're also terrible at learning: any "experience" which falls out of the context window is lost forever, and the model can't learn from its mistakes. To actually make it learn something requires very many examples and a lot of compute, whereas a human can permanently learn from a single example.
| |||||||||||||||||||||||||||||
Yoric 6 days ago
I think it's clear that nobody agrees on what AGI is. OpenAI describes it in terms of revenue. Other people/orgs describe it in terms of, essentially, magic.

If I had to pick a name, I'd probably describe ChatGPT & co as advanced proofs of concept for general-purpose agents, rather than AGI.
| |||||||||||||||||||||||||||||
boppo1 6 days ago
Human-level intelligence. Being able to know what it doesn't know. Having a practical grasp on the idea of truth. Doing math correctly, every time. For example: I give it a high-res photo of a kitchen and ask it to calculate the volume of a pot in the image.
| |||||||||||||||||||||||||||||
audunw 5 days ago
I don't have a very high expectation of AGI at all: just an algorithm or system you can put onto a robot dog and get dog-level general intelligence. You should be able to live with that robot dog for 10 years and it should be just as capable as a dog throughout that timespan. Hell, I'd even say we have AGI if you could emulate something like a hamster.

LLMs are way more impressive in certain ways than such a hypothetical AGI. But that has been true of computers for a long time. Computers have been much better at chess than humans for decades; dogs can't do that. But that doesn't mean a chess engine is an AGI.

I would also say we have a special form of AGI if the AI can pass an extended Turing test. We've had chatbots that can fool a human for a minute for a long time; that doesn't mean we had AGI. So time and knowledge were always factors in a realistic Turing test. If an AI can fool someone who knows how to properly probe an LLM, for a month or so, while solving a bunch of different real-world tasks that require stable long-term memory and planning, then I'd say we're in AGI territory for language specifically.

I think we have to distinguish between language AGI and multi-modal AGI, so this test wouldn't prove what we could call "full" AGI. These are some of the missing components for full AGI:

- Being able to act as a stable agent with a stable personality over long timespans

- Being capable of dealing with uncertainty, and having an understanding of what it doesn't know

- One-shot learning, with long-term retention, for a large number of things

- Fully integrated multi-modality across sound, vision, and other inputs/outputs we may throw at it

The last one is where we may be able to get at the root of the algorithm we're missing. A blind person can learn to "see" by making clicks and using their ears. Animals can do similar "tricks". I think this is where we truly see the full extent of the generality and adaptability of the biological brain. Imagine trying to make a robot that can exhibit this kind of adaptability; it doesn't fit into the model we have for AI right now.
homarp 6 days ago
My picture of AGI is: 1) autonomous improvement, 2) the ability to say "I don't know / this can't be done".
| |||||||||||||||||||||||||||||
AlienRobot 6 days ago
Nobody is saying that LLMs don't work like magic. I know how neural networks work and they still feel like voodoo to me.

What we are saying is that LLMs can't become AGI. I don't know what AGI will look like, but it won't look like an LLM. There is a difference between being able to melt iron and being able to melt tungsten.