| ▲ | standardUser 5 days ago |
| > The goal of AGI is to make programs that can do lots of things. What do Genie and GPT have to do with AGI? I'm sure the people who stand to make billions love to squint and see their LLM as only steps away from an AGI. Or that guy at Google who fell in love with one. But the rest of us know better. |
|
| ▲ | throwup238 5 days ago | parent | next [-] |
Ostensibly, a model like Genie3 encodes physical laws into its weights the way LLMs encode language. An intuitive grasp of physics as part of a "world model" is generally considered a prerequisite for true AGI. It's a small but significant step towards AGI (assuming Genie3 plays out successfully).
|
| ▲ | therein 5 days ago | parent | prev | next [-] |
Neither is even close to AGI. Here is something they can't do and won't be able to do for a very long time: If you're inferring in English and ask it a question, it will never be able to pull from the knowledge it has ingested in another language. Humans are able to do this without relying on a neurotic inner voice spinning around in circles and doing manual translations. This should be enough to arrive at the conclusion that there are no real insights in the model. It has no model of the world.
| |
| ▲ | Jensson 4 days ago | parent [-] | | > If you're inferring in English and ask it a question, it will never be able to pull from the knowledge it has ingested in another language. Humans are able to do this without relying on a neurotic inner voice spinning around in circles and doing manual translations. This is not true; being language agnostic is one of the biggest strengths of LLMs, since they can parse things down to more general concepts. There are many things they are bad at, but using knowledge from other languages is not one of them. | | |
| ▲ | therein 4 days ago | parent [-] | | It is true, and LLMs do no such thing. You are getting that impression not because they are language agnostic across training and inference, but because multi-language text is thrown at them during training. Try asking one about nanomaterials in the Chewa language.
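The disagreement above is testable in miniature. Here is a toy sketch of what "language agnostic" would mean mechanically: surface forms from different languages map into one shared concept space, so a query in one language can retrieve a fact stored via another. This is purely illustrative (the vocabulary, vectors, and facts are made up) and describes no real model's internals.

```python
import numpy as np

# Shared concept vectors (hypothetical): one vector per concept,
# regardless of which language names it.
concepts = {
    "water": np.array([1.0, 0.0, 0.0]),
    "fire":  np.array([0.0, 1.0, 0.0]),
    "stone": np.array([0.0, 0.0, 1.0]),
}

# Bilingual vocabulary: English and German surface forms map to the
# same underlying concept.
vocab = {
    "water": "water", "Wasser": "water",
    "fire":  "fire",  "Feuer":  "fire",
    "stone": "stone", "Stein":  "stone",
}

# "Knowledge" keyed by concept, not by language.
facts = {
    "water": "boils at 100 C",
    "fire":  "needs oxygen",
    "stone": "is solid",
}

def lookup(word: str) -> str:
    """Map a surface form to its concept vector, then retrieve the
    fact for the nearest concept by dot-product similarity."""
    query = concepts[vocab[word]]
    best = max(facts, key=lambda c: float(query @ concepts[c]))
    return facts[best]

# A German query retrieves the same fact as its English counterpart,
# because both route through the shared concept space.
print(lookup("Wasser"))  # same result as lookup("water")
```

Whether real LLMs actually behave like this for low-resource languages (therein's Chewa example) is exactly the empirical question the two commenters are arguing over.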
|
|
|
| ▲ | jaredklewis 5 days ago | parent | prev | next [-] |
The debate over whether something is or is not AGI is entirely semantic and basically uninteresting. Let’s talk about what LLM agents demonstrably can or cannot do. That’s an interesting discussion, and we can design experiments to validate the claims. Comparing how LLMs perform versus humans on various tasks is also interesting and verifiable. But trying to decide whether LLM agents have crossed some amorphous, imaginary line with no quantitative definition is just a waste of time. It’s about as productive as debating the question: “which is more human, an anteater or a tree shrew?” Like, why is there any value in that discussion?
| |
| ▲ | tim333 5 days ago | parent | next [-] | | Most people seem to have a definition of AGI something like being able to think as well as a human in all regards. The debates on current stuff doing that are dull because the answer is no, but the future may be interesting. | | |
| ▲ | jaredklewis 5 days ago | parent [-] | | Right but “being able to think as a human in all regards” is a miserably vague definition that can’t be tested. To start, define think and specify which human. The best human? The average? Average or best by what metric? Without a quantitative definition, all views are basically valid and non-falsifiable. | | |
| ▲ | tim333 5 days ago | parent [-] | | I'm not so sure. Like, at the moment AI robots can't fix your plumbing, hence the hypothesis that we have AGI is falsified. I suspect when it comes, AI will blast through the "the best human? The average? Average or best by what metric?" thing fairly rapidly, like it did with chess, go and the like. | | |
| ▲ | jaredklewis 5 days ago | parent | next [-] | | So I’m fine with being able to fix some specified plumbing issue as being the AGI test, but it probably also means that humans don’t have AGI since it won’t be hard to find humans that can’t. But it doesn’t matter because that’s not the issue. The issue is that unless we all agree on that definition, then debates about AGI are just semantic equivocating. We all have our own idiolects of what AGI means for us, but like who cares? What everyone can agree on is that LLM agents cannot do plumbing now. This is observable and tells us interesting information about the capabilities of LLM agents. | | |
| ▲ | Jensson 4 days ago | parent [-] | | > but it probably also means that humans don’t have AGI since it won’t be hard to find humans that can’t. Humans can learn to fix it. Learning is a part of intelligence. The biggest misconception is thinking that a human's intelligence is based on what they can do today and not what they can learn to do in 10 years. And since the AI model has already been trained to completion when you use it, it should either be able to do whatever any human can learn to do, or it should be able to learn. With this definition AGI is not that complicated at all. | | |
| ▲ | jaredklewis 4 days ago | parent [-] | | That’s not what I was getting at. Did Stephen Hawking have GI? He was physically incapable of plumbing. There might be other limitations as well, but clearly the first hurdle between LLM agents and plumbing is any sort of interaction with the physical world. So a debate about AGI just becomes a debate about whether it includes interaction with the physical world and a billion other things. Anyone can redraw the semantic line anywhere that suits them. |
|
| |
| ▲ | chpatrick 5 days ago | parent | prev [-] | | I think plumbing is an overly high bar. Your friend who lives on a different continent can't fix your plumbing either but they're still generally intelligent. It's only a fair comparison if we compare it to a human also communicating via some channel. | | |
| ▲ | Jensson 4 days ago | parent [-] | | If they had 10 years to practice plumbing problems then they could walk you through it via video. | | |
| ▲ | chpatrick 3 days ago | parent [-] | | I don't think current AIs are too far from walking you through it though. |
|
|
|
|
| |
| ▲ | milesrout 5 days ago | parent | prev [-] | | [dead] |
|
|
| ▲ | dsadfjasdf 5 days ago | parent | prev [-] |
The rest of us still can't prove we are conscious, either... remember?