| ▲ | ComplexSystems a day ago |
| If this isn't AGI, what is? It seems unavoidable that an AI which can prove complex mathematical theorems would lead to something like AGI very quickly. |
|
| ▲ | pfdietz a day ago | parent | next [-] |
| Tao has a comment relevant to that question: "I doubt that anything resembling genuine "artificial general intelligence" is within reach of current #AI tools. However, I think a weaker, but still quite valuable, type of "artificial general cleverness" is becoming a reality in various ways. By "general cleverness", I mean the ability to solve broad classes of complex problems via somewhat ad hoc means. These means may be stochastic or the result of brute force computation; they may be ungrounded or fallible; and they may be either uninterpretable, or traceable back to similar tricks found in an AI's training data. So they would not qualify as the result of any true "intelligence". And yet, they can have a non-trivial success rate at achieving an increasingly wide spectrum of tasks, particularly when coupled with stringent verification procedures to filter out incorrect or unpromising approaches, at scales beyond what individual humans could achieve. This results in the somewhat unintuitive combination of a technology that can be very useful and impressive, while simultaneously being fundamentally unsatisfying and disappointing - somewhat akin to how one's awe at an amazingly clever magic trick can dissipate (or transform to technical respect) once one learns how the trick was performed. But perhaps this can be resolved by the realization that while cleverness and intelligence are somewhat correlated traits for humans, they are much more decoupled for AI tools (which are often optimized for cleverness), and viewing the current generation of such tools primarily as a stochastic generator of sometimes clever - and often useful - thoughts and outputs may be a more productive perspective when trying to use them to solve difficult problems." This comment was made on Dec. 15; I'm not entirely confident he still holds it: https://mathstodon.xyz/@tao/115722360006034040 |
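For concreteness, here is a minimal sketch of the generate-and-verify pattern Tao describes: a stochastic generator proposes cheap candidates at scale, and a stringent verifier filters out the incorrect ones. The `propose` and `verify` functions below are hypothetical placeholders, not any real API; in an actual system they would be an LLM sampler and a formal proof checker.

```python
import random

def propose(problem: str, rng: random.Random) -> str:
    """Stochastically generate one candidate attempt (placeholder)."""
    return f"candidate-{rng.randrange(1_000_000)} for {problem}"

def verify(candidate: str) -> bool:
    """Stringent automatic check that rejects almost all candidates.

    Placeholder: accepts roughly 1 in 100,000 attempts.
    """
    number = int(candidate.split()[0].split("-")[1])
    return number % 100_000 == 0

def solve(problem: str, budget: int = 500_000, seed: int = 0):
    """Sample until a candidate survives verification or the budget runs out.

    The value comes from scale plus filtering, not from any single attempt.
    """
    rng = random.Random(seed)
    for attempt in range(budget):
        candidate = propose(problem, rng)
        if verify(candidate):
            return candidate, attempt
    return None, budget  # budget exhausted without a verified solution

print(solve("toy problem"))
```

Nothing in the loop is itself "intelligent"; the success rate is a property of the sampler, the verifier, and the compute budget, which is Tao's point.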
|
| ▲ | ben_w a day ago | parent | prev | next [-] |
| The "G" in "AGI" stands for "General". While quickly I noticed that my pre-ChatGPT-3.5 use of the term was satisfied by ChatGPT-3.5, this turned out to be completely useless for 99% of discussions, as everyone turned out to have different boolean cut-offs for not only the generality, but also the artificiality and the intelligence, and also what counts as "intelligence" in the first place. That everyone can pick a different boolean cut-off for each initial, means they're not really booleans. Therefore, consider that this can't drive a car, so it's not fully general. And even those AI which can drive a car, can't do so in genuinely all conditions expected of a human, just most of them. Stuff like that. |
| |
| ▲ | throw310822 a day ago | parent [-] | | > consider that this can't drive a car, so it's not fully general So blind people are not general intelligences? | | |
| ▲ | AxEy a day ago | parent [-] | | A blind person does not have the necessary input (sight data) to make the necessary computation. A car autopilot would. So no, we do not deem a blind person unintelligent for being unable to drive without sight. But we might judge a sighted person as not generally intelligent if they could not drive with sight. |
|
|
|
| ▲ | epolanski a day ago | parent | prev | next [-] |
| AGI in its standard definition requires matching or surpassing humans on all cognitive tasks, not just some, and especially not just some that only a handful of humans have ever taken a stab at. |
| |
| ▲ | pfdietz a day ago | parent | next [-] | | Since no human could do that, are we to conclude no human is intelligent? | |
| ▲ | ACS_Solver a day ago | parent | prev [-] | | Surely AGI would be matching humans on most tasks. To me, surpassing humans on all cognitive tasks sounds like superintelligence, while AGI "only" needs to perform most, but not necessarily all, cognitive tasks at the level of a human highly capable at that task. | | |
| ▲ | fc417fc802 a day ago | parent | next [-] | | Personally I could accept "most", provided that the failures were near misses as opposed to total face plants. I also wouldn't include "incompatible" tasks in the metric at all (but using that to game the metric can't be permitted either). For example, the typical human only has so much working memory, so tasks which overwhelm that aren't "failed" so much as "incompatible". I'm not sure exactly what that looks like for ML, but I expect the category will exist. A task that utilizes adversarial inputs might be one example. | |
| ▲ | epolanski 15 hours ago | parent | prev [-] | | Superintelligence is defined as outmatching the best humans in each field, and again, on all cognitive tasks, not just a subset. AI can already beat humans at pretty much any game, like Go or Chess or many video games, but that doesn't make it general. |
|
|
|
| ▲ | mkl a day ago | parent | prev [-] |
| This is very narrow AI, in a subdomain where results can be automatically verified (even within mathematics, that isn't currently the case for most areas). |
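For context, "automatically verified" here means machine-checkable: a proof assistant's kernel can certify a formal proof with no human judgment in the loop. A minimal Lean 4 illustration, assuming only `Nat.add_comm` from the core library:

```lean
-- The kernel mechanically certifies this proof; no human review is needed.
-- That property is what makes formalized mathematics an automatically
-- verifiable subdomain, unlike most mathematical writing.
theorem add_comm_example (a b : Nat) : a + b = b + a :=
  Nat.add_comm a b
```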
| |
| ▲ | threethirtytwo a day ago | parent [-] | | Narrow AI? I’m not saying it’s AGI, but this is not a narrow AI; it’s a general AI given a narrow problem: ChatGPT. | |
| ▲ | gf000 a day ago | parent [-] | | In a very specialized setup, in tandem with a verifier. Just because a specialized human placed in an F-16 can fly at Mach 2.0 doesn't mean humans in general can fly. | |
| ▲ | threethirtytwo a day ago | parent [-] | | An apt analogy. A human is a general intelligence that can fly with an F-16. What happens when we put an artificial general intelligence in an F-16? That's what happened here with this proof. | | |
| ▲ | mkl 19 hours ago | parent [-] | | Not really. A completely unintelligent autopilot can fly an F-16. You cannot assume general intelligence from scaffolded tool-using success in a single narrow area. | | |
| ▲ | threethirtytwo 13 hours ago | parent [-] | | I didn’t assume AGI. I assumed extreme performance from a general AI matching and exceeding average human intelligence when placed in an F-16, or an equivalent cockpit specified for conducting math proofs. That’s not AGI at all. I don’t think you understand that LLMs will never hit AGI even when they exceed human intelligence in all applicable domains. The main reason is that they don’t feel emotions. Even if the definition of AGI doesn’t currently encompass emotions, people like you will move the goalposts and shift the definition until it does. So as AI improves, the threshold will be adjusted to make sure they never reach AGI, as it’s an existential and identity crisis for many people to admit that an AI is better than them on all counts. | |
| ▲ | mkl 6 hours ago | parent [-] | | > I didn’t assume AGI. You literally said: >>> What happens when we put an artificial general intelligence in an F-16? That's what happened here with this proof. You're claiming I said a lot of things I didn't; everything you seem to be stating about me in this comment is false. | |
| ▲ | threethirtytwo 2 hours ago | parent [-] | | That's called a hypothetical. I didn't say that we put an AGI into an F-16; I asked what the outcome would be. And the outcome is pretty similar. Please read carefully before making a false statement. > You're claiming I said a lot of things I didn't; everything you seem to be stating about me in this comment is false. Apologies, I thought you were being deliberate. What really happened is you made a mistake. Also, I never said anything about you. Please read carefully. |
|
|
|
|
|
|
|