| ▲ | kachapopopow 2 days ago |
| AGI is already here if you shift some goal posts :) From skimming the conversation, it seems to mostly revolve around LLMs (transformer models), which are probably not the way we'll obtain AGI in the first place. Frankly, the architecture is too simple to be AGI — but that simplicity is also the reason there's so much hype, so really I don't know. |
|
| ▲ | ecocentrik a day ago | parent | next [-] |
| LLMs are close enough to pass the Turing Test. That was a huge milestone. They are capable of abstract reasoning and can perform many tasks very well, but they aren't AGI. They can't teach themselves to play chess at the level of a dedicated chess engine, or fly an airplane, using the same model they use to copypasta a React UI. They can only fool non-proficient humans into believing that they might be capable of doing those things. |
| |
| ▲ | password54321 a day ago | parent [-] | | The Turing Test was a thought experiment, not a real benchmark for intelligence. If you read the paper the idea originated from, it is largely philosophical. As for abstract reasoning: on ARC-2, LLMs are barely capable, though at least some progress has been made on the ARC-1 benchmark. | | |
| ▲ | ecocentrik 19 hours ago | parent [-] | | I wasn't claiming the Turing Test was a benchmark for intelligence, but the ability to fool a human into thinking a machine is intelligent in conversation is still a significant milestone. I should have said "some abstract reasoning". ARC-2 looks promising. | | |
| ▲ | password54321 17 hours ago | parent [-] | | >I wasn't claiming the Turing Test was a benchmark for intelligence but the ability to fool a human into thinking a machine is intelligent in conversation is still a significant milestone. The Turing Test is about whether a machine can fool a human into thinking they are talking to another human, not to an intelligent machine. And ironically this is becoming less true over time, as people get better at spotting the tendencies LLMs have in writing, such as their frequent use of dashes or "it's not just X, it's Y" constructions. |
|
| ▲ | tim333 a day ago | parent | prev | next [-] |
| I think most people think of AGI as being able to do the stuff humans do, and it's still missing a fair bit there. |
|
| ▲ | throwaway-0001 a day ago | parent | prev [-] |
| A transistor is very simple too, and here we are. Don’t dismiss something because it’s simple. |
| |
| ▲ | password54321 a day ago | parent [-] | | You've got to look at how it scales. LLMs have already stopped growing in parameter count, because scaling them up no longer makes them better. New ideas are needed. | | |
| ▲ | throwaway-0001 a day ago | parent [-] | | You're right… but still, what has been done so far is already significant and useful. |
|