altruios 2 days ago
Okay. So to be clear, you believe that replicating/templating a brain is the ONLY way to make an intelligent machine? What makes you think that? That there are no other patterns of intelligence?
gslepak 2 days ago | parent
I can see how that would be implied by my comments, so you're right to question it. What qualifies something as "AGI" is the set of principles found in the brain, not the brain itself, so it's possible that other architectures would qualify. A few observations about LLMs that give the game away:

- They require releases. You get a single binary blob, and that blob is forever stuck at its so-called "intelligence" level. It never learns anything new.

- They're stuck approaching the limit of human intelligence, because the technique cannot exceed human intelligence. I realize that OpenAI has made claims to the contrary, saying things like "our model found a proof that was never proven before." That doesn't count; it's a side effect of training on the Internet. In fact, that proof probably did exist (in pieces) somewhere on the Internet, it just wasn't widely publicized.

So, you'll know it's AGI when you no longer see companies releasing new models. AGI won't require new models, because the architecture will be what matters: whatever models you have will be constantly updating themselves in real time, just like the human brain does (and every other brain). And you'll start to see AIs actually outsmarting the smartest humans on the planet in every subject.
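
The "frozen blob" vs. "constantly updating" distinction above can be sketched in code. This is a toy illustration only, not a claim about how any real LLM works: both classes below are hypothetical stand-ins, using a one-weight linear model and a plain SGD update as the simplest possible example of online learning.

```python
class FrozenModel:
    """A "released" model: weights fixed at release time.
    Inference never changes them, so ability is frozen."""
    def __init__(self, w):
        self.w = w

    def predict(self, x):
        return self.w * x  # no learning happens here


class OnlineLearner:
    """Updates its weight after every observation (one SGD step),
    i.e. it keeps learning from the data stream after "release"."""
    def __init__(self, w, lr=0.1):
        self.w = w
        self.lr = lr

    def predict(self, x):
        return self.w * x

    def observe(self, x, y):
        # one gradient step on squared error (w*x - y)^2
        err = self.predict(x) - y
        self.w -= self.lr * err * x


frozen = FrozenModel(w=0.0)
online = OnlineLearner(w=0.0)

# True relationship in the stream: y = 2x, seen only after "release".
for x, y in [(1, 2), (2, 4), (3, 6)] * 20:
    online.observe(x, y)

print(frozen.predict(3))            # still 0.0: stuck at release-time ability
print(round(online.predict(3), 2))  # ~6.0: adapted from the stream
```

The point of the toy: both objects saw the same world after deployment, but only the one with an update rule changed its behavior, which is the property the comment says current releases lack.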