▲ | ActorNightly 2 days ago
Not a decade. More like a century, and that is assuming society figures itself out enough to do engineering on a planetary scale and that quantum computing turns out to be viable. Fundamentally, AGI requires two things.

First, it needs to be able to operate without prior information, learning as it goes. The core kernel shouldn't have any training on real-world concepts, only general language parsing that it can map onto some logic structure in order to work out a plan of action. For example, if you give the kernel the ability to send Ethernet packets, it should eventually figure out how to speak TLS and communicate with the modern web, even if that takes an insane amount of repetition. The point is that the kernel must be able to find its way through an arbitrarily complex problem space; then, as it gains access to more data, whether in real time or from memory, it gets more and more efficient. This part is solvable. After all, human brains do it. A single rack of Google TPUs is roughly the same petaflops as a human brain running at max capacity, if you assume each neuron activation is an add-multiply and a firing rate of 200 times per second, and humans don't use all of their brain all the time (rough numbers below).

The second part, the one that makes the intelligence general, is the ability to simulate reality faster than reality. Life is imperative by nature, and there are processes with chaotic effects (human brains being one of them) that have no good mathematical approximations. So if an AGI is to truly simulate a human brain well enough to predict behavior, it has to do so at an approximation level that is good enough, but also fast enough that it predicts your behavior before you exhibit it, with headroom left over to run simulations in parallel and pick the best course of action. For a single brain, you are probably looking at six full warehouses of TPUs.
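For concreteness, here is a back-of-envelope version of that brain-vs-rack comparison, a minimal sketch only. The neuron count, synapse count, per-chip throughput, and rack size are all assumptions for illustration, and the answer swings by orders of magnitude depending on whether you count one add-multiply per neuron firing or per synaptic event:

    # Hedged back-of-envelope estimate; every constant here is an assumption.
    NEURONS = 86e9             # commonly cited human neuron count
    SYNAPSES_PER_NEURON = 7e3  # rough average
    FIRING_RATE_HZ = 200       # max firing rate assumed above
    FLOPS_PER_EVENT = 2        # one add-multiply

    per_neuron = NEURONS * FIRING_RATE_HZ * FLOPS_PER_EVENT
    per_synapse = per_neuron * SYNAPSES_PER_NEURON

    # Hypothetical rack: 64 accelerator chips at ~275 TFLOP/s each (assumed).
    rack = 64 * 275e12

    print(f"brain, per-neuron accounting:  {per_neuron / 1e15:.3f} PFLOP/s")   # ~0.03
    print(f"brain, per-synapse accounting: {per_synapse / 1e15:.0f} PFLOP/s")  # ~240
    print(f"one rack:                      {rack / 1e15:.1f} PFLOP/s")         # ~17.6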
▲ | dist-epoch a day ago | parent | next [-]
AIs already fake-simulate the weather (a chaotic system) using about 1% of the resources used by the real-simulating supercomputers.
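That pattern, training a cheap learned surrogate to step a chaotic system forward instead of integrating the full physics, is easy to sketch on a toy problem. A minimal illustration on the Lorenz system (not weather, and the model choice is arbitrary), assuming numpy and scikit-learn are available:

    # Toy "learned surrogate for a chaotic system": fit a small regressor
    # to emulate one integration step of the Lorenz equations.
    import numpy as np
    from sklearn.neural_network import MLPRegressor

    def lorenz_step(state, dt=0.01, sigma=10.0, rho=28.0, beta=8.0 / 3.0):
        x, y, z = state
        dx = sigma * (y - x)
        dy = x * (rho - z) - y
        dz = x * y - beta * z
        return state + dt * np.array([dx, dy, dz])  # forward Euler: the "real" simulator

    # Generate a training trajectory with the real simulator.
    states = [np.array([1.0, 1.0, 1.0])]
    for _ in range(20000):
        states.append(lorenz_step(states[-1]))
    X, Y = np.array(states[:-1]), np.array(states[1:])

    # Train a small emulator to predict the next state from the current one.
    emulator = MLPRegressor(hidden_layer_sizes=(64, 64), max_iter=500, random_state=0)
    emulator.fit(X, Y)

    # Roll the emulator forward on its own: it stays plausible for a while,
    # then diverges, which is fine if you only need short-range forecasts.
    state = X[-1]
    for _ in range(50):
        state = emulator.predict(state.reshape(1, -1))[0]
    print("emulated state after 50 steps:", state)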
▲ | ctoth 2 days ago | parent | prev [-]
You want a "core kernel" with "general language parsing" but no training on real-world concepts. Read that sentence again. Slowly. What do you think "general language parsing" IS if not learned patterns from real-world data? You're literally describing a transformer and then saying we need to invent it. And your TLS example is deranged. You want an agent to discover the TLS protocol by randomly sending ethernet packets? The combinatorial search space is so large this wouldn't happen before the sun explodes. This isn't intelligence! This is bruteforce with extra steps! Transformers already ARE general algorithms with zero hardcoded linguistic knowledge. The architecture doesn't know what a noun is. It doesn't know what English is. It learns everything from data through gradient descent. That's the entire damn point. You're saying we need to solve a problem that was already solved in 2017 while claiming it needs a century of quantum computing. | |||||||||||||||||||||||