lukeschlather 5 days ago
The predictions in this paper are 100% correct. The author doesn't predict we would have ASI by now. They accurately predicted that Moore's law would likely start to break down by 2012, and they also accurately predicted that EUV would allow further scaling beyond that barrier, but that things would get harder.

You may think LLMs are nothing like "real" AI, but I'm curious what you think about the arguments in this paper, and what sort of hardware is required for a "real" AI, if a "real" AI does not require hardware capable of somewhere between 10^14 and 10^17 operations per second. Whether or not LLMs are the correct algorithm, the hardware question is much more straightforward, and that's what this paper is about.
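For what it's worth, the 10^14–10^17 range is consistent with the usual synapse-count back-of-envelope. A quick sketch (the parameter values here are common rough figures I'm assuming, not numbers quoted from the paper):

```python
# Back-of-envelope estimate of compute for brain-scale simulation.
# Assumed rough figures (not from the paper):
neurons = 1e11             # ~10^11 neurons in the human brain
synapses_per_neuron = 1e3  # ~10^3 synapses per neuron (low-end figure)
firing_rate_hz = 1e2       # ~100 Hz average signaling rate

ops_per_second = neurons * synapses_per_neuron * firing_rate_hz
print(f"{ops_per_second:.0e} ops/s")  # lands inside the 10^14–10^17 range
```

Bumping synapses per neuron toward 10^4, or charging more than one operation per synaptic event, pushes the estimate toward the upper end of the range.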
hyperpape 5 days ago | parent
The entire discussion in the software section is about simulating the brain.

> Creating superintelligence through imitating the functioning of the human brain requires two more things in addition to appropriate learning rules (and sufficiently powerful hardware): it requires having an adequate initial architecture and providing a rich flux of sensory input.

> The latter prerequisite is easily provided even with present technology. Using video cameras, microphones and tactile sensors, it is possible to ensure a steady flow of real-world information to the artificial neural network. An interactive element could be arranged by connecting the system to robot limbs and a speaker.

> Developing an adequate initial network structure is a more serious problem. It might turn out to be necessary to do a considerable amount of hand-coding in order to get the cortical architecture right. In biological organisms, the brain does not start out at birth as a homogenous tabula rasa; it has an initial structure that is coded genetically. Neuroscience cannot, at its present stage, say exactly what this structure is or how much of it needs to be preserved in a simulation that is eventually to match the cognitive competencies of a human adult. One way for it to be unexpectedly difficult to achieve human-level AI through the neural network approach would be if it turned out that the human brain relies on a colossal amount of genetic hardwiring, so that each cognitive function depends on a unique and hopelessly complicated inborn architecture, acquired over aeons in the evolutionary learning process of our species.