| ▲ | glitchc 3 days ago |
| We don't know if AGI is even possible outside of a biological construct yet. This is key. Can we land on AGI without some clear indication of possibility (à la Chappie)? Possibly, but the likelihood is low. Quite low. It's essentially groping in the dark. A good contrast is quantum computing. We know that's possible, even feasible, and are now trying to overcome the engineering hurdles. And people still think that's vaporware. |
|
| ▲ | tshaddox 3 days ago | parent | next [-] |
| > We don't know if AGI is even possible outside of a biological construct yet. This is key.
A discovery that AGI is impossible in principle to implement in an electronic computer would require a major fundamental discovery in physics that answers the question “what is the brain doing in order to implement general intelligence?” |
| |
| ▲ | AIPedant 3 days ago | parent | next [-] | | It is vacuously true that a Turing machine can implement human intelligence: simply solve the Schrödinger equation for every atom in the human body and local environment. Obviously this is cost-prohibitive and we don’t have even 0.1% of the data required to make the simulation. Maybe we could simulate every single neuron instead, but again it’ll take many decades to gather the data from living human brains, and it would still be extremely expensive computationally, since we would need to simulate every protein and mRNA molecule across billions of neurons and glial cells. So the question is whether human intelligence has higher-level primitives that can be implemented more efficiently - sort of akin to solving differential equations: is there a “symbolic solution” or are we forced to go “numerically” no matter how clever we are? | | |
| ▲ | walleeee 3 days ago | parent | next [-] | | > It is vacuously true that a Turing machine can implement human intelligence
The case of simulating all known physics is stronger, so I'll consider that. But still it tells us nothing, as the Turing machine can't be built. It is a kind of tautology wherein computation is taken to "run" the universe via the formalism of quantum mechanics, which is taken to be a complete description of reality, permitting the assumption that brains do intelligence by way of unknown combinations of known factors. For what it's worth, I think the last point might be right, but the argument is circular. Here is a better one. We can/do design narrow-boundary intelligence into machines. We can see that we are ourselves assemblies of a huge number of tiny machines which we only partially understand. Therefore it seems plausible that computation might be sufficient for biology. But until we better understand life we'll not know. Whether we can engineer it or whether it must grow, and on what substrates, are also relevant questions. If it appears we are forced to "go numerically", as you say, it may just indicate that we don't know how to put the pieces together yet. It might mean that a human zygote and its immediate environment is the only thing that can put the pieces together properly given energetic and material constraints. It might also mean we're missing physics, or maybe even philosophy: fundamental notions of what it means to have/be biological intelligence. Intelligence, human or otherwise, isn't well defined. | | |
| ▲ | Davidzheng 3 days ago | parent [-] | | QM is a testable hypothesis, so I don't think it's necessarily like an axiomatic assumption here. I'm not sure what you mean by "it tells us nothing, as ... can't be built". It tells us there's no theoretical constraint, only an engineering constraint, to simulating the human brain (and all the tasks). | | |
| ▲ | walleeee 3 days ago | parent [-] | | Sure, you can simulate a brain. If and when the simulation starts to talk you can even claim you understand how to build human intelligence in a limited sense. You don't know if it's a complete model of the organism until you understand the organism. Maybe you made a p zombie. Maybe it's conscious but lacks one very particular faculty that human beings have by way of some subtle phenomena you don't know about. There is no way to distinguish between a faithfully reimplemented human being and a partial hackjob that happens to line up with your blind spots without ontological omniscience. Failing that, you just get to choose what you think is important and hope it's everything relevant to behaviors you care about. |
|
| |
| ▲ | tshaddox 3 days ago | parent | prev | next [-] | | > It is vacuously true that a Turing machine can implement human intelligence: simply solve the Schrödinger equation for every atom in the human body and local environment. Yes, that is the bluntest, lowest level version of what I mean. To discover that this wouldn’t work in principle would be to discover that quantum mechanics is false. Which, hey, quantum mechanics probably is false! But discovering the theory which both replaces quantum mechanics and shows that AGI in an electronic computer is physically impossible is definitely a tall order. | | |
| ▲ | card_zero 3 days ago | parent [-] | | There's that aphorism that goes: people who thought the epitome of technology was a steam engine pictured the brain as pipes and connecting rods, people who thought the epitome of technology was a telephone exchange pictured the brain as wires and relays... and now we have computers, and the fact that they can in principle simulate anything at all is a red herring, because we can't actually make them simulate things we don't understand, and we can't always make them simulate things we do understand, either, when it comes down to it. We still need to know what the thing is that the brain does, it's still a hard question, and maybe it would even be a kind of revolution in physics, just not in fundamental physics. | | |
| ▲ | thfuran 3 days ago | parent [-] | | > We still need to know what the thing is that the brain does
Yes, but not necessarily at the level where the interesting bits happen. It’s entirely possible to simulate poorly understood emergent behavior by simulating the underlying effects that give rise to it. | | |
| ▲ | card_zero 3 days ago | parent [-] | | Can I paraphrase that as make an imitation and hack it around until it thinks, or did I miss the point? |
|
|
| |
| ▲ | b_e_n_t_o_n 3 days ago | parent | prev | next [-] | | It's not even known if we can observe everything required to replicate consciousness. | |
| ▲ | Davidzheng 3 days ago | parent | prev | next [-] | | I'd argue LLMs and deep learning are much more on the intelligence-from-complexity side than the nice-symbolic-solution side of things. Probably the human neuron, though intrinsically very complex, has nice low-loss abstractions to small circuits. But at the higher levels, we don't build artificial neural networks by writing the programs ourselves. | |
| ▲ | missingrib 3 days ago | parent | prev [-] | | That is only true if consciousness is physical and the result of some physics going on in the human brain. We have no idea if that's true. | | |
| ▲ | Timwi 2 days ago | parent | next [-] | | Whatever it is that gives rise to consciousness is, by definition, physics. It might not be known physics, but even if it isn't known yet, it's within the purview of physics to find out. If you're going to claim that it could be something that fundamentally can't be found out, then you're admitting to thinking in terms of magic/superstition. | |
| ▲ | amanaplanacanal 2 days ago | parent | prev [-] | | You got downvoted so I gave you an upvote to compensate. We all seem to be working with conflicting ideas. If we are strict materialists, and everything is physical, then in reality we don't have free will and this whole discussion is just the universe running on automatic. That may indeed be true, but we are all pretending that it isn't. Some big cognitive dissonance happening here. |
|
| |
| ▲ | manquer 3 days ago | parent | prev | next [-] | | Not necessarily: for a given definition of AGI you could have a mathematical proof that it is incomputable, similar to how Gödel's incompleteness theorems work. It need not even be incomputable; it could be NP-hard and thus practically incomputable, or it could be undecidable, i.e. a version of the halting problem. There are any number of ways our current models of mathematics or computation could in theory be shown incapable of expressing AGI, without needing a fundamental change in physics. | |
| ▲ | throwaway31131 3 days ago | parent | prev | next [-] | | We would also need a definition of AGI that is provable or disprovable. We don’t even have a workable definition, never mind a machine. | | |
| ▲ | thfuran 3 days ago | parent | next [-] | | Only if we need to classify things near the boundary. If we make something that’s better at every test that we can devise than any human we can find, I think we can say that no reasonable definition of AGI would exclude it without actually arriving at a definition. | |
| ▲ | tshaddox 3 days ago | parent | prev [-] | | We don’t need such a definition of general intelligence to conclude that biological humans have it, so I’m not sure why we’d need such a definition for AGI. | |
| ▲ | kelnos 3 days ago | parent | next [-] | | I disagree. We claim that biological humans have general intelligence because we are biased and arrogant, and experience hubris. I'm not saying we aren't generally intelligent, but a big part of believing we are is because not believing so would be psychologically and culturally disastrous. I fully expect that, as our attempts at AGI become more and more sophisticated, there will be a long period where there are intensely polarizing arguments as to whether or not what we've built is AGI or not. This feels so obvious and self-evident to me that I can't imagine a world where we achieve anything approaching consensus on this quickly. If we could come up with a widely-accepted definition of general intelligence, I think there'd be less argument, but it wouldn't preclude people from interpreting both the definition and its manifestation in different ways. | | |
| ▲ | Davidzheng 3 days ago | parent | next [-] | | I can say it. Humans are not "generally intelligent". We are intelligent in a distribution of environments which are similar enough to the ones we are used to. There's no way to be intelligent with no priors on the environment, basically by information theory (you can make the environment adversarial to the learning efficiency that "intelligent" beings get from their priors). | |
| ▲ | mindcrime 3 days ago | parent | prev | next [-] | | > We claim that biological humans have general intelligence because we are biased and arrogant, and experience hubris.
No, we say it because - in this context - we are the definition of general intelligence. Approximately nobody talking about AGI takes the "G" to stand for "most general possible intelligence that could ever exist." All it means is "as general as an average human." So it doesn't matter if humans are "really general intelligence" or not; we are the benchmark being discussed here. | |
| ▲ | mindcrime 3 days ago | parent [-] | | If you don't believe me, go back to the introduction of the term[1]:

> By advanced artificial general intelligence, I mean AI systems that rival or surpass the human brain in complexity and speed, that can acquire, manipulate and reason with general knowledge, and that are usable in essentially any phase of industrial or military operations where a human intelligence would otherwise be needed. Such systems may be modeled on the human brain, but they do not necessarily have to be, and they do not have to be "conscious" or possess any other competence that is not strictly relevant to their application. What matters is that such systems can be used to replace human brains in tasks ranging from organizing and running a mine or a factory to piloting an airplane, analyzing intelligence data or planning a battle.

It's pretty clear here that the notion of "artificial general intelligence" is being defined relative to human intelligence. Or see what Ben Goertzel - probably the one person most responsible for bringing the term into mainstream usage - had to say on the issue[2]:

> “Artificial General Intelligence”, AGI for short, is a term adopted by some researchers to refer to their research field. Though not a precisely defined technical term, the term is used to stress the “general” nature of the desired capabilities of the systems being researched -- as compared to the bulk of mainstream Artificial Intelligence (AI) work, which focuses on systems with very specialized “intelligent” capabilities. While most existing AI projects aim at a certain aspect or application of intelligence, an AGI project aims at “intelligence” as a whole, which has many aspects, and can be used in various situations. There is a loose relationship between “general intelligence” as meant in the term AGI and the notion of “g-factor” in psychology [1]: the g-factor is an attempt to measure general intelligence, intelligence across various domains, in humans.

Note the reference to "general intelligence" as a contrast to specialized AIs (what people used to call "narrow AI", even though he doesn't use the term here). And the rest of that paragraph shows that the whole notion is clearly framed in terms of comparison to human intelligence. That point is made even more clear when the paper goes on to say:

> Modern learning theory has made clear that the only way to achieve maximally general problem-solving ability is to utilize infinite computing power. Intelligence given limited computational resources is always going to have limits to its generality. The human mind/brain, while possessing extremely general capability, is best at solving the types of problems which it has specialized circuitry to handle (e.g. face recognition, social learning, language learning; ...)

Note that they chose to specifically use the more precise term "maximally general problem-solving ability" when referring to something beyond the range of human intelligence, and then continued to clearly show that the overall idea is - again - framed in terms of human intelligence. One could also consult Marvin Minsky's words[3] from back around the founding of the field of "Artificial Intelligence" altogether:

> “In from three to eight years, we will have a machine with the general intelligence of an average human being. I mean a machine that will be able to read Shakespeare, grease a car, play office politics, tell a joke, have a fight.”

Simply put, with a few exceptions, the vast majority of people working in this space simply take AGI to mean something approximately like "human-like intelligence". That's all. No arrogance or hubris needed.

[1]: https://web.archive.org/web/20110529215447/http://www.foresi... [2]: https://goertzel.org/agiri06/%255B1%255D%2520Introduction_No... [3]: https://www.science.org/doi/10.1126/science.ado7069 |
| |
| ▲ | 3 days ago | parent | prev [-] | | [deleted] |
| |
| ▲ | bastawhiz 3 days ago | parent | prev [-] | | Well general intelligence in humans already exists, whereas general intelligence doesn't yet exist in machines. How do we know when we have it? You can't even simply compare it to humans and ask "is it able to do the same things?" because your answer depends on what you define those things to be. Surely you wouldn't say that someone who can't remember names or navigate without GPS lacks general intelligence, so it's necessary to define what criteria are absolutely required. | | |
| ▲ | tshaddox 3 days ago | parent | next [-] | | > You can't even simply compare it to humans and ask "is it able to do the same things?" because your answer depends on what you define those things to be. Right, but you can’t compare two different humans either. You don’t test each new human to see if they have it. Somehow we conclude that humans have it without doing either of those things. | | |
| ▲ | Jensson 3 days ago | parent [-] | | > You don’t test each new human to see if they have it
We do, it's called school, and we label some humans with different learning disabilities. Some of those learning disabilities are grave enough that they can't learn to do tasks we expect humans to be able to learn; such humans can be argued to not possess the general intelligence we expect from humans. Interacting with an LLM today is like interacting with an Alzheimer's patient: they can do things they already learned well, but poke at it and it all falls apart and they start repeating themselves. They can't learn. | |
| ▲ | tshaddox 2 days ago | parent [-] | | Yes, there are diseases, injuries, etc. which can impair a human’s cognitive abilities. Sometimes those impairments are so severe that we don’t consider the human to be intelligent (or even alive!). But note that we still make this distinction without anything close to a rigorous formal definition of general intelligence. |
|
| |
| ▲ | jibal 3 days ago | parent | prev [-] | | How do we know when a newborn has achieved general intelligence? We don't need a definition amenable to proof. | | |
| ▲ | Jensson 3 days ago | parent | next [-] | | It's a near clone of a model that already has it; we don't need to prove it has general intelligence, we just assume it does because most do have it. | |
| ▲ | jibal 2 days ago | parent | prev [-] | | P.S. The response is just an evasion. |
|
|
|
| |
| ▲ | aorloff 3 days ago | parent | prev | next [-] | | A question which will be trivial to answer once you properly define what you mean by "brain". Presumably "brains" do not do many of the things that you will measure AGI by, and your brain is having trouble understanding the idea that "brain" is not well understood by brains. Does it make it any easier if we simplify the problem to: what is the human doing that makes (him) intelligent? If you know your historical context, no. This is not a solved problem. | |
| ▲ | tshaddox 3 days ago | parent [-] | | > Does it make it any easier if we simplify the problem to: what is the human doing that makes (him) intelligent ? Sure, it doesn’t have to be literally just the brain, but my point is you’d need very new physics to answer the question “how does a biological human have general intelligence?” | | |
| ▲ | aorloff 3 days ago | parent [-] | | Suppose dogs invent their own idea of intelligence but they say only dogs have it. Do we think new physics would be required to validate dog intelligence? | |
| ▲ | tshaddox 3 days ago | parent [-] | | The claim that only dogs have intelligence is open for criticism, just like every other claim. I’m not sure what your point is, because the source of the claim is irrelevant anyway. The reason I think that humans have general intelligence is not that humans say that they have it. |
|
|
| |
| ▲ | 3 days ago | parent | prev | next [-] | | [deleted] | |
| ▲ | epiccoleman 3 days ago | parent | prev | next [-] | | Would that really be a physics discovery? I mean I guess everything ultimately is. But it seems like maybe consciousness could be understood in terms of "higher level" sciences - somewhere on the chain of neurology->biology->chemistry->physics. | | |
| ▲ | mmoskal 3 days ago | parent | next [-] | | Consciousness (subjective experience) is possibly orthogonal to intelligence (ability to achieve complex goals). We definitely have a better handle on what intelligence is than consciousness. | | |
| ▲ | epiccoleman 3 days ago | parent [-] | | That does make sense, reminds me of Blindsight, where one central idea is that conscious experience might not even be necessary for intelligence (and possibly even maladaptive). |
| |
| ▲ | marcosdumay 3 days ago | parent | prev | next [-] | | > Would that really be a physics discovery? No, it could be something that proves all of our fundamental mathematics wrong. The GP just gave the more conservative option. | | |
| ▲ | tshaddox 3 days ago | parent [-] | | I’m not sure what you mean. This new discovery in mathematics would also necessarily tell us something new about what is computable, which is physics. | | |
| |
| ▲ | tshaddox 3 days ago | parent | prev [-] | | That sounds like you’re describing AGI as being impractical to implement in an electronic computer, not impossible in principle. | | |
| ▲ | epiccoleman 3 days ago | parent [-] | | Yeah, I guess I'm not taking a stance on that above, just wondering where in that chain holds the most explanatory power for intelligence and/or consciousness. I don't think there's any real reason to think intelligence depends on "meat" as its substrate, so AGI seems in principle possible to me. Not that my opinion counts for much on this topic, since I don't really have any relevant education on the topic. But my half baked instinct is that LLMs in and of themselves will never constitute true AGI. The biggest thing that seems to be missing from what we currently call AI is memory - and it's very interesting to see how their behavior changes if you hook up LLMs to any of the various "memory MCP" implementations out there. Even experimenting with those sorts of things has left me feeling there's still something (or many somethings) missing to take us from what is currently called "AI" to "AGI" or so-called super intelligence. | | |
| ▲ | kelnos 3 days ago | parent | next [-] | | > I don't think there's any real reason to think intelligence depends on "meat" as its substrate This made me think of... ok, so let's say that we discover that intelligence does indeed depend on "meat". Could we then engineer a sort of organic computer that has general intelligence? But could we also claim that this organic computer isn't a computer at all, but is actually a new genetically engineered life form? | |
| ▲ | mindcrime 3 days ago | parent | prev [-] | | > But my half baked instinct is that LLMs in and of themselves will never constitute true AGI.
I agree. But... LLMs are not the only game in town. They are just one approach to AI that is currently being pursued. The current dominant approach by investment dollars, attention, and hype, to be sure. But still far from the only thing around. |
|
|
| |
| ▲ | add-sub-mul-div 3 days ago | parent | prev | next [-] | | It doesn't have to be impossible in principle, just impossible given how little we understand consciousness now or will understand anytime in the next century. Impossible for all intents and purposes for anyone living today. | |
| ▲ | tshaddox 3 days ago | parent [-] | | > Impossible for all intents and purposes for anyone living today. Sure, but tons of things which are obviously physically possible are also out of reach for anyone living today. |
| |
| ▲ | slashdave 3 days ago | parent | prev | next [-] | | That question is not a physics question | |
| ▲ | lll-o-lll 3 days ago | parent | prev | next [-] | | It’s not really “what is the brain doing”; that path leads to “quantum mysticism”. What we lack is a good theoretical framework about complex emergence. More maths in this space please. Intelligence is an emergent phenomenon; all the interesting stuff happens at the boundary of order and disorder but we don’t have good tools in this space. | |
| ▲ | ysavir 3 days ago | parent | prev | next [-] | | Seems the opposite way round to me. We couldn't conclusively say that AGI is possible in principle until some physics (or rather biology) discovery explains how it would be possible. Until then, anything we engineer is an approximation at best. | |
| ▲ | bilsbie 3 days ago | parent | prev [-] | | Not necessarily. It could simply be a question of scale. Being analog and molecular means that the brain could be doing enormously more than any foreseeable computer. For a simple example, what if every neuron is doing trillions of calculations? (I'm not saying it is, just that it's possible.) | |
| ▲ | tshaddox 3 days ago | parent [-] | | I think you’re merely referring to what is feasible in practice to compute with our current or near-future computers. I was referring to what is computable in principle. | |
| ▲ | bilsbie 2 days ago | parent [-] | | Right. That’s what I was responding to. OP wrote:
> We don't know if AGI is even possible outside of a biological construct yet
And you replied that means it's impossible in principle. I'm correcting you in saying that it can be impossible in ways other than in principle. |
|
|
|
|
| ▲ | sixo 3 days ago | parent | prev | next [-] |
| On the contrary, we have one working example of general intelligence (humans) and zero of quantum computing. |
| |
| ▲ | 3 days ago | parent | next [-] | | [deleted] | |
| ▲ | glitchc 3 days ago | parent | prev | next [-] | | That's covered in the biological construct part. And no, we definitely do have quantum computers. They're just not practical yet. | |
| ▲ | bee_rider 3 days ago | parent | prev | next [-] | | Do we have a specific enough definition of general intelligence that we can exclude all non-human animals? | | |
| ▲ | mattnewton 3 days ago | parent | next [-] | | Why does it need to exclude all non human animals? Could it not be a difference of degree rather than of kind? | | |
| ▲ | bee_rider 3 days ago | parent | next [-] | | The post I was responding to had:
> On the contrary, we have one working example of general intelligence (humans)
I think some animals probably have what most people would informally call general intelligence, but maybe there’s some technical definition that makes me wrong. | |
| ▲ | mitthrowaway2 3 days ago | parent | next [-] | | Their point is not in any way weakened if you read "one working example" as "at least one working example". | | |
| ▲ | bee_rider 3 days ago | parent [-] | | Oh, good point, I hadn’t noticed the alternative reading. That makes sense, then. |
| |
| ▲ | dsubburam 3 days ago | parent | prev | next [-] | | I do not know how "general intelligence" is defined, but there are a set of features we humans have that other animals mostly don't, as per the philosopher Roger Scruton[1], that I am reproducing from memory (errors mine):

1. Animals have desires, but do not make choices. We can choose to do what we do not desire, and choose not to do what we desire. For animals, one does not need to make this distinction to explain their behavior (Occam's razor) - they simply do what they desire.

2. Animals "live in a world of perception" (Schopenhauer). They only engage with things as they are. They do not reminisce about the past, plan for the future, or fantasize about the impossible. They do not ask "what if?" or "why?". They lack imagination.

3. Animals do not have the higher emotions that require a conceptual repertoire, such as regret, gratitude, shame, pride, guilt, etc.

4. Animals do not form complex relationships with others, because that requires the higher emotions like gratitude and resentment, and concepts such as rights and responsibilities.

5. Animals do not get art or music. We can pay disinterested attention to a work of art (or nature) for its own sake, taking pleasure from the exercise of our rational faculties thereof.

6. Animals do not laugh. I do not know if the science/philosophy of laughter is settled, but it appears to me to be some kind of phenomenon that depends on civil society.

7. Animals lack language in the full sense of being able to engage in reason-giving dialogue with others, justifying your actions and explaining your intentions.

Scruton believed that all of the above arise together. I know this is perhaps a little OT, but I seldom if ever see these issues mentioned in discussions about AGI. Maybe less applicable to super-intelligence, but certainly applicable to the "artificial human" part of the equation.

[1] Philosophy: Principles and Problems. Roger Scruton | |
| ▲ | jibal 3 days ago | parent | prev [-] | | If some animals also have general intelligence then we have more than one example, so this simply isn't relevant. |
| |
| ▲ | Fricken 3 days ago | parent | prev [-] | | We're fixated on human intelligence but a computer cannot even emulate the intelligence of a honeybee or an ant. | | |
| ▲ | fastball 3 days ago | parent [-] | | How do you mean? AFAICT computers can definitely do that. Sure, it won't be the size of an ant, but we definitely have models running on computers that have much more complexity than the life of an ant. | | |
| ▲ | Jensson 3 days ago | parent | next [-] | | > Sure, it won't be the size of an ant, but we definitely have models running on computers that have much more complexity than the life of an ant.
Do we? Where is the model that can run an ant and navigate a 3D environment, parse visuals and different senses to orient itself, and figure out where it can climb to get to where it needs to go? Then put that in an average forest and have it navigate trees and other insects, try to cooperate with other ants, and find its way back. Or build an anthill: an ant can build an anthill full of tunnels everywhere that doesn't collapse, without using a plan. Do we have such a model? I don't think we have anything that can do that yet. Waymo is trying to solve a much simpler problem and they still struggle, so I am pretty sure we still can't run anything even remotely as complex as an ant. Maybe a simple worm, but not an ant. | |
| ▲ | Fricken 3 days ago | parent | prev [-] | | Having aptitude in mathematics was once considered the highest form of human intelligence, yet a simple pocket calculator can beat the pants off most humans at arithmetic tasks. Conversely, something we regard as simple, such as selecting a key from a keychain and using it to unlock a door not previously encountered, is beyond the current abilities of any machine. I suspect you might be underestimating the real complexity of what bees and ants do. Self-driving cars as well seemed like a simpler problem before concerted efforts were made to build one. | |
| ▲ | dragonwriter 3 days ago | parent [-] | | > Having aptitude in mathematics was once considered the highest form of human intelligence, yet a simple pocket calculator can beat the pants off most humans at arithmetic tasks. Mathematics has been a lot more than arithmetic for... a very long time. | | |
| ▲ | Jensson 3 days ago | parent [-] | | But arithmetic was seen as requiring intelligence, as was chess. |
|
|
|
|
| |
| ▲ | jibal 3 days ago | parent | prev [-] | | No one said "exclusively humans", and that's not relevant. |
| |
| ▲ | adastra22 3 days ago | parent | prev [-] | | There are many working quantum computers… | | |
| ▲ | sixo 2 days ago | parent [-] | | ah, I mean, working in the sense of OP: that a system which overcomes the "engineering hurdles" is actually feasible and will be successful. To be blocked merely by "engineering hurdles" puts QC in approximately the same place as fusion. | | |
| ▲ | adastra22 2 days ago | parent [-] | | There are working quantum computers that are not only feasible, but exist, can be rented on the cloud, and that people pay money to use. Whether these are a commercial success at this point in time is missing the forest for the trees. A LOT of money has been put into getting as far as we have, and the limited market for using these machines at the moment means that getting a return on investment right now is difficult. But this is/has been true of every new technology. And quantum computers are getting better and more energy efficient year by year. |
|
|
|
|
| ▲ | singpolyma3 3 days ago | parent | prev | next [-] |
| This makes no sense. If you believe in, e.g., a mind or soul, then maybe it's possible we cannot make AGI. But if we are purely biological, then obviously it's possible to replicate that in principle. |
| |
| ▲ | DrewADesign 3 days ago | parent [-] | | That doesn’t contradict what they said. We may one day design a biological computing system that is capable of it. We don’t entirely understand how neurons work; it’s reasonable to posit that the differences many AGI boosters assert don’t matter actually do matter, just not in ways we’ve discovered yet. | |
| ▲ | kelnos 3 days ago | parent | next [-] | | I mentioned this in another thread, but I do wonder if we engineer a sort of biological computer, will it really be a computer at all, and not a new kind of life itself? | | |
| ▲ | jakeydus 3 days ago | parent | next [-] | | > not a new kind of life itself? In my opinion, this is more a philosophical question than an engineering one. Is something alive because it’s conscious? Is it alive because it’s intelligent? Is a virus alive, or a bacteria, or an LLM? Beats me. | |
| ▲ | DrewADesign 2 days ago | parent | prev [-] | | Maybe — though we’d still have engineered it, which is the point I was trying to make. |
| |
| ▲ | slashdave 3 days ago | parent | prev [-] | | We understand how neurons work in quite a bit of detail. | |
| ▲ | DrewADesign 3 days ago | parent [-] | | The Allen Institute doesn’t seem to think so. We don’t even know how the brain of a roundworm ticks and it’s only got 302 neurons— all of which are mapped, along with their connections. |
|
|
|
|
| ▲ | jibal 3 days ago | parent | prev | next [-] |
| It's not "key"; it's not even relevant ... the proof will be in the pudding. Proving a priori that some outcome is possible plays no role in achieving it. And you slid, motte-and-bailey-like, from "know" to "some clear indication of possibility" -- we have extremely clear indications that it's possible, since there's no reason other than a belief in magic to think that "biological" is a necessity. Whether is feasible or practical or desirable to achieve AGI is another matter, but the OP lays out multiple problem areas to tackle. |
|
| ▲ | root_axis 3 days ago | parent | prev | next [-] |
| The practical feasibility of quantum computing is definitely still an open research question. |
|
| ▲ | 3 days ago | parent | prev | next [-] |
| [deleted] |
|
| ▲ | DrewADesign 3 days ago | parent | prev | next [-] |
| Sometimes I think we’re like cats that learned how to make mirrors without really understanding them, and are so close to making one good enough that the other cat becomes sentient. |
|
| ▲ | slashdave 3 days ago | parent | prev [-] |
| > We don't know if AGI is even possible outside of a biological construct yet Of course it is. A brain is just a machine like any other. |
| |
| ▲ | glitchc 3 days ago | parent [-] | | Except we don't understand how the brain actually works and have yet to build a machine that behaves like it. |
|