epiccoleman · 3 days ago:
Would that really be a physics discovery? I mean, I guess everything ultimately is. But it seems like maybe consciousness could be understood in terms of "higher level" sciences - somewhere on the chain of neurology -> biology -> chemistry -> physics.

mmoskal · 3 days ago (replying to epiccoleman):
Consciousness (subjective experience) is possibly orthogonal to intelligence (the ability to achieve complex goals). We definitely have a better handle on what intelligence is than on what consciousness is.

epiccoleman · 3 days ago (replying to mmoskal):
That does make sense; it reminds me of Blindsight, where one central idea is that conscious experience might not even be necessary for intelligence (and might possibly even be maladaptive).

marcosdumay · 3 days ago (replying to epiccoleman):
> Would that really be a physics discovery?

No, it could be something that proves all of our fundamental mathematics wrong. The GP just gave the more conservative option.

tshaddox · 3 days ago (replying to marcosdumay):
I’m not sure what you mean. This new discovery in mathematics would also necessarily tell us something new about what is computable, and what is computable is a question of physics.

tshaddox · 3 days ago (replying to epiccoleman):
That sounds like you’re describing AGI as being impractical to implement in an electronic computer, not impossible in principle.

epiccoleman · 3 days ago (replying to tshaddox):
Yeah, I guess I'm not taking a stance on that above, just wondering which level in that chain holds the most explanatory power for intelligence and/or consciousness. I don't think there's any real reason to think intelligence depends on "meat" as its substrate, so AGI seems in principle possible to me. Not that my opinion counts for much here, since I don't really have any relevant education on the topic.

But my half-baked instinct is that LLMs in and of themselves will never constitute true AGI. The biggest thing that seems to be missing from what we currently call AI is memory - and it's very interesting to see how their behavior changes if you hook up LLMs to any of the various "memory MCP" implementations out there (a sketch of what I mean follows below). Even experimenting with those sorts of things has left me feeling there's still something (or many somethings) missing to take us from what is currently called "AI" to "AGI" or so-called superintelligence.
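
Roughly the kind of thing I mean - a minimal sketch of a "memory MCP" server, assuming the official MCP Python SDK's FastMCP interface. The tool names and the JSON-file store here are illustrative, not any particular published implementation:

    import json
    from pathlib import Path

    from mcp.server.fastmcp import FastMCP

    STORE = Path("memory.json")  # illustrative on-disk store, not a real package's path
    mcp = FastMCP("memory")

    def _load() -> dict:
        # Re-read the whole store on every call; fine for a toy, not for scale.
        return json.loads(STORE.read_text()) if STORE.exists() else {}

    @mcp.tool()
    def remember(key: str, value: str) -> str:
        """Persist a fact under a key so later sessions can retrieve it."""
        facts = _load()
        facts[key] = value
        STORE.write_text(json.dumps(facts, indent=2))
        return f"stored {key!r}"

    @mcp.tool()
    def recall(key: str) -> str:
        """Return a previously stored fact, or say nothing is there."""
        return _load().get(key, f"nothing stored under {key!r}")

    if __name__ == "__main__":
        mcp.run()  # stdio transport by default, so an MCP-capable client can attach

Once a client wires remember/recall in as tools, the model can carry state across conversations that the bare LLM lacks - which is exactly where the behavior starts to change, and also where it still feels like something is missing.
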
kelnos · 3 days ago (replying to epiccoleman):
> I don't think there's any real reason to think intelligence depends on "meat" as its substrate

This made me think of... ok, so let's say that we discover that intelligence does indeed depend on "meat". Could we then engineer a sort of organic computer that has general intelligence? But could we also claim that this organic computer isn't a computer at all, but is actually a new genetically engineered life form?

mindcrime · 3 days ago (replying to epiccoleman):
> But my half-baked instinct is that LLMs in and of themselves will never constitute true AGI.

I agree. But... LLMs are not the only game in town. They are just one approach to AI that is currently being pursued - the dominant one by investment dollars, attention, and hype, to be sure, but still far from the only thing around.
|
|