| ▲ | A_D_E_P_T 17 hours ago |
| Oh man, Stephen Wolfram and Jürgen Schmidhuber are probably fuming at the fact that this is called a "new" mathematical framework. It's all very old, and quite conventional, even popular -- not exactly the road not taken. What the author did was use the Physical Church-Turing thesis, and Kleene's second recursion theorem, to show that: (1) If a universe’s dynamics are computable (PCT), and (2) the universe can implement universal computation (RPCT), then (3) the universe can simulate itself, including the computer doing the simulating. That's basically all. And thus "there would be two identical instances of us, both equally 'real'." (Two numerically distinct processes are empirically identical if they are indistinguishable. You might remember this sort of thing from late 20th c. philosophy coursework.) He also uses Rice’s theorem (old) to show that there is no uniform measure over the set of "possible universes." It's all very interesting, but it's more a review article than a "new mathematical framework." The notion of a mathematical/simulated universe is as old as Pythagoras (~550 BC), and Rice, Church-Turing, and Kleene are all approaching the 100-year mark. |
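The self-simulation step leans on the same fixed-point trick that makes quines possible: Kleene's second recursion theorem guarantees that a program can obtain and use its own description. A minimal Python illustration (my sketch, not from the paper):

```python
# A two-line quine: a program whose output is exactly its own source.
# The string s plays the role of the program's "description" of itself.
s = 's = %r\nprint(s %% s)'
print(s % s)
```

Running it prints the two lines above verbatim; a universe-simulating computer sitting inside the universe faces the same kind of self-reference, just at an absurdly larger scale.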
|
| ▲ | HPsquared 16 hours ago | parent | next [-] |
| I'm no mathematician, but doesn't this come up against Gödel's incompleteness theorem? My brain has that roughly as "If you have a system and a model of that system, but the model is also part of the same system, something something, impossible" |
| |
| ▲ | keepamovin 16 hours ago | parent | next [-] | | Isn't GIT that there can be a statement that is true in a system, but can't be proven one way or the other from the system's axioms? And that this holds for all such axiom systems? In other words, the axioms are an incomplete description of the system. Maybe the problem is axiomatic deduction; we need a new inference-ology? | |
| ▲ | bananaflag 15 hours ago | parent | prev | next [-] | | No, this sort of self-reflection is exactly what makes Gödel/Turing/etc impossibility results work ("strange loops" and all that). | | |
| ▲ | vasco 15 hours ago | parent [-] | | Can you explain further? Maybe I'm out of my depth here, but if you want to simulate Universe X plus the computer Y that simulates X, then you'd need at least 1 extra bit of memory (likely far more) to encompass the simulation plus the computer running it (X+Y). The computer running the simulation is, by definition, not part of the simulation, so how can it be that it can truly simulate itself? | | |
| ▲ | vbard 5 hours ago | parent | next [-] | | In Men in Black II (2002), Will Smith learned that a miniature civilization existed in a locker. Later, Will Smith’s character learned that our civilization was in a locker of a larger civilization. By thinking of memory usage, you’re restricting yourself to our perceived physical limits within our perceived reality. But, what if the things running the simulation did not have those limits? E.g. maybe data could be stored in an infinite number of multiverses outside of the infinite simulations being discussed. Any of the simulations could potentially simulate universes like ours while still allowing those simulations to contain others, to be contained by others, to have references to others, to have reflective references, etc. This makes anything and everything possible while not necessarily removing limits we have in our own simulation. It just depends on what’s running the simulation. | | | |
| ▲ | blovescoffee 13 hours ago | parent | prev | next [-] | | Not quite: compression enables you to simulate / represent / encode x data with less than x memory. | |
| ▲ | stevesimmons 7 hours ago | parent | next [-] | | Only for those inputs that are compressible. If a compressor can compress every input of length N bits into fewer than N bits, then at least 2 of the 2^N possible inputs have the same output. Thus there cannot exist a universal compressor. Modify as desired for fractional bits. The essential argument is the same. | |
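The counting argument is easy to check directly: for N = 3 there are 2^3 = 8 inputs but only 2^0 + 2^1 + 2^2 = 7 strictly shorter bitstrings. A quick Python sketch of the pigeonhole step (illustrative, not from the comment):

```python
from itertools import product

def bitstrings(n):
    """All bitstrings of exactly length n."""
    return ["".join(bits) for bits in product("01", repeat=n)]

N = 3
inputs = bitstrings(N)                                   # 2^N strings of length N
shorter = [s for k in range(N) for s in bitstrings(k)]   # all strings of length < N

# Pigeonhole: 8 inputs but only 7 possible shorter outputs, so any
# "compressor" mapping every input to a strictly shorter string must collide.
print(len(inputs), len(shorter))  # prints "8 7"
```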
| ▲ | qingcharles 3 hours ago | parent | prev [-] | | Would the compressibility of the state of the universe be useful to prove whether we are in a simulation already? (i.e. it is hard to compress data that is already compressed) |
| |
| ▲ | lascargroup 13 hours ago | parent | prev [-] | | Roughly speaking, Gödel encoded (or “simulated”) the formal part of mathematics within arithmetic (using operations such as addition and multiplication), and constructed a sentence that says “this sentence is unprovable” within that simulation. |
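The encoding itself is elementary: assign each symbol a number, then pack a sequence of symbols into one integer via prime exponents, which unique factorization lets you decode again. A toy Python sketch (my illustration of the idea, not Gödel's exact scheme, which uses exponent c+1 so that zero codes stay decodable):

```python
PRIMES = [2, 3, 5, 7, 11, 13]  # enough primes for short sequences

def godel_number(codes):
    """Encode a sequence of positive symbol codes as 2^c1 * 3^c2 * 5^c3 * ..."""
    n = 1
    for p, c in zip(PRIMES, codes):
        n *= p ** c
    return n

def decode(n, length):
    """Recover the codes by counting each prime's exponent in n."""
    codes = []
    for p in PRIMES[:length]:
        c = 0
        while n % p == 0:
            n //= p
            c += 1
        codes.append(c)
    return codes

msg = [3, 1, 4]           # hypothetical symbol codes
g = godel_number(msg)     # 2^3 * 3^1 * 5^4 = 15000
print(g, decode(g, 3))    # prints "15000 [3, 1, 4]"
```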
|
| |
| ▲ | tsimionescu 12 hours ago | parent | prev | next [-] | | Gödel's incompleteness theorem is about the limits of proof / mathematical knowledge. Arithmetic is still useful and true, even though the theorem shows any axiomatization of it must be incomplete. | |
| ▲ | anthk 16 hours ago | parent | prev [-] | | Any decent Lisp can reimplement eval, apply, and the rest of its primitive functions within itself. |
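The classic demonstration is the metacircular evaluator from SICP; the same shape fits in a few lines of Python (a toy sketch with hypothetical names, not a full Lisp):

```python
def evaluate(expr, env):
    """A minimal eval for a tiny Lisp-like language, sketched in Python."""
    if isinstance(expr, str):          # symbol: look it up in the environment
        return env[expr]
    if not isinstance(expr, list):     # self-evaluating atom (e.g. a number)
        return expr
    op, *args = expr
    if op == "quote":                  # (quote x) -> x, unevaluated
        return args[0]
    if op == "if":                     # (if cond then alt)
        cond, then, alt = args
        return evaluate(then if evaluate(cond, env) else alt, env)
    if op == "lambda":                 # (lambda (params) body) -> closure
        params, body = args
        return lambda *vals: evaluate(body, {**env, **dict(zip(params, vals))})
    fn = evaluate(op, env)             # application: evaluate operator and operands
    return fn(*[evaluate(a, env) for a in args])

env = {"+": lambda a, b: a + b, "*": lambda a, b: a * b}
prog = ["*", ["+", 1, 2], 4]           # (* (+ 1 2) 4)
print(evaluate(prog, env))             # prints 12
```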
|
|
| ▲ | ericpauley 16 hours ago | parent | prev | next [-] |
| It’s also a little silly for the same reasons discussions of theoretical computability often are: time and space requirements. In practice the Universe, even if computable, is so complex that simulating it would require far more compute than there are physical particles, and far more time than remains until heat death. |
| |
| ▲ | Borg3 15 hours ago | parent | next [-] | | Hehe yeah.. For me, it's just an inverted search for God. There must be something behind it; if it's not God, then it must be a simulation! Kinda sad, I would expect more from scientists. The big riddle of the Universe is how all that matter loves to organize itself, from basic particles to atoms, basic molecules, structured molecules, things, and finally life. Probably unsolvable, but that doesn't mean we shouldn't research and ask questions... | |
| ▲ | Aerroon 15 hours ago | parent | next [-] | | >The big riddle of the Universe is how all that matter loves to organize itself, from basic particles to atoms, basic molecules, structured molecules, things, and finally life. Probably unsolvable, but that doesn't mean we shouldn't research and ask questions... Isn't that 'just' the laws of nature + the 2nd law of thermodynamics? Life is the ultimate increaser of entropy, because for all the order we create we just create more disorder. Conway's Game of Life has very simple rules (laws of nature) and it ends up very complex. The universe doing the same thing with much more complicated rules seems pretty natural. | | |
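For concreteness on "very simple rules, very complex behavior": the entire Game of Life rule set fits in a few lines. A minimal sketch over an unbounded grid, representing the board as a set of live (x, y) cells:

```python
from collections import Counter

def step(live):
    """One Game of Life generation on a set of live (x, y) cells."""
    # Count how many live neighbors each cell has.
    counts = Counter((x + dx, y + dy)
                     for (x, y) in live
                     for dx in (-1, 0, 1) for dy in (-1, 0, 1)
                     if (dx, dy) != (0, 0))
    # Birth on exactly 3 live neighbors; survival on 2 or 3.
    return {cell for cell, n in counts.items()
            if n == 3 or (n == 2 and cell in live)}

blinker = {(0, 0), (1, 0), (2, 0)}   # a horizontal bar of three cells
after = step(blinker)                # the bar flips to vertical
assert step(after) == blinker        # period-2 oscillator: two steps restore it
```

Two rules (birth and survival thresholds) are the whole "physics"; gliders, oscillators, and universal computation all emerge from them.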
| ▲ | estearum 15 hours ago | parent [-] | | Yeah, agreed. The actual real riddle is consciousness: why does it seem that some configurations of this matter and energy zap into existence something that (allegedly) did not exist in its prior configuration? | | |
| ▲ | A_D_E_P_T 15 hours ago | parent | next [-] | | I'd argue that it's not that complicated: if something meets the five criteria below, we must accept that it is conscious: (1) It maintains a persisting internal model of an environment, updated from ongoing input. (2) It maintains a persisting internal model of its own body or vehicle as bounded and situated in that environment. (3) It possesses a memory that binds past and present into a single temporally extended self-model. (4) It uses these models with self-derived agency to generate and evaluate counterfactuals: Predictions of alternative futures under alternative actions. (i.e. a general predictive function.) (5) It has control channels through which those evaluations shape its future trajectories in ways that are not trivially reducible to a fixed reflex table. This would also indicate that Boltzmann Brains are not conscious -- so it's no surprise that we're not Boltzmann Brains, which would otherwise be very surprising -- and that P-Zombies are impossible by definition. I've been working on a book about this for the past three years... | |
| ▲ | jsenn 14 hours ago | parent | next [-] | | If you remove the terms "self", "agency", and "trivially reducible", it seems to me that a classical robot/game AI planning algorithm, which no one thinks is conscious, matches these criteria. How do you define these terms without begging the question? | | |
| ▲ | A_D_E_P_T 8 hours ago | parent [-] | | If anything has, minimally, a robust spatiotemporal sense of itself, and can project that sense forward to evaluate future outcomes, then it has a robust "self." What this requires is a persistent internal model of: (A) what counts as its own body/actuators/sensors (a maintained self–world boundary), (B) what counts as its history in time (a sense of temporal continuity), and (C) what actions it can take (degrees of freedom, i.e. the future branch space), all of which are continuously used to regulate behavior under genuine epistemic uncertainty. When (C) is robust, abstraction and generalization fall out naturally. This is, in essence, sapience. By "not trivially reducible," I don't mean "not representable in principle." I mean that, at the system's own operative state/action abstraction, its behavior is not equivalent to executing a fixed policy or static lookup table. It must actually perform predictive modeling and counterfactual evaluation; collapsing it to a reflex table would destroy the very capacities above. (It's true that with an astronomically large table you can "look up" anything -- but that move makes the notion of explanation vacuous.) Many robots and AIs implement pieces of this pipeline (state estimation, planning, world models,) but current deployed systems generally lack a robust, continuously updated self-model with temporally deep, globally integrated counterfactual control in this sense. If you want to simplify it a bit, you could just say that you need a robust and bounded spatial-temporal sense, coupled to the ability to generalize from that sense. |
| |
| ▲ | turtleyacht 3 hours ago | parent | prev | next [-] | | Is there a working title or some way to follow for updates? | |
| ▲ | dllthomas 14 hours ago | parent | prev | next [-] | | > so it's no surprise that we're not Boltzmann Brains I think I agree you've excluded them from the definition, but I don't see why that has an impact on likelihood. | |
| ▲ | squibonpig 11 hours ago | parent | prev [-] | | I don't think any of these need to lead to qualia for any obvious reason. It could be a p-zombie, why not? | |
| ▲ | A_D_E_P_T 8 hours ago | parent [-] | | The zombie intuition comes from treating qualia as an "add-on" rather than as the internal presentation of a self-model. "P-zombie" is not a coherent leftover possibility once you fix the full physical structure. If a system has the full self-model (temporal-spatial sense) / world-model / memory binding / counterfactual evaluator / control loop, then that structure is what having experience amounts to (no extra ingredient need be added or subtracted). I hope I don't later get accused of plagiarizing myself, but let's embark on a thought experiment. Imagine a bitter, toxic alkaloid that does not taste bitter. Suppose ingestion produces no distinctive local sensation at all -- no taste, no burn, no nausea. The only "response" is some silent parameter in the nervous system adjusting itself, without crossing the threshold of conscious salience. There are such cases: Damaged nociception, anosmia, people congenitally insensitive to pain. In every such case, genetic fitness is slashed. The organism does not reliably avoid harm. Now imagine a different design. You are a posthuman entity whose organic surface has been gradually replaced. Instead of a tongue, you carry an in-line sensor which performs a spectral analysis of whatever you take in. When something toxic is detected, a red symbol flashes in your field of vision: "TOXIC -- DO NOT INGEST." That visual event is a quale. It has a minimally structured phenomenal character -- colored, localized, bound to alarm -- and it stands in for what once was bitterness. We can push this further. Instead of a visual alert, perhaps your motor system simply locks your arm; perhaps your global workspace is flooded with a gray, oppressive feeling; perhaps a sharp auditory tone sounds in your private inner ear. Each variant is still a mode of felt response to sensory information. Here's what I'm getting at with this: There is no way for a conscious creature to register and use risky input without some structure of "what it is like" coming along for the ride. |
|
| |
| ▲ | morpheos137 12 hours ago | parent | prev [-] | | There is no objective evidence consciousness exists as distinct from an information process. | | |
| ▲ | TheOtherHobbes 12 hours ago | parent [-] | | There is no objective evidence of anything at all. It all gets filtered through consciousness. "Objectivity" really means a collection of organisms having (mostly) the same subjective experiences, and building the same models, given the same stimuli. Given that less intelligent organisms build simpler models with poorer abstractions and less predictive power, it's very naive to assume that our model-making systems aren't similarly crippled in ways we can't understand. Or imagine. | | |
| ▲ | morpheos137 10 hours ago | parent [-] | | That's a hypothesis but the alternate hypothesis that consciousness is not well defined is equally valid at this point. Occam's razor suggests consciousness doesn't exist since it isn't necessary and isn't even mathematically or physically definable. |
|
|
|
| |
| ▲ | nick__m 14 hours ago | parent | prev | next [-] | | For me the biggest riddle is: why something instead of nothing? That's the question that prevents me from being an atheist and shifts me toward agnosticism. | | | |
| ▲ | vasco 15 hours ago | parent | prev | next [-] | | > The big riddle of Universe is, how A lot of people are more interested in the Why of the Universe than the How, though. How is an implementation detail, Why is "profound". At least that's how I think most people look at it. | |
| ▲ | mensetmanusman 15 hours ago | parent | prev [-] | | You expect scientists not to ask: 'what is behind all this?' Ha |
| |
| ▲ | FabHK 16 hours ago | parent | prev | next [-] | | Yes, is that (obvious) point being addressed in the paper? At first skimming, it just says that a "sufficiently souped up laptop" could, in principle, compute the future of the universe (i.e. Laplace's demon), but I haven't seen anything about the subsequent question of time scales. | |
| ▲ | qingcharles 3 hours ago | parent [-] | | Computing the future is cool, but computing the past state is also really cool as it essentially allows time travel into (a copy of) the past. |
| |
| ▲ | skeledrew 12 hours ago | parent | prev | next [-] | | You're assuming that particles, heat death, etc., as you understand them, apply to any potential universe. Such rules are only known to apply in this universe. A universe is simply a function, and a function can be called multiple times with the same or different arguments, and there can be different functions taking the same or different arguments. |
| ▲ | vidarh 12 hours ago | parent | prev | next [-] | | The issue with that in terms of the simulation argument, is that the simulation argument doesn't require a complete simulation in either space or time. | | |
| ▲ | TheOtherHobbes 12 hours ago | parent [-] | | It also doesn't require a super-universe with identical properties and constraints. There's no guarantee their logic is the same as our logic. It needs to be able to simulate our logic, but that doesn't mean it's defined or bound by it. |
| |
| ▲ | Traubenfuchs 15 hours ago | parent | prev [-] | | The real universe might be different and far more complex than our simulated reality. Maybe a species that can freely move within 4 or 5 dimensions is simulating our 3D + unidirectional-time reality, just like we "simulate" reality with Sim City and The Sims. | |
| ▲ | mrwrong 14 hours ago | parent [-] | | But then we don't have a universe simulating itself, only a universe simulating a low-fi imitation. |
|
|
|
| ▲ | 6 hours ago | parent | prev | next [-] |
| [deleted] |
|
| ▲ | NoahZuniga 16 hours ago | parent | prev | next [-] |
| Thanks for this great comment! > He also uses Rice’s theorem (old) to show that there is no uniform measure over the set of "possible universes." I assume a finite uniform measure? Presumably |set| is a uniform measure over the set of "possible universes". Anyway, if I understood that correctly, then this is not that surprising. There isn't a finite uniform measure over the real line. If you only consider the possible universes of two particles at any distance from each other, this models the real line and therefore has no finite uniform measure. |
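For reference, the standard argument that the real line carries no uniform (translation-invariant) probability measure: suppose $\mu$ is translation-invariant and countably additive with $\mu(\mathbb{R}) = 1$. Translation invariance forces $\mu([n, n+1)) = c$ for one fixed $c \ge 0$ and every $n \in \mathbb{Z}$, so

```latex
1 = \mu(\mathbb{R})
  = \sum_{n \in \mathbb{Z}} \mu\big([n, n+1)\big)
  = \sum_{n \in \mathbb{Z}} c
  \in \{0,\ \infty\},
```

a contradiction: the sum is $0$ if $c = 0$ and $\infty$ otherwise. (The counting measure $|{\cdot}|$ is uniform but not finite, which is why the "finite" qualifier matters.)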
| |
|
| ▲ | bsenftner 15 hours ago | parent | prev [-] |
| Okay, here's the thing: this is creating revenue. This is fascinating literature for a huge class of armchair scientists who want to believe, want to play with these mental toys, and are willing to pay for the ability to fantasize with ideas they are incapable of developing on their own. This is ordinary capitalism, spinning revenues out of sellable stories. |