New mathematical framework reshapes debate over simulation hypothesis(santafe.edu)
74 points by Gooblebrai 16 hours ago | 96 comments
anthk 13 hours ago | parent | next [-]

Arxiv.org PDF:

https://arxiv.org/pdf/2404.16050

qingcharles 25 minutes ago | parent | prev | next [-]

Simulating a/the universe, and simulating the universe at-or-above realtime are also two separate things.

A non-realtime simulation would allow you certain solutions (such as perfectly recreating a past state of the current universe), but might not allow you to practically see a future state.

A_D_E_P_T 15 hours ago | parent | prev | next [-]

Oh man, Stephen Wolfram and Jürgen Schmidhuber are probably fuming at the fact that this is called a "new" mathematical framework. It's all very old, and quite conventional, even popular -- not exactly the road not taken.

What the author did was use the Physical Church-Turing thesis, and Kleene's second recursion theorem, to show that: (1) If a universe’s dynamics are computable (PCT), and (2) the universe can implement universal computation (RPCT), then (3) the universe can simulate itself, including the computer doing the simulating.

That's basically all. And thus "there would be two identical instances of us, both equally 'real'." (Two numerically distinct processes are empirically identical if they are indistinguishable. You might remember this sort of thing from late 20th c. philosophy coursework.)
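
The recursion theorem is the same machinery that guarantees quines -- programs that output their own source. A minimal sketch in Python (illustrative only, not from the paper):

    # Kleene-style self-reference in miniature: a quine.
    # The program's output is exactly its own source code.
    s = 's = %r\nprint(s %% s)'
    print(s % s)

Self-simulation in the paper is this trick writ large: the "program" is the universe's dynamics, applied to a description of itself.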

He also uses Rice’s theorem (old) to show that there is no uniform measure over the set of "possible universes."

It's all very interesting, but it's more a review article than a "new mathematical framework." The notion of a mathematical/simulated universe is as old as Pythagoras (~550 BC), and Rice, Church-Turing, and Kleene are all approaching the 100-year mark.

HPsquared 14 hours ago | parent | next [-]

I'm no mathematician, but doesn't this come up against Gödel's incompleteness theorem? My brain has that roughly as "If you have a system and a model of that system, but the model is also part of the same system, something something, impossible"

keepamovin 14 hours ago | parent | next [-]

Isn't GIT the result that there can be a statement that is true in a system but can't be proven one way or the other from the system's axioms? And that this holds for all such axiom systems? In other words, the axioms are an incomplete description of the system.

Maybe the problem is axiomatic deduction, and we need a new inference-ology?

bananaflag 14 hours ago | parent | prev | next [-]

No, this sort of self-reflection is exactly what makes Gödel/Turing/etc impossibility results work ("strange loops" and all that).

vasco 13 hours ago | parent [-]

Can you explain further?

Maybe I'm out of my depth here, but if you want to simulate Universe X plus the computer Y that simulates X, then you'd need at least 1 extra bit of memory (likely way more) to encompass the simulation plus the computation running the simulation (X+Y). The computer running the simulation is, by definition, not part of the simulation, so how can it truly simulate itself?

vbard 4 hours ago | parent | next [-]

In Men in Black II (2002), Will Smith's character learned that a miniature civilization existed in a locker. Later, he learned that our civilization was in a locker of a larger civilization.

By thinking of memory usage, you're restricting yourself to our perceived physical limits within our perceived reality.

But what if the things running the simulation did not have those limits? E.g. maybe data could be stored in an infinite number of multiverses outside of the infinite simulations being discussed. Any of the simulations could potentially simulate universes like ours while still allowing those simulations to contain others, to be contained by others, to have references to others, to have reflective references, etc. That makes anything and everything possible while not necessarily removing the limits we have in our own simulation. It just depends on what's running the simulation.

blovescoffee 11 hours ago | parent | prev | next [-]

Not quite: compression enables you to simulate / represent / encode x bits of data with less than x bits of memory.

qingcharles an hour ago | parent | next [-]

Would the compressibility of the state of the universe be useful to prove whether we are in a simulation already? (i.e. it is hard to compress data that is already compressed)

stevesimmons 5 hours ago | parent | prev [-]

Only for those inputs that are compressible.

If a compressor can compress every input of length N bits into fewer than N bits, then at least 2 of the 2^N possible inputs have the same output. Thus there cannot exist a universal compressor.

Modify as desired for fractional bits. The essential argument is the same.
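
A quick counting sketch (Python, just to make the pigeonhole explicit):

    # There are 2^N inputs of exactly N bits, but only 2^N - 1 bit
    # strings of length < N for a "universal compressor" to map them to.
    N = 16
    inputs = 2 ** N
    shorter_outputs = sum(2 ** k for k in range(N))  # lengths 0..N-1
    print(inputs, shorter_outputs)  # 65536 vs 65535 -> two inputs collide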

lascargroup 11 hours ago | parent | prev [-]

Roughly speaking, Gödel encoded (or “simulated”) the formal part of mathematics within arithmetic (using operations such as addition and multiplication), and constructed a sentence that says “this sentence is unprovable” within that simulation.
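
The encoding trick itself is simple enough to sketch (Python, illustrative; Gödel used prime-power encodings along these lines):

    # Encode a sequence of symbol codes as one integer via prime exponents,
    # then recover the sequence by repeated division.
    def primes(n):
        found, c = [], 2
        while len(found) < n:
            if all(c % p for p in found):
                found.append(c)
            c += 1
        return found

    def encode(codes):
        g = 1
        for p, e in zip(primes(len(codes)), codes):
            g *= p ** e
        return g

    def decode(g, length):
        out = []
        for p in primes(length):
            e = 0
            while g % p == 0:
                g //= p
                e += 1
            out.append(e)
        return out

    seq = [3, 1, 4, 1, 5]   # stand-in symbol codes for a formula
    assert decode(encode(seq), len(seq)) == seq

Once formulas are integers, statements about provability become statements about arithmetic, which is what lets a sentence refer to itself.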

tsimionescu 11 hours ago | parent | prev | next [-]

Gödel's incompleteness theorem is about the limits of proof / mathematical knowledge. Arithmetic is still useful and true, even though the theorem shows it must be incomplete.

anthk 14 hours ago | parent | prev [-]

Any decent Lisp can reimplement eval, apply, and the rest of its functions/atoms within itself.
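
The same metacircular idea fits in a few lines of any language; a toy sketch in Python (illustrative, not a full evaluator):

    # A toy eval for a mini-Lisp: symbols, numbers, quote, if, application.
    import operator

    ENV = {"+": operator.add, "-": operator.sub, "*": operator.mul}

    def mini_eval(expr, env=ENV):
        if isinstance(expr, str):        # symbol -> lookup
            return env[expr]
        if not isinstance(expr, list):   # number -> self-evaluating atom
            return expr
        if expr[0] == "quote":           # (quote x) -> x, unevaluated
            return expr[1]
        if expr[0] == "if":              # (if test then else)
            _, test, then, alt = expr
            return mini_eval(then if mini_eval(test, env) else alt, env)
        fn, *args = [mini_eval(e, env) for e in expr]
        return fn(*args)                 # apply

    print(mini_eval(["+", 1, ["*", 2, 3]]))  # -> 7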

ericpauley 14 hours ago | parent | prev | next [-]

It’s also a little silly for the same reasons discussions of theoretical computability often are: time and space requirements. In practice the Universe, even if computable, is so complex that simulating it would require far more compute than there are physical particles, and far more time than remains until heat death.

Borg3 14 hours ago | parent | next [-]

Hehe yeah.. For me, it's just an inverted search for God. There must be something behind it; if it's not God, then it must be a simulation! Kinda sad, I would expect more from scientists.

The big riddle of the Universe is how all that matter loves to organize itself, from basic particles to atoms, basic molecules, structured molecules, things, and finally life.. Probably unsolvable, but that doesn't mean we shouldn't research and ask questions...

Aerroon 13 hours ago | parent | next [-]

>The big riddle of the Universe is how all that matter loves to organize itself, from basic particles to atoms, basic molecules, structured molecules, things, and finally life.. Probably unsolvable, but that doesn't mean we shouldn't research and ask questions...

Isn't that 'just' the laws of nature + the 2nd law of thermodynamics? Life is the ultimate increaser of entropy, because for all the order we create we just create more disorder.

Conway's game of life has very simple rules (laws of nature) and it ends up very complex. The universe doing the same thing with much more complicated rules seems pretty natural.
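
The whole rule set fits in a few lines (a Python sketch; the glider below travels on its own):

    # One Game of Life step over a set of live cells: a dead cell with 3
    # live neighbours is born; a live cell with 2 or 3 survives.
    from collections import Counter

    def step(live):
        counts = Counter((x + dx, y + dy)
                         for x, y in live
                         for dx in (-1, 0, 1) for dy in (-1, 0, 1)
                         if (dx, dy) != (0, 0))
        return {c for c, n in counts.items()
                if n == 3 or (n == 2 and c in live)}

    glider = {(1, 0), (2, 1), (0, 2), (1, 2), (2, 2)}
    for _ in range(4):          # after 4 steps the glider reappears,
        glider = step(glider)   # shifted one cell diagonally
    print(sorted(glider))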

estearum 13 hours ago | parent [-]

Yeah, agreed. The actual real riddle is consciousness. Why does it seem that some configurations of this matter and energy zap into existence something that (allegedly) did not exist in their prior configuration?

A_D_E_P_T 13 hours ago | parent | next [-]

I'd argue that it's not that complicated: if something meets the five criteria below, we must accept that it is conscious:

(1) It maintains a persisting internal model of an environment, updated from ongoing input.

(2) It maintains a persisting internal model of its own body or vehicle as bounded and situated in that environment.

(3) It possesses a memory that binds past and present into a single temporally extended self-model.

(4) It uses these models with self-derived agency to generate and evaluate counterfactuals: Predictions of alternative futures under alternative actions. (i.e. a general predictive function.)

(5) It has control channels through which those evaluations shape its future trajectories in ways that are not trivially reducible to a fixed reflex table.

This would also indicate that Boltzmann Brains are not conscious -- so it's no surprise that we're not Boltzmann Brains, which would otherwise be very surprising -- and that P-Zombies are impossible by definition. I've been working on a book about this for the past three years...

jsenn 12 hours ago | parent | next [-]

If you remove the terms "self", "agency", and "trivially reducible", it seems to me that a classical robot/game AI planning algorithm, which no one thinks is conscious, matches these criteria.

How do you define these terms without begging the question?

A_D_E_P_T 6 hours ago | parent [-]

If anything has, minimally, a robust spatiotemporal sense of itself, and can project that sense forward to evaluate future outcomes, then it has a robust "self."

What this requires is a persistent internal model of: (A) what counts as its own body/actuators/sensors (a maintained self–world boundary), (B) what counts as its history in time (a sense of temporal continuity), and (C) what actions it can take (degrees of freedom, i.e. the future branch space), all of which are continuously used to regulate behavior under genuine epistemic uncertainty. When (C) is robust, abstraction and generalization fall out naturally. This is, in essence, sapience.

By "not trivially reducible," I don't mean "not representable in principle." I mean that, at the system's own operative state/action abstraction, its behavior is not equivalent to executing a fixed policy or static lookup table. It must actually perform predictive modeling and counterfactual evaluation; collapsing it to a reflex table would destroy the very capacities above. (It's true that with an astronomically large table you can "look up" anything -- but that move makes the notion of explanation vacuous.)

Many robots and AIs implement pieces of this pipeline (state estimation, planning, world models), but current deployed systems generally lack a robust, continuously updated self-model with temporally deep, globally integrated counterfactual control in this sense.

If you want to simplify it a bit, you could just say that you need a robust and bounded spatial-temporal sense, coupled to the ability to generalize from that sense.

turtleyacht 2 hours ago | parent | prev | next [-]

Is there a working title or some way to follow for updates?

dllthomas 12 hours ago | parent | prev | next [-]

> so it's no surprise that we're not Boltzmann Brains

I think I agree you've excluded them from the definition, but I don't see why that has an impact on likelihood.

squibonpig 10 hours ago | parent | prev [-]

I don't think any of these need to lead to qualia for any obvious reason. It could be a p-zombie, why not?

A_D_E_P_T 6 hours ago | parent [-]

The zombie intuition comes from treating qualia as an "add-on" rather than as the internal presentation of a self-model.

"P-zombie" is not a coherent leftover possibility once you fix the full physical structure. If a system has the full self-model (temporal-spatial sense) / world-model / memory binding / counterfactual evaluator / control loop, then that structure is what having experience amounts to (no extra ingredient need be added or subtracted).

I hope I don't later get accused of plagiarizing myself, but let's embark on a thought experiment. Imagine a bitter, toxic alkaloid that does not taste bitter. Suppose ingestion produces no distinctive local sensation at all – no taste, no burn, no nausea. The only "response" is some silent parameter in the nervous system adjusting itself, without crossing the threshold of conscious salience. There are such cases: Damaged nociception, anosmia, people congenitally insensitive to pain. In every such case, genetic fitness is slashed. The organism does not reliably avoid harm.

Now imagine a different design. You are a posthuman entity whose organic surface has been gradually replaced. Instead of a tongue, you carry an in‑line sensor which performs a spectral analysis of whatever you take in. When something toxic is detected, a red symbol flashes in your field of vision: “TOXIC -- DO NOT INGEST.” That visual event is a quale. It has a minimally structured phenomenal character -- colored, localized, bound to alarm -- and it stands in for what once was bitterness.

We can push this further. Instead of a visual alert, perhaps your motor system simply locks your arm; perhaps your global workspace is flooded with a gray, oppressive feeling; perhaps a sharp auditory tone sounds in your private inner ear. Each variant is still a mode of felt response to sensory information. Here's what I'm getting at with this: There is no way for a conscious creature to register and use risky input without some structure of "what it is like" coming along for the ride.

morpheos137 11 hours ago | parent | prev [-]

There is no objective evidence consciousness exists as distinct from an information process.

TheOtherHobbes 10 hours ago | parent [-]

There is no objective evidence of anything at all.

It all gets filtered through consciousness.

"Objectivity" really means a collection of organisms having (mostly) the same subjective experiences, and building the same models, given the same stimuli.

Given that less intelligent organisms build simpler models with poorer abstractions and less predictive power, it's very naive to assume that our model-making systems aren't similarly crippled in ways we can't understand.

Or imagine.

morpheos137 8 hours ago | parent [-]

That's a hypothesis but the alternate hypothesis that consciousness is not well defined is equally valid at this point. Occam's razor suggests consciousness doesn't exist since it isn't necessary and isn't even mathematically or physically definable.

nick__m 12 hours ago | parent | prev | next [-]

For me the biggest riddle is: why is there something instead of nothing?

That's the question that prevents me from being an atheist and shifts me to agnosticism.

morpheos137 10 hours ago | parent [-]

There is both in superposition.

vasco 13 hours ago | parent | prev | next [-]

> The big riddle of Universe is, how

A lot of people are more interested in the Why of the Universe than the How, though.

How is an implementation detail, Why is "profound". At least that's how I think most people look at it.

mensetmanusman 13 hours ago | parent | prev [-]

You expect scientists to not ask: 'what is behind all this?'

Ha

FabHK 14 hours ago | parent | prev | next [-]

Yes, is that (obvious) point being addressed in the paper? At first skim, it just says that a "sufficiently souped up laptop" could, in principle, compute the future of the universe (i.e. Laplace's demon), but I haven't seen anything about the subsequent questions of time scales.

qingcharles an hour ago | parent [-]

Computing the future is cool, but computing the past state is also really cool as it essentially allows time travel into (a copy of) the past.

skeledrew 10 hours ago | parent | prev | next [-]

You're predicating this on particles, heat death, etc. as you understand them being applicable to any potential universe. Such rules are only known to apply in this universe.

A universe is simply a function, and a function can be called multiple times with the same or different arguments, and there can be different functions taking the same or different arguments.
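
Taken literally (a playful Python sketch of the metaphor, nothing more), that's just recursion with different arguments:

    # A "universe" as a function that can invoke nested universes,
    # including bounded copies of itself, with varying arguments.
    def universe(rules, state, depth=0, max_depth=3):
        if depth == max_depth:      # the host's resource bound
            return [state]
        child = universe(rules, rules(state), depth + 1, max_depth)
        return [state] + child      # this level plus everything below it

    print(universe(lambda s: s * 2, 1))   # -> [1, 2, 4, 8]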

vidarh 11 hours ago | parent | prev | next [-]

The issue with that, in terms of the simulation argument, is that the simulation argument doesn't require a complete simulation in either space or time.

TheOtherHobbes 10 hours ago | parent [-]

It also doesn't require a super-universe with identical properties and constraints.

There's no guarantee their logic is the same as our logic. It needs to be able to simulate our logic, but that doesn't mean it's defined or bound by it.

Traubenfuchs 13 hours ago | parent | prev [-]

The real universe might be different and far more complex than our simulated reality. Maybe a species that can freely move within 4 or 5 dimensions is simulating our 3D + uni directional time reality just like we „simulate“ reality with Sim City and Sims.

mrwrong 12 hours ago | parent [-]

But then we don't have a universe simulating itself, only a universe simulating a low-fi imitation.

NoahZuniga 14 hours ago | parent | prev | next [-]

Thanks for this great comment!

> He also uses Rice’s theorem (old) to show that there is no uniform measure over the set of "possible universes."

I assume a finite uniform measure? Presumably the counting measure |set| is a uniform measure over the set of "possible universes".

Anyway, if I understood that correctly, then this is not that surprising? There isn't a finite uniform measure over the real line. If you only consider the possible universes of two particles at any distance from each other, this models the real line and therefore has no finite uniform measure.

bsenftner 13 hours ago | parent | prev [-]

Okay, here's the thing: this is creating revenue. This is fascinating literature for a huge class of armchair scientists who want to believe, want to play with these mental toys, and are willing to pay for the ability to fantasize with ideas they are incapable of developing on their own. This is ordinary capitalism, spinning revenue out of sellable stories.

CuriouslyC 14 hours ago | parent | prev | next [-]

The simulation hypothesis takes something reasonable, that reality is "virtual," and runs it into absurdity.

If the universe isn't "real" in the materialist sense, that does not imply that there's a "real" universe outside of the one we perceive, nor does it imply that we're being "simulated" by other intelligences.

The path of minimal assumptions from reality not being "real" is idealism. We're not simulated, we're manifesting.

EdgeCaseExist 14 hours ago | parent | next [-]

Exactly, it's paradoxical: how would you define the universe as a simulation without being on the same substrate? The title should have focused more on the computability of the universe as we know it.

empiricus 13 hours ago | parent | prev | next [-]

Sorry, I don't understand what you are saying. What do you mean by "something reasonable, that reality is virtual"? In many ways, by definition, reality is what is real not virtual. I have other questions, but this is a good start :)

CuriouslyC 12 hours ago | parent [-]

When I say that reality isn't "real" (which is awkward for sure) what I'm referring to is that we have a perception of space and time which is absolute and inviolable, when it's likely space and time (as we understand them) are artifacts of our perceptual lens, and "reality" is based on something more akin to consensus than immutable laws. From this perspective you could view physics more as a communication/consistency protocol for consciousness than the raw nature of the universe.

empiricus 10 hours ago | parent [-]

Hm, from what I know about physics, time and space are actually much more absolute and inviolable than our imperfect perceptions. The laws are quite different from our intuition, but everything is water-tight and there is no room for any deviation. The smallest of deviations would mean multiple Nobel prizes, so people are searching really hard to find any, without success. On the other hand, if we talk about our perception, the things we see around us are of course a virtual reality constructed by our brain to model the input from our sensors, but this is normal because there is no alternative. But it seems to me you are saying something different?

brap 13 hours ago | parent | prev | next [-]

I think the underlying assumption is that we are “real”, meaning our existence is grounded in some undisputed “reality”. So if what we perceive as the universe isn’t real, then there has to be some other real universe that is simulating it in some way.

senkora 11 hours ago | parent | prev [-]

Yep, might as well go straight to the Mathematical Universe Hypothesis:

> Tegmark's MUH is the hypothesis that our external physical reality is a mathematical structure. That is, the physical universe is not merely described by mathematics, but is mathematics — specifically, a mathematical structure. Mathematical existence equals physical existence, and all structures that exist mathematically exist physically as well. Observers, including humans, are "self-aware substructures (SASs)". In any mathematical structure complex enough to contain such substructures, they "will subjectively perceive themselves as existing in a physically 'real' world".

https://en.wikipedia.org/wiki/Mathematical_universe_hypothes...

kazinator 3 hours ago | parent | prev | next [-]

> The simulation hypothesis — the idea that our universe might be an artificial construct running on some advanced alien computer — has long captured the public imagination.

Right; that's the feeble public imagination. What captures my imagination is the idea that the existence of the rules alone is enough to obtain the universe; no simulator is required.

We can make an analogy to a constant like pi. No circumference has to be divided by a diameter in order to prop up the existence of pi.

The requirement for a simulator just kicks the can down the road: in what universe is that simulator, and what simulates that? It's an infinite regress. If there is no simulator, that goes away.

If certain equations dictate that you exist and have experiences, then you exist and have experiences in the same way that pi exists.

GistNoesis 12 hours ago | parent | prev | next [-]

The problem of computers is the problem of time: how to obtain a consistent causal chain!

The classical, naive way of obtaining a consistent causal chain is to put the links one after the other, following the order defined by the simulation time.

The funnier question is: can it be done another way? With the advance of generative AI and things like diffusion models, it's proven that it's theoretically possible (universal distribution approximation). It's not so much simulating a timeline as sampling the whole timeline while enforcing its physics-law self-consistency from both directions of the causal graph.
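
A tiny illustration of the difference (a Python toy of my own, not from the paper): the same causal chain obtained by ordered-time stepping versus by relaxing a whole guessed timeline toward self-consistency:

    import random

    f = lambda x: 0.5 * x + 1          # toy dynamics: x[t+1] = f(x[t])
    T, x0 = 10, 0.0

    # (a) classical ordered-time simulation
    forward = [x0]
    for _ in range(T):
        forward.append(f(forward[-1]))

    # (b) guess the whole timeline, then repeatedly enforce the causal
    # constraint everywhere at once until the chain is self-consistent
    timeline = [x0] + [random.random() for _ in range(T)]
    for _ in range(T):                 # T sweeps suffice for this toy
        timeline = [x0] + [f(timeline[t]) for t in range(T)]

    print(max(abs(a - b) for a, b in zip(forward, timeline)))  # 0.0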

In toy models like the Game of Life, we can even have recursivity of simulation: https://news.ycombinator.com/item?id=33978978 (unlike section 7.3 of this paper, where the computers of the lower simulations are started in ordered time).

In other toy models, you can use a diffusion model to learn and map the chaotic distribution of all possible three-body-problem trajectories.

Although sampling can be simulated, the efficient way of doing it necessitates exploring all the possible universes simultaneously, as in QM (which we can do by only exploring a finite number of them while bounding the neighboring universe region, according to the question we are trying to answer, using the Lipschitz continuity property).

Sampling allows you to bound maximal computational usage and be sure to reach your end-time target, but at the risk of not being perfectly physically consistent. Whereas simulating presents the risk of the lower simulations siphoning the computational resources and preventing the simulation time from reaching its end-time target, but whatever you could compute is guaranteed consistent.

Sampled bottled universes are ideal for answering questions like how many years a universe must exist before life can emerge, while simulated bottled universes are like a box of chocolates: you never know what you are going to get.

The question being: can you tell which bottle you are currently in, and which bottle would you rather get?

whatever1 4 hours ago | parent | next [-]

Causality also is not a universal thing. Some things just coexist and obey some laws.

Does potential cause current? No, they coexist.

asplake 10 hours ago | parent | prev [-]

I’m not sure Einstein would allow your concept of “simulation time”. Events are only partially ordered.

Beijinger 12 hours ago | parent | prev | next [-]

Konrad Zuse was a German computing pioneer, best known for building the Z3 in 1941, the world's first functional programmable digital computer. Later in his career, he explored profound philosophical and theoretical ideas about the nature of the universe.

Rechnender Raum (literally "Computing Space" or "Calculating Space") is the title of his groundbreaking 1969 book (published in the series Schriften zur Datenverarbeitung). In it, Zuse proposed that the entire universe operates as a vast discrete computational process, akin to a giant cellular automaton. He argued that physical laws and reality itself emerge from digital, step-by-step computations on a grid of discrete "cells" in space, rather than from continuous analog processes as traditionally assumed in physics.

This idea challenged the prevailing view of continuous physical laws and laid the foundation for what we now call digital physics, pancomputationalism, or the simulation hypothesis (the notion that reality might be a computation, possibly running on some underlying "computer"). Zuse's work is widely regarded as the first formal proposal of digital physics, predating similar ideas by others like Edward Fredkin or Stephen Wolfram.
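
A one-dimensional cellular automaton in Zuse's spirit takes only a few lines (a Python sketch; Rule 110, used here, is even known to be Turing-complete):

    RULE = 110  # the update table for all 8 neighbourhoods, packed into one byte

    def step(cells):
        n = len(cells)
        return [(RULE >> (cells[(i - 1) % n] * 4 + cells[i] * 2
                          + cells[(i + 1) % n])) & 1 for i in range(n)]

    row = [0] * 31 + [1] + [0] * 31   # a single live cell
    for _ in range(16):               # watch structure grow from one rule
        print("".join(".#"[c] for c in row))
        row = step(row)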

daoboy 14 hours ago | parent | prev | next [-]

I always feel like these frameworks rely on a semantic sleight of hand that sounds plausible on the surface, but when you drill down a bit they render words like 'simulation', 'reality', or 'truth' as either unintelligible or trite, depending on how you define them.

measurablefunc 5 hours ago | parent [-]

They're defined relative to the axioms. In this case he is using the standard arithmetic & set theoretic constructions to define the terms & functions he's talking about. It's logically sound, whether it makes physical sense or not is another matter.

EdgeCaseExist 14 hours ago | parent | prev | next [-]

The author of the article on the site is the author of the paper!

mg74 14 hours ago | parent | next [-]

Which of him is simulating which?

DonHopkins 11 hours ago | parent | prev [-]

Department of Research Simulation

therobots927 10 hours ago | parent | prev | next [-]

“Wolpert shows that this isn’t required by the mathematics: simulations do not have to degrade, and infinite chains of simulated universes remain fully consistent within the theory.”

How is this consistent with the second law of thermodynamics? If one universe contains an infinite number of simulations (some of which simulate the base universe), wouldn’t there be a limit to how much computation it could contain? By its very nature, a chain of simulations would grow exponentially with time, rapidly accelerating heat death. That may not require the simulations to degrade, but it puts a hard limit on how many could be created.

measurablefunc 5 hours ago | parent [-]

Standard theory of computation is not concerned about entropy or physical realizability. It's just arithmetic & lookup tables defined w/ set theoretic axioms.

empiricus 12 hours ago | parent | prev | next [-]

Trying to read the paper... I guess if you ignore the difference between finite- and infinite-tape Turing machines, and if all physical constraints are outside the scope of the paper, then it is easy to prove the universe can simulate itself.

flufluflufluffy 5 hours ago | parent | prev | next [-]

The whole “simulation hypothesis” thing has always irked me. To me, the question of whether our universe was [“intentionally” “created” by some other “being(s)”] vs [“naturally” happened] is meaningless. Whatever it was on the other side is way too insanely unfathomable to be classified into those 2 human-created ideas. Ugh the whole thing is so self-centered.

morpheos137 4 hours ago | parent [-]

It appeals to sophomoric modern atheists who can't comprehend that infinity and nothing exist at the same time. People seek a reason "why" not realizing the question is the answer. The universe exists because 'why not?' because Infinity seeks to prevail over nothing. Nothing strikes at the heel of infinity. The truth is not in these lines or that theory but betwixt here and there and once "you" realize it, it realizes "you." Because it is you and you are it for it is itself. This may sound like my mumbo jumbo woo but once you know it knows you know it knows you know.

flufluflufluffy 3 hours ago | parent [-]

yes haha, it is mumbo jumbo to the uninitiated (which can mean many different things!)

morpheos137 3 hours ago | parent [-]

You're not uninitiated, you're just testing the hypothesis. That things—including yourself—seek meaning is the meaning. Math is language, language is math, as LLMs are showing us.

quantum_state 14 hours ago | parent | prev | next [-]

Hope folks involved in this type of exploration have it clear in mind that what they are reasoning about is strictly a model of the real world. It's far from obvious that nature follows anything remotely computational.

kpga 6 hours ago | parent | prev | next [-]

"Example 1. ... After this you physically isolate isolate your laptop, from the rest of the Universe, and start running it..."

However there is no way "you can physically isolate isolate your laptop, from the rest of the Universe" so doesn't that refute this example (at least?)

shtzvhdx 13 hours ago | parent | prev | next [-]

This all assumes there's no computation beyond a Turing machine, right? Therefore, this assumes reality is a simulation on a finite set of rationals?

So, as long as one believes in continuum, this is just toying around?

analog31 12 hours ago | parent [-]

We've yet to propose an experiment that demonstrates the inadequacy of IEEE floats if used carefully. The simulation only needs to be good enough.
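
For scale (a Python one-liner; math.ulp reports the gap to the next representable double):

    import math
    for x in (1.0, 1e10, 1e-10):
        print(x, math.ulp(x))   # relative spacing ~2.2e-16 at every scale,
                                # i.e. about 16 significant decimal digits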

nrhrjrjrjtntbt 14 hours ago | parent | prev | next [-]

Like running Kubernetes in a Docker container.

skeledrew 10 hours ago | parent | prev | next [-]

A universe is a function. It only makes sense that a function can call other functions, including itself, ad infinitum. And a function may be called in the same or a different thread.

mgaunard 14 hours ago | parent | prev | next [-]

It's starting with the assumption that the simulation would reproduce the universe perfectly; this eliminates a lot of possibilities.

Many would expect that the parent universe would be more sophisticated, potentially with more dimensions, that we can only glimpse through artifacts of the simulation.

te7447 14 hours ago | parent | next [-]

I've always wondered how you'd be able to rigorously distinguish breaking out of the simulation from just discovering novel things about your current universe.

Is a black hole a bug or a feature? If you find a way to instantly observe or manipulate things at Alpha Centauri by patterning memory in a computer on Earth a special way, is that an exploit or is it just a new law of nature?

Science is a descriptive endeavor.

I guess that some extreme cases would be obvious - if a god-admin shows up and says "cut that out or we'll shut your universe down", that's a better indication of simulation than the examples I gave. But even so, it could be a power bluff, someone pretending to be a god. Or it could be comparable to aliens visiting Earth rather than gods revealing themselves - i.e. some entity of a larger system visiting another entity of the same system, not someone outside it poking inside.

anthk 14 hours ago | parent | prev [-]

Also, that universe could use entities similar to hard and soft links (quantum entanglement), memory deduplication, and so on.

How many people have we met in the world with similar facial appearances and even personalities, almost as if we are finding copycats everywhere? Also, it's as if some kind of face/shape would have just a single personality, with minimal differences, spread over thousands of lookalikes...
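
The dedup idea has mundane software analogues, e.g. string interning (a Python sketch, purely as a metaphor):

    import sys

    # Two independently built, equal strings...
    a = "".join(["copy", "cat"])
    b = "".join(["copy", "cat"])
    print(a == b, a is b)        # equal values, but normally two objects

    # ...after interning, one stored copy serves every reference.
    a, b = sys.intern(a), sys.intern(b)
    print(a is b)                # True: deduplicated behind the scenes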

le-mark 13 hours ago | parent | prev | next [-]

I wonder if there’s a concept akin to Shannon entropy that dictates the level of detail a simulation can provide, given a ratio of bits to something. Although presumably any number of bits could be simulated given more time.
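
The "bits per something" part is directly computable; a Python sketch of Shannon entropy over bytes:

    import math
    from collections import Counter

    def entropy_bits_per_byte(data: bytes) -> float:
        n = len(data)
        return -sum(c / n * math.log2(c / n)
                    for c in Counter(data).values())

    print(entropy_bits_per_byte(b"aaaaaaaa"))        # 0.0 (prints -0.0): fully ordered
    print(entropy_bits_per_byte(bytes(range(256))))  # 8.0: maximally mixed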

lioeters 10 hours ago | parent [-]

An explanation of the observer effect may be that the universe is lazily evaluated at the moment of observation. Outside of that experienced reality, it might as well all be a cloud of latent possibilities, rough outlines and low-res details, enough for a plausible simulation.
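
In programming terms that's just lazy evaluation (a Python sketch of the metaphor):

    def lazy_universe():
        region = 0
        while True:
            # detail is only "rendered" when someone iterates (observes)
            yield f"region {region} resolved"
            region += 1

    observer = lazy_universe()  # nothing computed yet: pure potential
    print(next(observer))       # observation forces evaluation
    print(next(observer))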

le-mark 7 hours ago | parent [-]

This would allow for a DDoS attack on reality, where a bunch of simulants attempt to perform computationally expensive observations at the same time.

lioeters 44 minutes ago | parent [-]

The Simulators working at the universal data center wondering why this particular server rack is getting hot. "Have you tried turning it off and back on?"

boomskats 15 hours ago | parent | prev | next [-]

Zero cost abstractions! I'd almost be interested in Bostrom's inevitable physics-based counter (if he wasn't such a racist bellend).

thegrim000 9 hours ago | parent | prev | next [-]

Once again, discussion around the simulation hypothesis that for some reason assumes the simulating universe has the exact same laws of physics / reality as the simulated universe, and that the simulated universe can use its mathematics to describe/constrain the simulator universe. It makes no sense to me.

moi2388 11 hours ago | parent | prev | next [-]

Yeah right. On infinite Turing machines, maybe. If it's finite, it's impossible to simulate something larger with the same fidelity.

croes 13 hours ago | parent | prev | next [-]

Related?

> Consequences of Undecidability in Physics on the Theory of Everything

https://news.ycombinator.com/item?id=45770754

jonathanstrange 13 hours ago | parent | prev | next [-]

Here is one thing I don't understand about these kinds of approaches. Doesn't a computational simulation imply that time is discrete? If so, doesn't this have consequences for our currently best physical theories? I understand that the discreteness of time would be far below what can be measured right now, but AFAIK it would still make a difference for physical theories whether time is discrete or not. Or am I mistaken about that? There are similar concerns about space.

By the way, on a related note, I once stumbled across a paper that argued that if real numbers were physically realizable in some finite space, then that would violate the laws of thermodynamics. It sounded convincing, but I lacked the physics knowledge to evaluate that thesis.

qayxc 13 hours ago | parent [-]

Time and space aren't well defined, but current models indeed put a practical limit on both: the Planck time and the Planck length (~5.4×10^−44 s and ~1.6×10^−35 m respectively).

Below these limits, physical descriptions of the world lose meaning, i.e. shorter time spans or distances don't result in measurable changes and our models break down. That doesn't mean these limits are "real" in the sense that space and time are indeed quantised, but experiments and observations end at these limits.

bobbyschmidd 13 hours ago | parent | prev | next [-]

Someone did another 'Kleene-Turing' on the whole issue with "the origin"?

bad bad not good.

morpheos137 13 hours ago | parent | prev | next [-]

These models get things backwards. The universe is a wave function in logic space. It appears discrete and quantized because integers composed of primes are logically stable, information-entropy-minimal nodes. In other words, the universe is the way it is because it depends on math. Math does not depend on the universe. Logic is its own "simulation." Math does not illuminate physics; rather, physics illuminates math.

This can be shown by the construction of a filter that cleanly sorts prime numbers from composites without trial division, by analysis of the entropic harmonics of integers. In other words, what we consider integers are not fundamental but rather emergent properties of the minimal subjunctive of superposition of zero (non-existence) and infinity (anything that is possible). By ringing an integer like a bell according to the template provided by the zeta function, we can find primes and factor from spectral analysis without division.

Just as integers emerge from the wave as stable nodes, so do quanta in the physical isomorphism. In other words, both integers and quanta are emergent from the underlying wave that is information in tension between the polarity of nonexistence and existence. So what appears discrete or simulated is actually an emergent phenomenon of the subjunctive potential of information constrained by the two poles of possibility.

turtleyacht 12 hours ago | parent [-]

I think the leakage would appear if the simulation were a manufactured emulation, like humans trying to mirror natural laws through technology.

An emergent simulation, nature born out of nature, may not have those same defects.

morpheos137 11 hours ago | parent [-]

We can prove that the "defects" we see emerge naturally from the entropic optimization of information subject to the superposition of being and not being. Between nothing and everything the universe exists in an entropic gradient.

raverbashing 14 hours ago | parent | prev | next [-]

We can't even run Docker inside Docker without making things slower; the simulation hypothesis is frankly ridiculous.

lioeters 14 hours ago | parent | next [-]

That's what a simulated universe running inside Docker would say.

raverbashing 13 hours ago | parent [-]

Nobody is going to pay all those docker licenses /s

croes 13 hours ago | parent | prev [-]

You would be living inside Docker and wouldn't know how fast the outside is. Maybe light speed is a limit imposed by the simulation.

mw67 14 hours ago | parent | prev [-]

Funny that people still call that a "simulation hypothesis". At some point they should try past-life regressions or out-of-body experiences (astral projection). Then they'll know for sure what this reality is about.

qayxc 13 hours ago | parent | next [-]

I would consider this if someone were able to demonstrate a way to distinguish these phenomena from altered states of mind (i.e. hallucinations). We know and can demonstrate that the human psyche can easily be manipulated in various ways (psychological manipulation, drugs, magnetic fields, sleep deprivation, stress, etc.) to cause such experiences.

Some actual evidence for "past life regressions" and "astral projection" would be nice...

gcost 6 hours ago | parent [-]

PLR is real; read the works of Michael Newton and others. Over 8000 PLRs from people of all ages and backgrounds describe the same things happening once we pass to the other side. Definitely not hallucinations. Actually scary how people still think that instead of exploring for themselves.

krzat 11 hours ago | parent | prev [-]

Yeah, from what I heard, that's how scientology recruits true believers.