AI-generated (Stable Diffusion) image: "cyclon writing with a pen".


Occam hates your simulated universe

I love the discovery of a new and excitingly nutty cosmic revelation. It’s like unwrapping a present.

Melvin M. Vopson got himself in The Conversation, opining on the old Simulation Hypothesis. I can’t say where the idea truly originated, but Nick Bostrom laid down an argument for it in 2003 (so I’m quite late, as usual). I’ll call it “SH” for my purposes here. I’m also no physicist, but I know some things. The SH is philosophically provocative, but mathematically problematic and scientifically meaningless.

It’s one thing to ask whether our universe is, in some sense, embedded in some higher-order hyper-reality. This may be mathematically implied by certain physical theories, like eternal inflation and string theory (to my non-physicist, popular-science-informed mind, at least). But these theories do not involve simulation. They propose a single set of fundamental laws operating seamlessly across the entire scope of reality. Those laws don’t stop at the boundary between what we can observe and what we cannot, but rather seek to explain that apparent boundary.

The SH, by contrast, sets a hard boundary between the inside and the outside of the simulation. It gives us nothing from which to extrapolate from one to the other, making a mockery of Occam’s Razor.

Maths says go home

SH proponents may assume that a universal simulation would run on some computing device sitting in a place with essentially the same physical laws as the thing it’s simulating. This appears to break information theory. We’re asking that the computer—one small object in a vast universe—be able to simulate an entire, roughly-equivalent universe, which potentially includes other, roughly-equivalent computers. We’re asking for infinite regress, for the computer to be able to simulate everything including itself, simulating everything including itself, simulating everything including itself, ad infinitum. This computer would require infinite storage, and this seems less like a problem for science and engineering, and more like a problem for Futurama’s Professor Farnsworth.
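For what it's worth, the shape of that regress is familiar to any programmer: it's recursion with no base case. Here's a deliberately silly Python sketch — it proves nothing about physics; it just makes the structure of the problem visible:

```python
# A simulation that must, at some point, simulate the computer running it, which
# must in turn simulate the computer running *it*, and so on. Nothing in the
# argument supplies a base case, so nothing here does either.

def simulate(universe, nesting_level=0):
    # Each call stands in for one more level of nested simulation.
    inner_universe = dict(universe, nesting_level=nesting_level)
    return simulate(inner_universe, nesting_level + 1)

try:
    simulate({"contents": "everything, including the computer itself"})
except RecursionError:
    print("gave up long before running out of nested universes")
```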

This isn’t just a question of more R&D. A data storage device fundamentally cannot store more data than is needed to describe its own (finite) physical existence, because that’s all it has to work with. Data compression doesn’t change this. Compression merely reduces redundancy, and once redundancy has been eliminated, that’s it. It can’t magically encode arbitrarily large amounts of data into arbitrarily small amounts of data.
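That limit is easy to see with any off-the-shelf compressor. Here's a minimal Python sketch, using only the standard library, compressing a megabyte of highly redundant data and a megabyte of (pseudo)random data that has essentially no redundancy left to remove:

```python
import os
import zlib

# One megabyte of maximally redundant data versus one megabyte of
# (pseudo)random data, which has essentially no redundancy to remove.
redundant = b"A" * 1_000_000
noise = os.urandom(1_000_000)

print(len(zlib.compress(redundant, 9)))  # shrinks to roughly a kilobyte
print(len(zlib.compress(noise, 9)))      # stays around a megabyte, often slightly larger
```

Redundancy collapses; entropy doesn't. No setting, and no future algorithm, turns the second number into the first without discarding information.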

You may object that there doesn’t have to be a nested simulation (or that the simulations aren’t necessarily infinitely nested), but the point is that, if the two levels of reality are comparable, then the computer must have this capability, even if it goes unused. If the computer were not able to simulate itself, infinitely nested, then that inability would have to be explained by fundamentally different physical laws.

But—can the computer execute some devious short-cut when a simulated civilisation tries to create an equivalent computer within the simulation? Say it tweaks simulated events to stop such a simulated computer being created, or it fakes the operation of it. Perhaps, but this asks us to believe that the computer has unique cosmic properties. If the computer can contain enough information to store an entire universe, then everything else (of comparable mass) ought to contain similar amounts of information too. The laws of physics apply everywhere. We should expect comparable information density in other non-computing phenomena: clouds, ant colonies, the magnetic field lines of stars, etc. And then we’re back to the problem of storing infinite information.

The emperor’s new reality

So we can throw away the idea that any “parent universe” has the same physical laws as our (ostensibly simulated) universe. But beyond that, we have no reason to believe it would even remotely resemble our universe. It would need a certain level of complexity to support a universe simulation, but beyond that, it could have any physical laws and structures whatsoever. Extrapolation is impossible. It may have infinite dimensionality, including multiple or even infinite time dimensions, or it may even have zero dimensionality, with its dynamics tied up in other abstract structures. It may contain uncomputable dynamics able to solve halting problems, so that the “computer” may not even be a Turing Machine—the mathematical basis of all computers we’re capable of creating. We have no basis for any assumptions about it at all. The apparatus on which our simulated universe runs would itself operate according to rules that are completely unknown to us.

And while this may all feel philosophically enthralling, blowing our minds with infinite possibilities, it obliterates any possible investigation. Our chances of developing a testable theory of universe simulation, without any leads, are zero. We can’t possibly know what to look for. To Occam’s Razor, a hypothesis of “infinite possibilities” is no hypothesis at all. It is the scientific equivalent of gibberish.

Occam’s Razor, of course, demands that we choose the simplest theory that explains the evidence. It’s not enough that a theory merely fits the evidence; it has to be the simplest version of itself that fits the evidence. Newton’s First Law, for instance, emphatically doesn’t contain an extra proviso that an object’s path through space will spontaneously trace out the shape of a rabbit when nobody’s looking. Objects could be doing that—you don’t know—but it’s of no use to us to entertain such speculation.

Our one hypothetical chance would be if the “parent universe” intruded into the “simulation” in some way. That is, say we found a way of accessing the computer’s camera or microphone (or equivalent) from within our simulation. We could then build up a body of direct observational evidence of the parent universe. We wouldn’t know what we’d found, at first (given that the parent universe’s laws could literally be anything), but we might eventually work it out. And unless and until such a thing comes to our attention, we have nothing whatsoever to go on.

Almost there, just an entire universe to go

To reflect on the actual state of the art for a moment: simulating even the tiniest parts of our universe already taxes our computational designs to their limits. We struggle to create an accurate simulation of more than two or three elementary particles at a time, let alone the roughly 10⁸⁰ in the observable universe (a number so large it can’t even be easily named). The complexity of quantum chromodynamics (QCD) within a single proton or neutron—not even a whole atom—demands a supercomputer to generate even an approximate prediction. The differential equations governing the laws of physics, as we know them, have no exact solution in the general case; we can only approximate them, numerically, one narrow scenario at a time.
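To give a flavour of what that approximation looks like, here's a toy sketch in plain Python — made-up masses, positions and units, an illustration of the method rather than a model of any real system. It steps the gravitational three-body problem, which has no general closed-form solution, forward with a finite timestep:

```python
G = 1.0      # gravitational constant in arbitrary units
DT = 0.001   # finite timestep: detail below this scale is simply discarded

# Each body is [mass, x, y, vx, vy].
bodies = [
    [1.0, 0.0, 0.0, 0.0,  0.1],
    [1.0, 1.0, 0.0, 0.0, -0.5],
    [0.5, 0.0, 1.0, 0.5,  0.0],
]

def step(bodies, dt):
    """Advance every body by one (crude) Euler step under Newtonian gravity."""
    accelerations = []
    for i, (m_i, x_i, y_i, _, _) in enumerate(bodies):
        ax = ay = 0.0
        for j, (m_j, x_j, y_j, _, _) in enumerate(bodies):
            if i == j:
                continue
            dx, dy = x_j - x_i, y_j - y_i
            r_cubed = (dx * dx + dy * dy) ** 1.5
            ax += G * m_j * dx / r_cubed
            ay += G * m_j * dy / r_cubed
        accelerations.append((ax, ay))
    for (ax, ay), body in zip(accelerations, bodies):
        body[3] += ax * dt          # update velocity...
        body[4] += ay * dt
        body[1] += body[3] * dt     # ...then position, accumulating error as we go
        body[2] += body[4] * dt

for _ in range(10_000):             # ten "time units" of simulated motion
    step(bodies, DT)

print([round(body[1], 3) for body in bodies])  # approximate x positions only
```

Every choice there throws information away: the timestep, the first-order integrator, the two spatial dimensions, the three bodies. Research codes are enormously more sophisticated, but they make the same kind of trade.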

The only way we can simulate anything at all, in practice, is by throwing out most of the detail. Given that we nonetheless see all that detail around us, right down to the untold trillions of subatomic particles interacting to form structures and substances of bewildering complexity, whatever is supposedly simulating us clearly cannot work anything like the simulations we know how to build.

I gather that SH proponents appeal to the inevitable march of technological progress to suggest that advanced aliens would have obviously been able to solve these challenges. This argument rests on an unearned faith in the inevitability of future discoveries, as though it’s axiomatic that we inhabit the Star Trek universe, and all that remains is a tidy schedule of progress from here to the warp drive, transporters and replicators of the Enterprise—the things we’ve been told to expect by storytellers.

You can’t know what a discovery “will be” before it’s been made.

