January 24, 2012
Programmed Reality has been a remarkably successful concept for explaining the paradoxes and anomalies of Quantum Mechanics, including non-Reality, non-Locality, the Observer Effect, Entanglement, and even the retrocausality of John Wheeler’s Delayed Choice Quantum Eraser experiment.
I came up with those explanations by thinking about how Programmed Reality could explain such curiosities.
But I thought it might be interesting to view the problem in the reverse manner. If one were to design a universe-simulating Program, what kinds of curiosities might result from an efficient design? (Note: I fully realize that any entity advanced enough to simulate the universe probably has a computational engine far more advanced than we can even imagine; most definitely not of the von Neumann variety. Yet, we can only work with what we know, right?)
So, if I were to create such a thing, for instance, I would probably model data in the following manner:
For any space unobserved by a conscious entity, there is no sense in creating the reality for that space in advance. It would unnecessarily consume too many resources.
For example, consider the cup of coffee on your desk. Is it really necessary to model every single subatomic particle in the cup of coffee in order to interact with it in the way that we do? Of course not. The total amount of information contained in that cup of coffee necessary to stimulate our senses in the way that it does (generate the smell that it does; taste the way it does; feel the way it does as we drink it; swish around in the cup the way that it does; have the little nuances, like tiny bubbles, that make it look real; have the properties of cooling at the right rate to make sense, etc.) might be 10 MB or so. Yet, the total potential information content in a cup of coffee is 100,000,000,000 MB, so a compression ratio of roughly ten billion to one can be applied to an ordinary object.
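As a sanity check, here is the arithmetic on those two (admittedly loose) estimates:

```python
# Rough "compression ratio" arithmetic from the estimates above.
# Both figures are illustrative guesses from the text, not measurements.
perceptual_info_mb = 10             # info needed to stimulate our senses
full_info_mb = 100_000_000_000      # total potential information content

ratio = full_info_mb // perceptual_info_mb
print(ratio)  # 10000000000 — a factor of ten billion
```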
But once you decide to isolate an atom in that cup of coffee and observe it, the Program would then have to establish a definitive position for that atom, effectively resulting in the collapse of the wave function, or decoherence. Moreover, the complete behavior of the atom, from that point on, might be forever under the control of the Program. After all, why delete the model once observed, in the (probably fairly likely) event that it will be observed again at some point in the future? Thus, the atom would have to be described by a finite state machine, its behavior decided by randomly picking values for the parameters that drive it, such as the rate of atomic decay. In other words, observation creates a little mini finite state machine.
So, the process of “zooming in” on reality in the Program would have to result in exactly the type of behavior observed by quantum physicists. In other words, in order to be efficient, resource-wise, the Program decoheres only the space and matter that it needs to.
Let’s say we zoom in on two particles at the same time; two that are in close proximity to each other. Both would have to be decohered by the Program, resulting in the creation of two mini finite state machines. Using the same random number seed for both would cause the state machines to behave in an identical manner forever, no matter how far apart you take the particles.
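The shared-seed idea can be sketched in a few lines; the seed value and the coin-flip "spin" measurement are illustrative assumptions:

```python
import random

# Two "decohered" particles handed the same random seed at observation time.
# Because their pseudo-random streams are identical, the two state machines
# agree forever -- regardless of any notion of distance between them.
SHARED_SEED = 1234  # illustrative value

particle_a = random.Random(SHARED_SEED)
particle_b = random.Random(SHARED_SEED)

spins_a = ["up" if particle_a.random() < 0.5 else "down" for _ in range(10)]
spins_b = ["up" if particle_b.random() < 0.5 else "down" for _ in range(10)]

print(spins_a == spins_b)  # True -- perfectly correlated, with no communication
```

No signal ever passes between the two particles; the correlation is baked in at the moment of decoherence, which is exactly what makes entanglement cheap for the Program.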
So, Observer Effect and Entanglement might both be necessary consequences of an efficient Programmed Reality algorithm.