Einstein Would Have Loved Programmed Reality

Aren’t we all Albert Einstein fans, in one way or another?  If it isn’t because of his 20th Century revolution in physics (relativity), or his Nobel Prize that led to that other 20th Century revolution (quantum mechanics), or his endless Twainsian witticisms, it’s his underachiever-turned-genius story, or maybe even that crazy head of hair.  For me, it’s his regular-guy sense of humor:

“The hardest thing in the world to understand is the income tax.”

and…

“Put your hand on a hot stove for a minute, and it seems like an hour. Sit with a pretty girl for an hour, and it seems like a minute. THAT’S relativity.”

Albert Einstein on a bicycle in Niels Bohr's garden

But, the more I read about Albert and learn about his views on the nature of reality, the more affinity I have with his way of thinking.  He died in 1955, hardly deep enough into the digital age to have had a chance to consider the implications of computing, AI, consciousness, and virtual reality.  Were he alive today, I suspect that he would be a fan of digital physics, digital philosophy, simulism, programmed reality – whatever you want to call it.  Consider these quotes and see if you agree:

“Reality is merely an illusion, albeit a very persistent one.”

“I wished to show that space-time isn’t necessarily something to which one can ascribe a separate existence, independently of the actual objects of physical reality. Physical objects are not in space, but these objects are spatially extended. In this way the concept of ‘empty space’ loses its meaning.”

“As far as the laws of mathematics refer to reality, they are uncertain; and as far as they are certain, they do not refer to reality.”

“A human being is part of a whole, called by us the ‘Universe’ —a part limited in time and space. He experiences himself, his thoughts, and feelings, as something separated from the rest—a kind of optical delusion of his consciousness. This delusion is a kind of prison for us, restricting us to our personal desires and to affection for a few persons nearest us. Our task must be to free ourselves from this prison by widening our circles of compassion to embrace all living creatures and the whole of nature in its beauty.”

“Space does not have an independent existence.”

“Hence it is clear that the space of physics is not, in the last analysis, anything given in nature or independent of human thought.  It is a function of our conceptual scheme [mind].”

“Everyone who is seriously involved in the pursuit of science becomes convinced that a spirit is manifest in the laws of the Universe – a spirit vastly superior to that of man, and one in the face of which we with our modest powers must feel humble.”

I can only imagine the insights that Albert would have had into the mysteries of the universe, had he lived well into the computer age.  It would have given him an entirely different perspective on that conundrum that puzzled him throughout his later life – the relationship of consciousness to reality.  And he might have even tossed out the Unified Field Theory that he was forever chasing and settled in on something that looked a little more digital.

 

Plato’s Cave, Flatlanders, and Us

The Allegory of the Cave was an allegorical scenario and dialog described by Plato in his work “The Republic.”  In it, a number of prisoners occupy a cave and are forced to only look in the direction of a wall.  Behind them is a huge fire.  Between the prisoners and the fire, people walk along a walkway, their shadows being cast upon the wall and echoes of the sounds of their footsteps reflecting off the wall.  Given that the prisoners have been in that position for their entire lives, this is their entire reality.  They have built a reality around the shadows and sounds emanating from the wall.  Their “futurists” are the ones who can best predict the next shadow.  Plato then imagines what might happen if a prisoner were released and free to discover the truth about the world; what created the shadows, and what lies beyond the cave.  If he attempted to explain the truth behind the “shadow reality” to his former fellow prisoners, he would likely be shunned as they would fear and ridicule his outlandish perspective.


In 1884, Edwin Abbott Abbott wrote a novella called “Flatland: A Romance of Many Dimensions” in which the characters lived in a two-dimensional world.  Originally intended to be a social satire about Victorian culture, it is now often referenced by scientists and mathematicians who imagine the possibilities of higher dimensions.  In Flatland, the Flatlanders can’t conceive of a reality with three dimensions.  When a sphere visits their world, all they can perceive is a 2D slice of the sphere, and so they remain unconvinced that higher dimensions could exist.  Interestingly, even the sphere denies the possibility of spatial dimensions higher than three, despite his conviction, in his argument with the Flatlanders, that there is a spatial dimension higher than their two.  It seems that everyone is stuck in their own physical reality, with little imagination or open-mindedness about the possibilities of a greater one.


We are amused as we read these stories.  But are we any different?  Have we become any more enlightened as to other possibilities since Plato’s time?  In some contexts, perhaps.  Believers in some new age philosophies, followers of some ancient eastern or shamanic traditions, certain practitioners of the use of entheogenic plants, and even fundamentalists in western monotheistic religions will acknowledge that our reality is but a subset of a much greater one.  But that is the spiritual side of the great divide.  From a scientific perspective, there are very few who appear to be willing to think outside the physical reality box.

Physicist Thomas Campbell, in his “My Big TOE,” and Steven Kaufman, in his “Unified Reality Theory,” have developed comprehensive theories, based on experience and rigorous logic, which demonstrate that our physical experience is but a tiny subset of a much larger and more complex reality.  But how many scientists and rational thinkers buy into the idea?  Not many.  They are too busy living in Flatland.  Or Plato’s Cave.


Bizarro Physics

All sorts of oddities emerge from equations that we have developed to describe reality.  What is surprising is that rather than being simply mathematical artifacts, they actually show up in our physical world.

Perhaps the first such bizarro (see DC Comics) entity was antimatter: matter with the opposite charge.  A mathematical solution to Paul Dirac’s relativistic version of Schrödinger’s equation (it makes my head hurt just looking at it), antimatter was discovered four years after Dirac predicted it.

One of last year’s surprises was the negative frequencies that are solutions to Maxwell’s equations and have been shown to reveal themselves in components of light.

And, earlier this month, German physicists announced the ability to create a temperature below absolute zero.

So when we were told in physics class to throw out those “negative” solutions to equations because they were in the imaginary domain, and therefore had no basis in reality…uh, not so fast.

What I find interesting about these discoveries is the implications for the bigger picture.  If our reality were what most of us think it is – 3 dimensions of space, with matter and energy following the rules set forth by the “real” solutions to the equations of physics – one might say that reality trumps the math; that solutions to equations only make sense in the context of describing reality.

However, it appears to be the other way around – math trumps reality.  Solutions to equations previously thought to be in the “imaginary domain” are now being shown to manifest in our reality.

This is one more category of evidence that underlying our apparent reality are data and rules.  The data and rules don’t manifest from the reality; they create the reality.


Complexity from Simplicity – More Support for a Digital Reality

Simple rules can generate complex patterns or behavior.

For example, consider the following simple rules that, when programmed into a computer, can result in beautiful complex patterns akin to a flock of birds:

1. Steer to avoid crowding local flockmates (separation)
2. Steer towards the average heading of local flockmates (alignment)
3. Steer to move toward the average position (center of mass) of local flockmates (cohesion)

The pseudocode here demonstrates the simplicity of the algorithm.  The following YouTube video is a demonstration of “Boids”, a flocking behavior simulator developed by Craig Reynolds:
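Those three rules translate almost directly into code.  The following is my own minimal Python sketch, not Reynolds’s actual implementation; the neighbor radius and rule weights are arbitrary illustrative values:

```python
import math

def boid_step(boids, i, radius=5.0, sep_w=0.05, ali_w=0.05, coh_w=0.01):
    """Return the updated velocity of boid i; each boid is (x, y, vx, vy)."""
    x, y, vx, vy = boids[i]
    neighbors = [b for j, b in enumerate(boids)
                 if j != i and math.hypot(b[0] - x, b[1] - y) < radius]
    if not neighbors:
        return (vx, vy)
    n = len(neighbors)
    # Rule 1 - separation: steer away from the summed offsets to flockmates.
    sep_x = sum(x - b[0] for b in neighbors)
    sep_y = sum(y - b[1] for b in neighbors)
    # Rule 2 - alignment: steer toward the neighbors' average heading.
    ali_x = sum(b[2] for b in neighbors) / n - vx
    ali_y = sum(b[3] for b in neighbors) / n - vy
    # Rule 3 - cohesion: steer toward the neighbors' center of mass.
    coh_x = sum(b[0] for b in neighbors) / n - x
    coh_y = sum(b[1] for b in neighbors) / n - y
    return (vx + sep_w * sep_x + ali_w * ali_x + coh_w * coh_x,
            vy + sep_w * sep_y + ali_w * ali_y + coh_w * coh_y)
```

Apply this update to every boid each frame, move each boid by its velocity, and flock-like behavior emerges from nothing but these three local rules.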

Or consider fractals.  The popular Mandelbrot set can be generated with some simple rules, as demonstrated here in 13 lines of pseudocode, resulting in beautiful pictures like this:

http://upload.wikimedia.org/wikipedia/commons/thumb/a/a4/Mandel_zoom_11_satellite_double_spiral.jpg/800px-Mandel_zoom_11_satellite_double_spiral.jpg
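The core iteration really is that simple.  Here is a sketch of my own in Python; the grid bounds and the crude ASCII rendering are arbitrary choices standing in for real pixel shading:

```python
def mandelbrot_escape(c, max_iter=100):
    """Iterate z -> z*z + c; return iterations until |z| > 2, or max_iter if bounded."""
    z = 0j
    for n in range(max_iter):
        if abs(z) > 2:
            return n
        z = z * z + c
    return max_iter

def mandelbrot_grid(width=40, height=20, max_iter=50):
    """Render the set over [-2, 1] x [-1.2, 1.2] as an ASCII silhouette."""
    rows = []
    for j in range(height):
        y = 1.2 - 2.4 * j / (height - 1)
        row = ''.join(
            '#' if mandelbrot_escape(complex(-2.0 + 3.0 * i / (width - 1), y),
                                     max_iter) == max_iter else ' '
            for i in range(width))
        rows.append(row)
    return '\n'.join(rows)
```

Calling `print(mandelbrot_grid())` draws a recognizable silhouette of the set; the famous colored images simply map the escape count of each point to a color.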

Fractals can be used to generate artificial terrain for video games and computer art, such as this 3D mountain terrain generated by the software Terragen:

Terragen-generated mountain terrain

Conway’s Game of Life uses the idea of cellular automata to generate little 2D pixelated creatures that move, spawn, die, and generally exhibit crude lifelike behavior with 2 simple rules:

1. A live cell with fewer than 2 or more than 3 live neighbors dies.
2. A dead cell with exactly 3 live neighbors comes alive.

Depending on the starting conditions, any number of recognizable simulated organisms may result: some simple, such as gliders, pulsars, blinkers, glider guns, and wickstretchers, and some complex, such as puffer trains, rakes, spaceship guns, Corderships, and even objects that appear to travel faster than the maximum propagation speed of the game should allow:
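The whole game fits in a few lines.  Here is a minimal Python version of the standard rules (birth on exactly 3 neighbors, survival on 2 or 3), verified on the simplest oscillator, the blinker:

```python
from collections import Counter

def life_step(live):
    """Advance Conway's Game of Life one generation; `live` is a set of (x, y) cells."""
    # Count live neighbors for every cell adjacent to a live cell.
    counts = Counter((x + dx, y + dy)
                     for (x, y) in live
                     for dx in (-1, 0, 1) for dy in (-1, 0, 1)
                     if (dx, dy) != (0, 0))
    # Birth on exactly 3 neighbors; survival on 2 or 3.
    return {cell for cell, n in counts.items()
            if n == 3 or (n == 2 and cell in live)}

# A "blinker" is the simplest oscillator: a row of three cells with period 2.
blinker = {(0, 0), (1, 0), (2, 0)}
```

Two calls to `life_step` return the blinker to its starting configuration; every glider, puffer train, and gun emerges from this same handful of lines, only with different starting cells.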

Cellular automata can be extended to 3D space.  The following video demonstrates a 3D “Amoeba” that looks eerily like a real blob of living protoplasm:

What is the point of all this?

Just that you can apply some of these ideas to the question of whether reality is continuous or digital (and thus based on bits and rules), and end up with an interesting result.

Consider a hierarchy of complexity levels…

Imagine that each layer is 10 times “zoomed out” from the layer below.  If the root simplicity is at the bottom layer, one might ask how many layers up you have to go before the patterns appear to be natural, as opposed to artificial. [Note: As an aside, we tend to conflate ideas like natural and artificial.  Is there really a difference?]

The following image is an artificial computer-generated fractal image created by Softology’s “Visions of Chaos” software from a base set of simple rules, yet zoomed out from its base level by, perhaps, six orders of magnitude:

[Image: hybrid Mandelbulb fractal generated by Visions of Chaos]

In contrast, the following image is an electron microscope-generated image of a real HPV virus:

[Image: electron micrograph of the virus]

So, clearly, at six orders of magnitude out from a fundamental rule set, we start to lose the ability to discern “natural” from “artificial.”  Eight orders of magnitude should be sufficient to make natural indistinguishable from artificial.

And yet, our everyday sensory experience is about 36 orders of magnitude above the quantum level.

The deepest level that our instruments can currently image is about 7 levels (10,000,000x magnification) below everyday reality.  This means that if our reality is based on bits and simple rules like those described above, those rules may be operating 15 or more levels below everyday reality.  Given that the quantum level is 36 levels down, we have at least 21 orders of magnitude to play with.  In fact, it may very well be that the true granularity of reality lies below the quantum level.

In any case, it should be clear that we are not even close to being equipped to visually discern the difference between living in a continuous world and living in a digital one consisting of bits and rules.

The Digital Reality Bandwagon

I tend to think that reality is just data; that the fundamental building blocks of matter and space will ultimately be shown to be bits, nothing more.  Those who have read my book, or who follow this blog or my Twitter feed, know that this has been a cornerstone of my writing since 2006.

Not that I was the first to think of any of this.  Near as I can tell, Philip K. Dick may deserve that credit, having said “We are living in a computer programmed reality” in 1977, although I am sure that someone can find some Shakespearean reference to digital physics (“O proud software, that simulates in wanton swirl”).

Still, a mere six years ago, it was a lonely space to be in.  The few digital reality luminaries at that time included:

But since then…

– MIT Engineering Professor Seth Lloyd published “Programming the Universe” in 2006, asserting that the universe is a massive quantum computer running a cosmic program.

– Nuclear physicist Thomas Campbell published his excellent unifying theory “My Big TOE” in 2007.

– Brian Whitworth, PhD, authored a paper containing evidence that our reality is programmed: “The emergence of the physical world from information processing,” Quantum Biosystems 2010, 2 (1), 221-249.  http://arxiv.org/abs/0801.0337

– University of Maryland physicist, Jim Gates, discovered error-correction codes in the laws of physics. See “Symbols of Power”, Physics World, Vol. 23, No 6, June 2010.

– Fermilab astrophysicist, Craig Hogan, speculated that space is quantized.  This was based on results from GEO600 measurements in 2010.  See: http://www.wired.com/wiredscience/2010/10/holometer-universe-resolution/.  A holometer experiment is being constructed to test: http://holometer.fnal.gov/

– Rich Terrile, director of the Center for Evolutionary Computation and Automated Design at NASA’s Jet Propulsion Laboratory, hypothesized that we are living in a simulated reality. http://www.vice.com/read/whoa-dude-are-we-inside-a-computer-right-now-0000329-v19n9

– Physicists Leonard Susskind and Gerard ’t Hooft developed the holographic principle (the idea, drawn from black hole physics, that our universe may be digitally encoded on a distant two-dimensional surface).

Even mainstream media outlets are dipping a toe into the water to see what kinds of reactions they get, such as this recent article in New Scientist Magazine: http://www.newscientist.com/article/mg21528840.800-reality-is-everything-made-of-numbers.html

So, today, I feel like I am in really great company, and it is fun to watch all of the futurists, philosophers, and scientists jump on the new digital reality bandwagon.  The plus side will include the infusion of new ideas and the resulting synthesis of theory, as well as the pushing of the boundaries of experimental validation.  The downside will be all of the so-called experts jockeying for position.  In any case, it promises to be a wild ride, one that should last the twenty or so years it will take to create the first full-immersion reality simulation.  Can’t wait.

The Ultimate Destiny of the Nature of Matter is Something Very Familiar

Extrapolation is a technique for projecting a trend into the future.  It has been used liberally by economists, futurists, and other assorted big thinkers for many years, to project population growth, food supply, market trends, singularities, technology directions, skirt lengths, and other important trends.  It goes something like this:

If a city’s population has been growing by 10% per year for many years, one can safely predict that it will be around 10% higher next year, 21% higher in two years, and so on.  Or, if chip density has been doubling every two years (as it has for the past 40), one can predict that it will be 8 times greater than today in six years (Moore’s Law).  Ray Kurzweil and other Singularity fans extrapolate technology trends to conclude that our world as we know it will come to an end in 2045 in the form of a technological singularity.  Of course, there are always unknown and unexpected events that can cause these predictions to be too low or too high, but given the information that is known today, extrapolation is still a useful technique.
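Both examples reduce to one compounding formula.  A quick sketch, using the figures above:

```python
def extrapolate(current, rate, periods):
    """Project a quantity growing by `rate` per period (0.10 = 10%) over `periods` periods."""
    return current * (1 + rate) ** periods

# A city growing 10% per year is ~21% larger after two years.
city = extrapolate(100, 0.10, 2)       # ~121

# Chip density doubling every two years: three doublings in six years, i.e. 8x.
density = extrapolate(1, 1.0, 6 / 2)   # 8.0
```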

To my knowledge, extrapolation has not really been applied to the problem that I am about to present, but I see no reason why it couldn’t give an interesting projection…

…for the nature of matter.

In ancient Greece, Democritus put forth the idea that solid objects were composed of atoms of that element or material, either jammed tightly together, as in the case of a solid object, or separated by a void (space).  These atoms were thought to be little indivisible billiard-ball-like objects made of some sort of “stuff.”  Thinking this through a bit, it was apparent that if atoms were spherical and crammed together in an optimal fashion, then matter was at most 74% “stuff” (the densest possible packing of spheres), the rest being empty space.  So, for example, a solid bar of gold was really only 74% gold “stuff,” at most.

That view of matter was resurrected by John Dalton in the early 1800s and revised once J. J. Thomson discovered electrons.  At that point, atoms were thought to look like plum pudding, with electrons embedded in the proton pudding.  Still, the density of “stuff” didn’t change, at least until the early 1900s, when Ernest Rutherford determined that atoms were actually composed of a tiny, dense nucleus and a shell of electrons.  Further measurements revealed that these subatomic particles (protons, electrons, and later, neutrons) were actually very tiny compared to the overall atom and, in fact, most of the atom was empty space.  That model, coupled with the realization that atoms in a solid actually have to have some distance between them, completely changed our view of how dense matter was.  It turned out that our gold bar was only 1 part in 10^15 “stuff.”

That was, until the mid-1960s, when quark theory was proposed, which said that protons and neutrons were actually composed of three quarks each.  As the theory (now part of QCD) is fairly well accepted, and some estimates have been made of quark sizes, one can calculate that since quarks are between a thousand and a million times smaller than the subatomic particles they make up, matter is 10^9 to 10^18 times more tenuous than previously thought.  Hence, our gold bar is now only about 1 part in 10^30 (give or take a few orders of magnitude) “stuff,” the rest being empty space.  By way of comparison, about 1.3 × 10^32 grains of sand would fit inside the earth.  So matter is roughly as dense with “stuff” as one grain of sand is to our entire planet.

So now we have three data points to start our extrapolation.  Since the percentage of “stuff” that matter is made of is shrinking exponentially over time, we can’t plot our trend in normal scales, but need to use log-log scales.

And now, of course, we have string theory, which says that all subatomic particles are really just bits of string vibrating at specific frequencies, each string possibly having a width of the Planck length.  If so, subatomic particles would themselves be empty space in all but 1 part in 10^38, leaving our gold bar with just 1 part in 10^52 of “stuff.”
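Laying out the logarithms of these “stuff” fractions makes the direction of the trend obvious.  The figures below are the rough ones from this discussion:

```python
import math

# Fraction of a solid gold bar that is "stuff" under each successive model of matter:
models = [
    ("Democritus / Dalton (packed spheres)", 0.74),
    ("Rutherford (nuclear atom)",            1e-15),
    ("Quark model",                          1e-30),
    ("String theory",                        1e-52),
]

for name, fraction in models:
    print(f"{name:40s} log10(stuff fraction) = {math.log10(fraction):7.1f}")
```

Each new model of matter drops the fraction by tens of orders of magnitude; on a log plot the trend heads straight toward zero.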

Gets kind of ridiculous doesn’t it?  Doesn’t anyone see where this is headed?

In fact, if particles are composed of strings, why do we even need the idea of “stuff?”  Isn’t it enough to define the different types of matter by a single number – the frequency at which the string vibrates?

What is matter anyway?  It is a number assigned to a type of object that has to do with how that object behaves in a gravitational field.  In other words, it is just a rule.

We don’t really experience matter.  What we experience is electromagnetic radiation influenced by some object that we call matter (visual).  And the effect of the electromagnetic force rule due to the repulsion of charges between the electron shells of the atoms in our fingers and the electron shells of the atoms in the object (tactile).

In other words, rules.

In any case, if you extrapolate our scientific progress, it is easy to see that the ratio of “stuff” to “space” is trending toward zero.  Which means what?

That matter is most likely just data.  And the forces that cause us to experience matter the way we do are just rules about how data interacts with itself.

Data and Rules – that’s all there is.

Oh yeah, and Consciousness.


My Body, the Avatar

Have you ever wondered how much information the human brain can store?  A little analysis reveals some interesting data points…

The human brain contains an estimated 100 trillion synapses.  There doesn’t appear to be a finer level of structure to the neural cells, so this represents the maximum number of memory elements that a brain can hold.  Assume for a moment that each synapse can hold a single bit; then the brain’s capacity would be 100 trillion bits, or about 12.5 terabytes. There may be some argument that there is actually a distribution of brain function, or redundancy of data storage, which would reduce the memory capacity of the brain.  On the other hand, one might argue that synapses may not be binary and hence could hold somewhat more information.  So it seems that 12.5 TB is a fairly good and conservative estimate.

It has also been estimated (see “On the Information Processing Capabilities of the Brain: Shifting the Paradigm” by Simon Berkovich) that, in a human lifetime, the brain processes 3 million times that much data.  This all makes sense if we assume that most (99.99997%) of our memory data is discarded over time, due to lack of need.
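As a sanity check, the arithmetic behind both estimates:

```python
synapses = 100e12              # ~100 trillion synapses
bits_per_synapse = 1           # conservative assumption: one bit per synapse
capacity_tb = synapses * bits_per_synapse / 8 / 1e12
print(capacity_tb)             # 12.5 (terabytes)

# Berkovich's estimate: a lifetime of processing is ~3 million times that capacity,
# so the retained fraction is 1 in 3 million - i.e., 99.99997% discarded.
lifetime_multiple = 3e6
retained_fraction = 1 / lifetime_multiple
print(f"{(1 - retained_fraction) * 100:.5f}% discarded")
```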

But then, how would we explain the exceptional capabilities of autistic savants, or of people with hyperthymesia or eidetic memory (total recall)?  The memories that these individuals retrieve cannot all be stored in the brain at the same time.  In other words, memories – the record of our experiences – are not solely stored in the brain.  Some may be, such as those most recently used or most frequently needed.

Those who are trained in Computer Science will recognize the similarities between these characteristics and the idea of a cache memory, a high speed storage device that stores the most recently used, or frequently needed, data for quick access.

As cardiologist and science researcher Pim van Lommel said, “the computer does not produce the Internet any more than the brain produces consciousness.”

Why is this so hard to believe?

After all, there is no real proof that all memories are stored in the brain.  There is only research showing that some memories are stored in the brain and can be triggered by electrically stimulating certain portions of the cerebral cortex.  By the argument above, I would say that experimental evidence and logic are on the side of non-local memory storage.

In a similar manner, while there is zero evidence that consciousness is an artifact of brain function, Dr. van Lommel has shown that there is extremely strong evidence that consciousness is not a result of brain activity.  It is enabled by the brain, but not seated there.

These two arguments – the non-local seat of consciousness and the non-local seat of memories – are congruent, and therefore all the more compelling for the case that our bodies are simply avatars.

Things We Can’t Feel – The Mystery Deepens

In my last blog “Things We Can’t See”, we explored the many different ways that our eyes, brains, and/or technology can fool us into seeing something that isn’t there or not seeing something that is.

So apparently, our sense of sight is not necessarily the most reliable sense in terms of identifying what is and isn’t in our objective reality.  We would probably suspect that our sense of touch is fairly foolproof; that is, if an object is “there”, we can “feel” it, right?

Not so fast.

First of all, we have a lot of the same problems with the brain as we did with the sense of sight.  The brain processes all of that sensory data from our nerve endings.  How do we know what the brain really does with that information?  Research shows that sometimes your brain can think that you are touching something that you aren’t or vice versa.  People who have lost limbs still have sensations in their missing extremities.  Hypnosis has been shown to have a significant effect in terms of pain control, which seems to indicate the mind’s capacity to override one’s tactile senses.  And virtual reality experiments have demonstrated the ability for the mind to be fooled into feeling something that isn’t there.

In addition, technology can be made to create havoc with our sense of touch, although the most dramatic of such effects are decades into the future.  Let me explain…

Computer Scientist J. Storrs Hall developed the concept of a “Utility Fog.”  Imagine a “nanoscopic” object called a Foglet, which is an intelligent nanobot, capable of communicating with its peers and having arms that can hook together to form larger structures.  Trillions of these Foglets could conceivably fill a room and not be at all noticeable as long as they were in “invisible mode.”  In fact, not only might they be programmed to appear transparent to the sight, but they may be imperceptible to the touch.  This is not hard to imagine, if you allow that they could have sensors that detect your presence.  For example, if you punch your fist into a swarm of nanobots programmed to be imperceptible, they would sense your motion and move aside as you swung your fist through the air.  But at any point, they could conspire to form a structure – an impenetrable wall, for example.  And then your fist would be well aware of their existence.  In this way, technology may be able to have a dramatic effect on our complete ability to determine what is really “there.”


But even now, long before nanobot swarms are possible, the mystery really begins, as we have to dive deeply into what is meant by “feeling” something.

Feeling is the result of a part of our body coming in contact with another object.  That contact is “felt” by the interaction between the molecules of the body and the molecules of the object.

Even solid objects are mostly empty space.  If subatomic particles, such as neutrons, are made of solid mass, like little billiard balls, then 99.999999999999% of normal matter would still be empty space.  That is, of course, unless those particles themselves are not really solid matter, in which case, even more of space is truly empty, more about which in a bit.

So why don’t solid objects like your fist slide right through other solid objects like bricks?  Because of the repulsive effect that the electromagnetic force from the electrons in the fist apply against the electromagnetic force from the electrons in the brick.

But what about that neutron?  What is it made of?  Is it solid?  Is it made of the same stuff as all other subatomic particles?

The leading theories of matter do not favor the idea that subatomic particles are like little billiard balls of differing masses.  For example, string theorists speculate that all particles are made of the same stuff; namely, vibrating bits of string.  Except that they each vibrate at different frequencies.  Problem is, string theory is purely theoretical and really falls more in the mathematical domain than the scientific domain, inasmuch as there is no supporting evidence for the theory.  If it does turn out to be true, even the neutron is mostly empty space because the string is supposedly one-dimensional, with a theoretical cross section of a Planck length.

Here’s where it gets really interesting…

Neutrinos are extremely common yet extremely elusive particles of matter.  About 100 trillion neutrinos generated in the sun pass through our bodies every second.  Yet they barely interact at all with ordinary matter.  Neutrino capture experiments consist of configurations such as a huge underground tank containing 100,000 gallons of tetrachloroethylene buried nearly a mile below the surface of the earth.  100 billion neutrinos strike every square centimeter of the tank per second.  Yet any particular molecule of tetrachloroethylene is likely to interact with a neutrino only once every 10^36 seconds (billions of billions of times the age of the universe).

The argument usually given for the neutrino’s elusiveness is that it is nearly massless (and therefore not easily captured by a nucleus) and chargeless (and therefore not subject to the electromagnetic force).  Then again, photons are massless and chargeless and are easily captured, as anyone who has spent too much time in the sun can attest.  So there has to be some other reason that we can’t detect neutrinos.  Unfortunately, given the current understanding of particle physics, no good answer is forthcoming.

And then there is dark matter.  This concept is the current favorite explanation for some anomalies in the orbital speeds of galaxies.  Gravity alone can’t explain the anomalies, so dark matter is inferred.  If it really exists, it represents about 83% of the mass in the universe, but it doesn’t interact with any of the known forces, with the exception of gravity.  This means that dark matter is all around us; we just can’t see it or feel it.

So it seems that modern physics allows for all sorts of types of matter that we can’t see or feel.  When you get down to it, the reason for this is that we don’t understand what matter is at all.  According to the standard model of physics, particles should have no mass, unless there is a special quantum field that pervades the universe and gives rise to mass upon interacting with those particles.  Unfortunately, for that to have any credibility, the signature particle, the Higgs boson, would have to exist.  Thus far, it seems to be eluding even the most powerful of particle colliders.  One alternative theory of matter has it being an emergent property of particle fluctuations in the quantum vacuum.

For a variety of reasons, some of which are outlined in “The Universe – Solved!” and many others which have come to light since I wrote that book, I suspect that ultimately matter is simply a property of an entity that is described purely by data and a set of rules, driven by a complex computational mechanism.  Our attempt to discover the nature of matter is synonymous with our attempt to discover those rules and associated fundamental constants (data).

In terms of other things that we can’t perceive, new age enthusiasts might call out ghosts, spirits, auras, and all sorts of other mysterious invisible and tenuous entities.


Given that we know that things exist that we can’t perceive, one has to wonder if it might be possible for macroscopic objects, or even macroscopic entities that are driven by similar energies as humans, to be made from stuff that we can only tenuously detect, not unlike neutrinos or dark matter.  Scientists speculate about multiple dimensions and parallel universes via Hilbert Space and other such constructs.  If such things exist (and wouldn’t it be hypocritical of anyone to speculate or work out the math for such things if it weren’t possible for them to exist?), the rules that govern our interaction with them, across the dimensions, are clearly not at all understood.  That doesn’t mean that they aren’t possible.

In fact, the scientific world is filled with trends leading toward the implication of an information-based reality.

In which almost anything is possible.

The Observer Effect and Entanglement are Practically Requirements of Programmed Reality

Programmed Reality has been an incredibly successful concept in terms of explaining the paradoxes and anomalies of Quantum Mechanics, including non-Reality, non-Locality, the Observer Effect, Entanglement, and even the Retrocausality of John Wheeler’s Delayed Choice Quantum Eraser experiment.

I came up with those explanations by thinking about how Programmed Reality could explain such curiosities.

But I thought it might be interesting to view the problem in the reverse manner.  If one were to design a universe-simulating Program, what kinds of curiosities might result from an efficient design?  (Note: I fully realize that any entity advanced enough to simulate the universe probably has a computational engine far more advanced than we can even imagine; most definitely not of the von Neumann variety.  Yet we can only work with what we know, right?)

So, if I were to create such a thing, for instance, I would probably model data in the following manner:

For any space unobserved by a conscious entity, there is no sense in creating the reality for that space in advance.  It would unnecessarily consume too many resources.

For example, consider the cup of coffee on your desk.  Is it really necessary to model every single subatomic particle in the cup of coffee in order to interact with it in the way that we do?  Of course not.  The total amount of information contained in that cup of coffee necessary to stimulate our senses in the way that it does (generate the smell that it does; taste the way it does; feel the way it does as we drink it; swish around in the cup the way that it does; have the little nuances, like tiny bubbles, that make it look real; have the properties of cooling at the right rate to make sense; etc.) might be 10 MB or so.  Yet the total potential information content in a cup of coffee is 100,000,000,000 MB, so a compression ratio of perhaps ten billion can be applied to an ordinary object.

But once you decide to isolate an atom in that cup of coffee and observe it, the Program would then have to establish a definitive position for that atom, effectively resulting in the collapse of the wave function, or decoherence.  Moreover, the complete behavior of the atom, at that point, might be forever under control of the Program.  After all, why delete the model once observed, in the event (probably fairly likely) that it will be observed again at some point in the future?  Thus, the atom would have to be described by a little mini finite state machine, its behavior decided by randomly picking values of the parameters that drive it, such as atomic decay.

So, the process of “zooming in” on reality in the Program would have to result in exactly the type of behavior observed by quantum physicists.  In other words, in order to be efficient, resource-wise, the Program decoheres only the space and matter that it needs to.

Let’s say we zoom in on two particles at the same time; two that are in close proximity to each other.  Both would have to be decohered by the Program.  The decoherence would result in the creation of two mini finite state machines.  Using the same random number seed for both will cause the state machines to forever behave in an identical manner.

No matter how far apart you take the particles.  That is…

Entanglement!

So, Observer Effect and Entanglement might both be necessary consequences of an efficient Programmed Reality algorithm.
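The shared-seed mechanism is easy to demonstrate.  In this toy sketch of my own (`ParticleModel` is purely hypothetical, with Python’s `random.Random` standing in for the Program’s per-particle state machine), two “particles” seeded identically agree on every “measurement” forever, with no communication between them:

```python
import random

class ParticleModel:
    """Toy finite state machine for a decohered particle: all of its
    'random' quantum behavior is driven by a deterministic PRNG."""
    def __init__(self, seed):
        self.rng = random.Random(seed)

    def measure_spin(self):
        # Each measurement outcome comes from the particle's own state machine.
        return self.rng.choice(["up", "down"])

# Two particles decohered together share one seed...
a = ParticleModel(seed=2013)
b = ParticleModel(seed=2013)

# ...and now agree on every measurement, however far apart they drift.
print(all(a.measure_spin() == b.measure_spin() for _ in range(1000)))   # True
```

Real entanglement is of course subtler than identical outcomes, but the sketch captures the design point: perfect correlation at any distance falls out of a shared deterministic state, with no signal passing between the particles.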

 


Yesterday’s Sci-Fi is Tomorrow’s Technology

It is the end of 2011 and it has been an exciting year for science and technology.  Announcements about artificial life, earthlike worlds, faster-than-light particles, clones, teleportation, memory implants, and tractor beams have captured our imagination.  Most of these things would have been unthinkable just 30 years ago.

So, what better way to close out the year than to take stock of yesterday’s science fiction in light of today’s reality and tomorrow’s technology.  Here is my take:

[Image: yesterday’s sci-fi vs. today’s and tomorrow’s technology]