Quantum Zeno Effect Solved

Lurking amidst the mass chaos of information that exists in our reality is a little gem of a concept called the Quantum Zeno Effect.  It is partially named after ancient Greek philosopher Zeno of Elea, who dreamed up a number of paradoxes about the fluidity of motion and change.  For example, the “Arrow Paradox” explores the idea that if you break down time into “instants” of zero duration, motion cannot be observed.  Thus, since time is composed of a set of instants, motion doesn’t truly exist.  We might consider Zeno to have been far ahead of his time as he appeared to be thinking about discrete systems and challenging the continuity of space and time a couple thousand years before Alan Turing resurrected the idea in relation to quantum mechanics: “It is easy to show using standard theory that if a system starts in an eigenstate of some observable, and measurements are made of that observable N times a second, then, even if the state is not a stationary one, the probability that the system will be in the same state after, say, one second, tends to one as N tends to infinity; that is, that continual observations will prevent motion …”.  The term “Quantum Zeno Effect” was first used by physicists George Sudarshan and Baidyanath Misra in 1977 to describe just such a system – one that does not change state because it is continuously observed.

The challenge with this theory has been in devising experiments that can verify or falsify it.  However, technology has caught up to philosophy and, over the last 25 years, a number of experiments have been performed which seem to validate the effect.  In 2001, for example, physicist Mark Raizen and a team at the University of Texas showed that the effect is indeed real and the transition of states in a system can be either slowed down or sped up simply by taking measurements of the system.

I have enjoyed making a hobby of fully explaining quantum mechanical anomalies with the programmed reality theory.  Admittedly, I don’t always fully grasp some of the deep complexities and nuances of the issues that I am tackling, due partly to the fact that I have a full-time job that has naught to do with this stuff, and partly to the fact that my math skills are a bit rusty, but thus far, it doesn’t seem to make a difference.  The more I dig into each issue, the more I find things that simply support the idea that we live in a digital (and programmed) reality.

The quantum Zeno effect might not be observed in every case.  It only works for processes that have memory.  Exponential decay, for instance, is a memoryless process: frequent observation of a particle undergoing radioactive decay would not affect the result.  [As an aside, I find it very interesting that a “memoryless system” invokes the idea of a programmatic construct.  Perhaps with good reason…]

A system with memory, or “state”, however, is, in theory, subject to the quantum Zeno effect.  It will manifest itself by appearing to reset the experiment clock every time an observation is made of the state of the system.  The system under test will have a characteristic set of changes that vary over time.  In the case of the University of Texas experiment, trapped ions tended to remain in their initial state for a brief interval before beginning to change state via quantum tunneling, according to some probability function.  For the sake of developing a clear illustration, let’s imagine a process whereby a particle remains in its initial quantum state (let’s call it State A) for 2 seconds before probabilistically decaying to its final state (State B) according to a linear function over the next second.  Figure A shows the probability of finding the particle in State A as a function of time.  For the first 2 seconds, of course, it has a 0% probability of changing state, and between 2 and 3 seconds it has an equal probability of moving to State B at any point in time.  A system with this behavior, left on its own and measured at any point after 3 seconds, will be in State B.

[Figure A: probability of finding the particle in State A as a function of time]

What happens, however, when you make a measurement of that system, to check and see if it changed state, at t=1 second?  Per the quantum Zeno effect, the experiment clock will effectively be reset and now the system will stay in State A from t=1 to t=3 and then move to State B at some point between t=3 and t=4.  If you make another measurement one second later, at t=2, the clock will again reset, delaying the transition by another second.  In fact, if you continue to measure the state of the system every second, it will never change state.  Note that this has absolutely nothing to do with the physical impact of the measurement itself; a 100% non-intrusive observation will have exactly the same result.
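To make the clock-reset idea concrete, here is a minimal Python sketch of the toy process above.  The function and its structure are my own illustration, not drawn from any physics library:

```python
import random

def measure_system(measurement_times):
    """Toy model: the particle holds State A for 2 s, then decays to
    State B at a uniformly random moment during the following 1 s.
    Per the clock-reset interpretation above, any measurement that
    finds the system still in State A restarts the process."""
    clock = 0.0                        # time since the last reset
    last_t = 0.0
    decay_at = 2.0 + random.random()   # scheduled decay, on the local clock
    for t in sorted(measurement_times):
        clock += t - last_t
        last_t = t
        if clock >= decay_at:
            return "B"                 # decay happened before this look
        clock = 0.0                    # observed in State A: clock resets
        decay_at = 2.0 + random.random()
    return "A"

print(measure_system([1, 2, 3, 4, 5]))  # -> 'A' (measured every second)
print(measure_system([5]))              # -> 'B' (left alone until t=5)
```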

Also note that it isn’t that the clock doesn’t reset for a memoryless system, but rather that it doesn’t matter, because you cannot observe any difference.  One may argue that if you make observations at the Planck frequency (one per jiffy), even a memoryless system might never change state.  This actually approaches the true nature of Zeno’s arguments, but that is a topic for another essay, one that is much more philosophical than falsifiable.  In fact, “Quantum Zeno Effect” is a misnomer.  The memory-bearing system described above really has little to do with the ad infinitum inspection of Zeno’s paradoxes, but we are stuck with the name.  And I digress.

So why would this happen?

It appears to be related in some way to the observer effect and to entanglement:

  • Observer Effect – Once observed, the state of a system changes.
  • Entanglement – Once observed, the states of multiple particles (or, rather, the state of a system of multiple particles) are forever connected.
  • Quantum Zeno – Once observed, the state of a system is reset.

What is common to all three of these apparent quantum anomalies is the coupling of the act of observation with the concept of a state.  For the purposes of this discussion, it will be useful to invoke the computational concept of a finite state machine, which is a system that changes state according to a set of logic rules and some input criteria.
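For anyone unfamiliar with the term, here is a minimal sketch of a finite state machine in Python.  The states and inputs are invented for illustration, loosely echoing the State A / State B example above:

```python
# Transition table: (current state, input) -> next state.
TRANSITIONS = {
    ("A", "tunnel"): "B",    # the decay event moves the system to B
    ("A", "observe"): "A",   # observation leaves it in A (and, per the
    ("B", "observe"): "B",   #  discussion above, resets its clock)
}

def step(state, event):
    """Apply one input to the machine; unknown inputs leave the state alone."""
    return TRANSITIONS.get((state, event), state)

state = "A"
for event in ["observe", "observe", "tunnel", "observe"]:
    state = step(state, event)
print(state)  # -> 'B'
```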

I have explained the Observer effect and Entanglement as logical necessities of an efficient programmed reality system.  What about Quantum Zeno?  Why would it not be just as efficient to start the clock on a process and let it run, independent of observation?

A clue to the answer is that the act of observation appears to create something.

In the Observer effect, it creates the collapse of the probability wave functions and the establishment of definitive properties of certain aspects of the system under observation (e.g. position).  This is not so much a matter of efficiency as it is of necessity: without probability, free will doesn’t exist; without free will, we can’t learn; and if the purpose of our system is to grow and evolve, then by necessity, observation must collapse probability.

In Entanglement, the act of observation may create the initiation of a state machine, which subsequently determines the behavior of the particles under test.  Those particles are just data, as I have shown, and the data elements are part of the same variable space of the state machine.  They both get updated simultaneously, regardless of the “virtual” distance between them.

So, in Quantum Zeno, the system under test is in probability space.  The act of observation “collapses” this initial probability function and kicks off the mathematical process by which future states are determined based on the programmed probability function.  But that is now a second level of probability function; call it probability function 2.  Observing this system a second time must now collapse probability function 2.  But to do so means that the system would now have to calculate a modified probability function 3 going forward – one that takes into account the fact that some aspect of the state machine has already been determined (e.g. the system has or hasn’t started its decay).  For systems with memory, this could be an arbitrarily complex function (3), since it may take a different shape for every time at which the observation occurs.  A third measurement complicates the function even further, because even more states are ruled out.
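To see concretely why each observation makes the recomputed function messier, here is a hedged sketch for the toy process above; the function name is mine.  Note how the shape of the conditional distribution depends on when the observation was made:

```python
def conditional_decay_cdf(t_obs, t):
    """P(decayed by time t | still in State A when observed at t_obs),
    for the toy process above: no decay before 2 s, then uniform
    decay between 2 s and 3 s."""
    start = max(t_obs, 2.0)       # decay cannot have begun before 2 s
    if t <= start:
        return 0.0
    if t >= 3.0:
        return 1.0
    return (t - start) / (3.0 - start)   # slope depends on t_obs

print(conditional_decay_cdf(0.0, 2.5))   # 0.5: the original function
print(conditional_decay_cdf(2.5, 2.75))  # 0.5: steeper after observing at 2.5
```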

On the other hand, for a reality system optimized for efficiency, it would be far cheaper to simply reset the probability function each time an observation is made.

The only drawback to this algorithm is the fact that smart scientists are starting to notice these little anomalies, although the assumption here is that the reality system “cares.”  It may not.  Or perhaps that is why most natural processes are exponential, or memoryless – it is a further efficiency of the system.  Man-made experiments, however, don’t follow the natural process and may be designed to be arbitrarily complex, which ironically serves to give us this tiny little glimpse into the true nature of reality.

What we are doing here is inferring deep truths about our reality that are in fundamental conflict with the standard materialist view.  This will be happening more and more as time goes forward and physicists and philosophers will soon have no choice but to consider programmed reality as their ToE.


RIP Kardashev Civilization Scale

In 1964, Soviet astronomer Nikolai Kardashev proposed a model for categorizing technological civilizations.  He identified three “Types” (a Type 0, for civilizations below the first rung, was appended later by others), simplified as follows:

Type 0 – Civilization that has not yet learned to utilize the full set of resources available to them on their home planet (e.g. oceans, tidal forces, geothermal forces, solar energy impinging upon the planet, etc.)

Type 1 – Civilization that fully harnesses, controls, and utilizes the resources of their planet.

Type 2 – Civilization that fully harnesses, controls, and utilizes the resources of their star system.

Type 3 – Civilization that fully harnesses, controls, and utilizes the resources of their galaxy.


As with philosophical thought, literature, art, music, and other concepts and artifacts generated by humanity, technological and scientific pursuits reflect the culture of the time.  In 1964, we were on the brink of nuclear war.  The space race was in full swing and the TV show “Star Trek” was about to begin triggering the imagination of laymen and scientists alike.  We thought in terms of conquering people and ideas, and in terms of controlling resources.  What countries are in the Soviet bloc?  What countries are under US influence?  Who has access to most of the oil?  Who has the most gold, the most uranium?

The idea of dominating the world was evident in our news and our entertainment.  Games like Risk and Monopoly were unapologetically imperialistic.  Every Bond plot was about world domination.

Today, many of us find these ideas offensive.  To start with, imperialism is an outdated concept founded on the assumption of superiority of some cultures over others.  The idea of harnessing all planetary resources is an extension of imperialistic mentality, one that adds all other life forms to the entities that we need to dominate.  Controlling planetary resources for the sake of humanity is tantamount to stealing those same resources from other species that may need them.  Further, our attempt to control resources and technology can lead to some catastrophic outcomes.  Nuclear Armageddon, grey goo, overpopulation, global warming, planetary pollution, and (human-caused) mass extinctions are all examples of potentially disastrous consequences of attempts to dominate nature or technology without fully understanding what we are doing.

I argue in “Alien Hunters Still Thinking Inside The Box (or Dyson Sphere)” that attempting to fully harness all of the energy from the sun is increasingly unnecessary and unlikely to figure in our evolution as a species.  Necessary energy consumption per capita is flattening for developing cultures and declining for mature ones.  Technological advances allow us to get much more useful output from our devices as time goes forward.  And humanity is beginning to de-emphasize raw size and power as desirable attributes (for example, see right-sizing economic initiatives) and instead focus on the value of consciousness.

Certainly, then, the hallmarks of advanced civilizations are not going to be anachronistic metrics of how much energy they can harness.  So what metrics might be useful?

How about:  Have they gotten off their planet?  Have they gotten out of their solar system?  Have they gotten out of their galaxy?

Somehow, I feel that even this is misleading.  Entanglement shows that everything is interconnected.  The observer effect demonstrates that consciousness transcends matter.  So perhaps the truly advanced civilizations have learned that they do not need to physically travel, but rather mentally travel.

How about: How little of an impact footprint do they leave on their planet?

The assumption here is that advanced civilizations follow a curve like the one below, whereby early in their journey they have a tendency to want to consume resources, but eventually evolve to have less and less of a need to consume or use energy.

[Figure: hypothesized curve of resource consumption vs. civilization advancement, rising early and declining with maturity]

How about: What percentage of their effort is expended upon advancing the individual versus the society, the planetary system, or the galactic system?

or…

How about: Who cares?  Why do we need to assign a level to a civilization anyway?  Is there some value to having a master list of evolutionary stage of advanced life forms?  So that we know who to keep an eye on?  That sounds very imperialistic to me.

Of course, I am as guilty of musing about the idea of measuring the level of evolution of a species through a 2013 cultural lens as Kardashev was of doing so through a 1964 cultural lens.  But still, it is 50 years hence and time to either revise or retire an old idea.

Flexi Matter

Earlier this year, a team of scientists at the Max Planck Institute of Quantum Optics, led by Randolf Pohl, made a highly accurate measurement of the charge radius of the proton and, at 0.841 fm, it turned out to be 4% less than previously determined (0.877 fm).  Trouble is, the previous measurements were also highly accurate.  The significant difference between the two types of measurement was the choice of interaction particle: in the traditional case, electrons, and in Pohl’s case, muons.
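A quick check on that figure, using the two numbers above: (0.877 − 0.841) / 0.877 ≈ 0.041, or right around 4%.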

Figures have been checked and rechecked and both types of measurements are solid.  All sorts of crazy explanations have been offered up for the discrepancy, but one thing seems certain: we don’t really understand matter.

Ancient Greeks thought that atoms were indivisible (hence, the name), at least until Rutherford showed otherwise in the early 1900s.  Ancient 20th-century scientists thought that protons were indivisible, at least until Gell-Mann showed otherwise in the 1960s.

So why would it be such a surprise that the measured size of a proton varies with the type of lepton cloud that surrounds and passes through it?  Maybe the proton is flexible, like a sponge, and a muon, at 200 times the mass of an electron, exerts a much higher contractive force on it – gravity, strong nuclear, Jedi, or what have you.  Just make the measurements and modify your theory, guys.  You’ll be .000001% closer to the truth, enough to warrant an even bigger publicly funded particle accelerator.

If particle sizes and masses aren’t invariant, who is to say that they don’t change over time?  Cosmologist Christof Wetterich of the University of Heidelberg thinks this might be possible.  In fact, says Wetterich, if particle masses are slowly increasing, the universe may not be expanding after all.  His recent paper suggests that spectral redshift – Hubble’s famous discovery at Mount Wilson that led to the most widely accepted theory of the origin of the universe, the big bang – may actually be due to particle masses changing over time.  So far, no one has been able to shoot a hole in his theory.

Oops.  “Remember what we said about the big bang being a FACT?  Never mind.”

Flexi-particles.  Now there is evidence, and there are major philosophical repercussions.

And still, The Universe – Solved! predicts there is no stuff.

The ultimate in flexibility is pure data.


Ever Expanding Horizons

Tribal Era

Imagine the human world tens of thousands of years ago.  A tribal community lived together, farming, hunting, trading, and taking care of each other.  There was plenty of land to support the community and, as long as there were no strong forces driving them to move, they stayed where they were, content.  As far as they knew, “all that there is” was just that community and the land that was required to sustain it.  We might call this the Tribal Era.

Continental Era

But, at some point, for whatever reason – drought, restlessness, desire for a change of scenery – another tribe moved into the first tribe’s territory.  For the first time, that tribe realized that the world was bigger than their little community.  In fact, upon a little further exploration, they realized that the boundaries of “all that there is” just expanded to the continent on which they lived, and there was a plethora of tribes in this new greater community.  The horizon of their reality just reached a new boundary and their community was now a thousand fold larger than before.

Planetary Era

According to researchers, the first evidence of cross-oceanic exploration was about 9000 years ago.  Now, suddenly, this human community may have been subject to an invasion of an entirely different race of people with different languages coming from a place that was previously thought to not exist.  Again, the horizon expands and “all that there is” reaches a new level, one that consists of the entire planet.

Solar Era

The Ancient Greek philosophers and astronomers recognized the existence of other planets.  Gods were thought to have come from the sun or elsewhere in the heavens, which consisted of a celestial sphere that wasn’t too far away from the surface of our planet.

Imaginations ran wild as horizons expanded once again.

Galactic Era

In 1610, Galileo looked through his telescope and suddenly humanity’s horizon expanded by another level.  Not only did the other planets resemble ours, but it was clear that the sun was the center of the known universe, stars were extremely far away, there were strange distant nebulae that were more than nearby clouds of debris, and the Milky Way consisted of distant stars.  In other words, “all that there is” became our galaxy.

Universal Era

A few centuries later, in 1923, it was time to expand our reality horizon once again, as the 100-inch telescope at Mount Wilson revealed that some of those fuzzy nebulae were actually other galaxies.  The concept of deep space and “Universe” was born, and new measurement techniques courtesy of Edwin Hubble showed that “all that there is” was actually billions of times more than previously thought.

Multiversal Era

These expansions of “all that there is” are happening so rapidly now that we are still debating the details of one worldview while exploring the next and being introduced to yet another.  Throughout the latter half of the 20th century, a variety of ideas were put forth that expanded our reality horizon to the concept of many (some said infinite) parallel universes.  The standard inflationary big bang theory allowed for multiple Hubble volumes of universes that are theoretically within our same physical space, but unobservable due to the limitations of the speed of light.  Bubble universes, MWI, and many other theories exist but lack any evidence.  In 2003, Max Tegmark framed all of these nicely in his concept of 4 levels of Multiverse.

I sense one of those feelings of acceleration with respect to the entire concept of expanding horizons, as if our understanding of “all that there is” is growing exponentially.  I was curious to see how exponential it actually was, so I took the liberty of plotting each discrete step in our evolution of awareness of “all that there is” on a logarithmic plot and guess what?

Almost perfectly exponential! (see below)

[Figure: logarithmic plot of the scale of “all that there is” at each era, showing near-exponential growth]

Dramatically, the trend points to a new expansion of our horizons in the past 10 years or so.  Could there really be something beyond a multiverse of infinitely parallel universes?  And has such a concept recently been put forth?

Indeed there is and it has.  And, strangely, it isn’t even something new.  For millennia, the spiritual side of humanity has explored non-physical realities; Shamanism, Heaven, Nirvana, Mystical Experiences, Astral Travel.  Our Western scientific mentality that “nothing can exist that cannot be consistently and reliably reproduced in a lab” has prevented many of us from accepting these notions.  However, there is a new school of thought that is based on logic, scientific studies, and real data (if your mind is open), as well as personal knowledge and experience.  Call it digital physics (Fredkin), digital philosophy, simulation theory (Bostrom), programmed reality (yours truly), or My Big TOE (Campbell).  Tom Campbell and others have taken the step of incorporating into this philosophy the idea of non-material realms.  Which is, in fact, a new expansion of “all that there is.”  While I don’t particularly like the term “dimensional”, I’m not sure that we have a better descriptor.

Interdimensional Era

Or maybe we should just call it “All That There Is.”

At least until a few years from now.

Grand Unified Humanity Theory

OK, maybe this post is going to be a little silly – apologies in advance.  I’m in that kind of mood.

Physicists recently created a fascinating concoction – a Bose-Einstein condensate (BEC) that was stable at a temperature 50% higher than critical.  Check out this phys.org article with the deets.  In this bizarre state of matter, all particles act in unison, entangled, as if they were collectively a single particle.  Back in Einstein’s day, BECs were envisioned to be composed of bosons.  Later, theory predicted and experiments demonstrated condensates of whole atoms and, ultimately, of paired fermions.

A comparison is made to an analogous process of getting highly purified water to exist at temperatures above the boiling point.  It seems that phase transitions of various types can be pushed beyond their normal critical point if the underlying material is “special” in some way – pure, balanced, coherent.

Superfluids.  Laser light.

It reminds me of the continuous advances in achieving superlative or “perfect” conditions, like superconductivity (zero resistance) at temperatures closer and closer to room temperature.  I then think of a characteristic that new agers ascribe to physical matter – “vibrational levels.”

Always connecting dots, sometimes finding connections that shouldn’t exist.

Given the trend of achieving purity, alignment, and coherence at conditions and scales ever closer to “normal,” might we someday see entangled complex molecules, like proteins?  BECs of DNA strands?

Why stop there?  Could I eventually be my own BEC?  A completely coherent vibrationally-aligned entity?  Cool.  I’ll bet I would be transparent and could walk through doors.

And what if science could figure out how to create a BEC out of all living things?  Nirvana.  Reconnecting with the cosmic consciousness.

Grand Unified Humanity Theory.

Einstein Would Have Loved Programmed Reality

Aren’t we all Albert Einstein fans, in one way or another?  If it isn’t because of his 20th Century revolution in physics (relativity), or his Nobel Prize that led to that other 20th Century revolution (quantum mechanics), or his endless Twainsian witticisms, it’s his underachiever-turned-genius story, or maybe even that crazy head of hair.  For me, it’s his regular-guy sense of humor:

“The hardest thing in the world to understand is the income tax.”

and…

“Put your hand on a hot stove for a minute, and it seems like an hour. Sit with a pretty girl for an hour, and it seems like a minute. THAT’S relativity.”

Albert Einstein on a bicycle in Niels Bohr's garden

But, the more I read about Albert and learn about his views on the nature of reality, the more affinity I have with his way of thinking.  He died in 1955, hardly deep enough into the digital age to have had a chance to consider the implications of computing, AI, consciousness, and virtual reality.  Were he alive today, I suspect that he would be a fan of digital physics, digital philosophy, simulism, programmed reality – whatever you want to call it.  Consider these quotes and see if you agree:

“Reality is merely an illusion, albeit a very persistent one.”

“I wished to show that space-time isn’t necessarily something to which one can ascribe a separate existence, independently of the actual objects of physical reality. Physical objects are not in space, but these objects are spatially extended. In this way the concept of ‘empty space’ loses its meaning.”

“As far as the laws of mathematics refer to reality, they are uncertain; and as far as they are certain, they do not refer to reality.”

“A human being is part of a whole, called by us the ‘Universe’ —a part limited in time and space. He experiences himself, his thoughts, and feelings, as something separated from the rest—a kind of optical delusion of his consciousness. This delusion is a kind of prison for us, restricting us to our personal desires and to affection for a few persons nearest us. Our task must be to free ourselves from this prison by widening our circles of compassion to embrace all living creatures and the whole of nature in its beauty.”

“Space does not have an independent existence.”

“Hence it is clear that the space of physics is not, in the last analysis, anything given in nature or independent of human thought.  It is a function of our conceptual scheme [mind].”

“Every one who is seriously involved in the pursuit of science becomes convinced that a spirit is manifest in the laws of the Universe – a spirit vastly superior to that of man, and one in the face of which we with our modest powers must feel humble.”

I can only imagine the insights that Albert would have had into the mysteries of the universe, had he lived well into the computer age.  It would have given him an entirely different perspective on that conundrum that puzzled him throughout his later life – the relationship of consciousness to reality.  And he might have even tossed out the Unified Field Theory that he was forever chasing and settled in on something that looked a little more digital.

 

Bizarro Physics

All sorts of oddities emerge from equations that we have developed to describe reality.  What is surprising is that rather than being simply mathematical artifacts, they actually show up in our physical world.

Perhaps the first such bizarro (see DC Comics) entity was antimatter: matter with opposite charge and other inverted quantum properties.  A mathematical solution to Paul Dirac’s relativistic version of Schrödinger’s equation (it makes my head hurt just looking at it), antimatter was discovered 4 years after Dirac predicted it.

One of last year’s surprises was the discovery that the negative-frequency solutions to Maxwell’s equations reveal themselves in components of light.

And, earlier this month, German physicists announced the ability to create a temperature below absolute zero.

So when we were told in physics class to throw out those “negative” or imaginary solutions to equations because they had no basis in reality…uh, not so fast.

What I find interesting about these discoveries is the implications for the bigger picture.  If our reality were what most of us think it is – 3 dimensions of space, with matter and energy following the rules set forth by the “real” solutions to the equations of physics – one might say that reality trumps the math; that solutions to equations only make sense in the context of describing reality.

However, it appears to be the other way around – math trumps reality.  Solutions to equations previously thought to be in the “imaginary domain” are now being shown to manifest in our reality.

This is one more category of evidence that underlying our apparent reality are data and rules.  The data and rules don’t manifest from the reality; they create the reality.


Complexity from Simplicity – More Support for a Digital Reality

Simple rules can generate complex patterns or behavior.

For example, consider the following simple rules that, when programmed into a computer, can result in beautiful complex patterns akin to a flock of birds:

1. Steer to avoid crowding local flockmates (separation)
2. Steer towards the average heading of local flockmates (alignment)
3. Steer to move toward the average position (center of mass) of local flockmates (cohesion)

The pseudocode here demonstrates the simplicity of the algorithm, and “Boids”, a flocking behavior simulator developed by Craig Reynolds, shows it in action on YouTube.
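Here is a minimal, headless Python sketch of those three rules; the class, weights, and neighborhood radius are my own arbitrary choices rather than Reynolds’ original parameters:

```python
import math, random

class Boid:
    def __init__(self):
        self.x, self.y = random.uniform(0, 100), random.uniform(0, 100)
        self.vx, self.vy = random.uniform(-1, 1), random.uniform(-1, 1)

def step(boids, radius=15.0, sep_w=0.05, ali_w=0.05, coh_w=0.005):
    updates = []
    for b in boids:
        near = [o for o in boids
                if o is not b and math.hypot(o.x - b.x, o.y - b.y) < radius]
        dvx = dvy = 0.0
        if near:
            n = len(near)
            # 1. separation: steer away from crowding flockmates
            dvx += sep_w * sum(b.x - o.x for o in near) / n
            dvy += sep_w * sum(b.y - o.y for o in near) / n
            # 2. alignment: steer toward the average heading
            dvx += ali_w * (sum(o.vx for o in near) / n - b.vx)
            dvy += ali_w * (sum(o.vy for o in near) / n - b.vy)
            # 3. cohesion: steer toward the local center of mass
            dvx += coh_w * (sum(o.x for o in near) / n - b.x)
            dvy += coh_w * (sum(o.y for o in near) / n - b.y)
        updates.append((dvx, dvy))
    for b, (dvx, dvy) in zip(boids, updates):
        b.vx += dvx
        b.vy += dvy
        b.x += b.vx
        b.y += b.vy

flock = [Boid() for _ in range(50)]
for _ in range(200):
    step(flock)   # the flock drifts into coherent clumps over time
```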

Or consider fractals.  The popular Mandelbrot set can be generated with some simple rules, as demonstrated here in 13 lines of pseudocode, resulting in beautiful pictures like this:

http://upload.wikimedia.org/wikipedia/commons/thumb/a/a4/Mandel_zoom_11_satellite_double_spiral.jpg/800px-Mandel_zoom_11_satellite_double_spiral.jpg
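The linked pseudocode runs about 13 lines; a comparably tiny Python sketch of the same escape-time idea (resolution and iteration limit are my arbitrary choices) renders a crude ASCII Mandelbrot set:

```python
# ASCII Mandelbrot via the classic escape-time iteration z -> z*z + c.
WIDTH, HEIGHT, MAX_ITER = 80, 32, 40
for row in range(HEIGHT):
    line = ""
    for col in range(WIDTH):
        c = complex(-2.5 + 3.5 * col / WIDTH, -1.25 + 2.5 * row / HEIGHT)
        z, n = 0j, 0
        while abs(z) <= 2 and n < MAX_ITER:
            z = z * z + c
            n += 1
        line += "*" if n == MAX_ITER else " "   # '*' = inside the set
    print(line)
```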

Fractals can be used to generate artificial terrain for video games and computer art, such as this 3D mountain terrain generated by the software Terragen:

Terragen-generated mountain terrain

Conway’s Game of Life uses the idea of cellular automata to generate little 2D pixelated creatures that move, spawn, die, and generally exhibit crude lifelike behavior with 2 simple rules:

1. A live cell with fewer than 2 or more than 3 neighbors dies.
2. A dead cell with exactly 3 neighbors comes alive.

Depending on the starting conditions, there may be any number of recognizable resulting simulated organisms; some simple, such as gliders, pulsars, blinkers, glider guns, and wickstretchers, and some complex, such as puffer trains, rakes, spaceship guns, Corderships, and even objects that appear to travel faster than the maximum propagation speed of the game should allow.  A minimal implementation of the rules appears below.
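Here is a hedged sketch of those two rules in Python; the set-based representation and the glider coordinates are my choices:

```python
from collections import Counter

def life_step(live):
    """One generation: `live` is a set of (x, y) coordinates of live cells."""
    counts = Counter((x + dx, y + dy)
                     for (x, y) in live
                     for dx in (-1, 0, 1) for dy in (-1, 0, 1)
                     if (dx, dy) != (0, 0))
    # Rule 2: a dead cell with exactly 3 neighbors comes alive.
    # Rule 1, restated: a live cell survives only with 2 or 3 neighbors.
    return {cell for cell, n in counts.items()
            if n == 3 or (n == 2 and cell in live)}

# A glider: after 4 generations it is the same shape, shifted diagonally.
glider = {(1, 0), (2, 1), (0, 2), (1, 2), (2, 2)}
for _ in range(4):
    glider = life_step(glider)
print(sorted(glider))  # -> the original coordinates, each offset by (1, 1)
```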

Cellular automata can be extended to 3D space.  The following video demonstrates a 3D “Amoeba” that looks eerily like a real blob of living protoplasm:

What is the point of all this?

Just that you can apply some of these ideas to the question of whether reality is continuous or digital (and thus based on bits and rules).  And end up with an interesting result.

Consider a hierarchy of complexity levels…

Imagine that each layer is 10 times “zoomed out” from the layer below.  If the root simplicity is at the bottom layer, one might ask how many layers up you have to go before the patterns appear to be natural, as opposed to artificial? [Note: As an aside, we are confusing ideas like natural and artificial.  Is there really a difference?]

The following image is an artificial computer-generated fractal image created by Softology’s “Visions of Chaos” software from a base set of simple rules, yet zoomed out from its base level by, perhaps, six orders of magnitude:

[Image: hybrid Mandelbulb fractal rendered by Visions of Chaos]

In contrast, the following image is an electron-microscope image of a real HPV virus:

[Image: electron micrograph of an HPV virus]

So, clearly, at six orders of magnitude out from a fundamental rule set, we start to lose the ability to discern “natural” from “artificial.”  Eight orders of magnitude should be sufficient to make natural indistinguishable from artificial.

And yet, our everyday sensory experience is about 36 orders of magnitude above the quantum level.

The deepest level that our instruments can currently image is about 7 levels (10,000,000x magnification) below everyday reality.  This means that if our reality is based on bits and simple rules like those described above, those rules may be operating 15 or more levels below everyday reality.  Given that the quantum level is 36 levels down, we have at least 21 orders of magnitude to play with.  In fact, it may very well be possible that the true granularity of reality is below the quantum level.

In any case, it should be clear that we are not even close to being equipped to visually discern the difference between living in a continuous world and living in a digital one consisting of bits and rules.

My Body, the Avatar

Have you ever wondered how much information the human brain can store?  A little analysis reveals some interesting data points…

The human brain contains an estimated 100 trillion synapses.  There doesn’t appear to be a finer level of structure to the neural cells, so this represents the maximum number of memory elements that a brain can hold.  Assume for a moment that each synapse can hold a single bit; then the brain’s capacity would be 100 trillion bits, or about 12.5 terabytes. There may be some argument that there is actually a distribution of brain function, or redundancy of data storage, which would reduce the memory capacity of the brain.  On the other hand, one might argue that synapses may not be binary and hence could hold somewhat more information.  So it seems that 12.5 TB is a fairly good and conservative estimate.

It has also been estimated (see “On the Information Processing Capabilities of the Brain: Shifting the Paradigm” by Simon Berkovich) that, in a human lifetime, the brain processes 3 million times that much data.  This all makes sense if we assume that most (99.99997%) of our memory data is discarded over time, due to lack of need.
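A quick back-of-the-envelope check of those numbers; the variable names are mine, and the inputs are just the estimates quoted above:

```python
synapse_bits = 100e12               # ~100 trillion synapses, 1 bit each
brain_tb = synapse_bits / 8 / 1e12  # -> 12.5 terabytes of storage
lifetime_tb = brain_tb * 3e6        # Berkovich's lifetime-processing figure
retained = brain_tb / lifetime_tb   # fraction that could fit in the brain
print(brain_tb)                     # 12.5
print(f"{1 - retained:.5%}")        # 99.99997% of it must be discarded
```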

But then, how would we explain the exceptional capabilities of autistic savants, or people with hyperthymesia or eidetic memory (total recall)?  It would have to be such that the memories that these individuals retrieve cannot all be stored in the brain at the same time.  In other words, memories, or the record of our experiences, are not solely stored in the brain.  Some may be, such as those most recently used, or frequently needed.

Those who are trained in Computer Science will recognize the similarities between these characteristics and the idea of a cache memory, a high-speed storage device that stores the most recently used, or frequently needed, data for quick access.
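For non-programmers, here is a toy sketch of that idea in Python; the class and its names are mine, and it is offered as an analogy, not a model of neural hardware:

```python
from collections import OrderedDict

class LRUCache:
    """Keeps only the most recently used items in fast local storage;
    anything evicted must be fetched from slower, 'non-local' storage."""
    def __init__(self, capacity):
        self.capacity = capacity
        self.items = OrderedDict()

    def get(self, key):
        if key in self.items:
            self.items.move_to_end(key)     # mark as recently used
            return self.items[key]
        return None                         # cache miss: fetch remotely

    def put(self, key, value):
        self.items[key] = value
        self.items.move_to_end(key)
        if len(self.items) > self.capacity:
            self.items.popitem(last=False)  # evict least recently used

cache = LRUCache(2)
cache.put("breakfast", "toast")
cache.put("meeting", "9am")
cache.put("song stuck in head", "...")      # evicts "breakfast"
print(cache.get("breakfast"))               # -> None (gone from the cache)
```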

As cardiologist and science researcher Pim van Lommel said, “the computer does not produce the Internet any more than the brain produces consciousness.”

Why is this so hard to believe?

After all, there is no real proof that all memories are stored in the brain.  There is only research that shows that some memories are stored in the brain and can be triggered by electrically stimulating certain portions of the cerebral cortex.  By the argument above, I would say that experimental evidence and logic are on the side of non-local memory storage.

In a similar manner, while there is zero evidence that consciousness is an artifact of brain function, Dr. van Lommel has shown that there is extremely strong evidence that consciousness is not a result of brain activity.  It is enabled by the brain, but not seated there.

These two arguments – the non-local seat of consciousness and the non-local seat of memories are congruent and therefore all the more compelling for the case that our bodies are simply avatars.

Things We Can’t Feel – The Mystery Deepens

In my last blog “Things We Can’t See”, we explored the many different ways that our eyes, brains, and/or technology can fool us into seeing something that isn’t there or not seeing something that is.

So apparently, our sense of sight is not necessarily the most reliable sense in terms of identifying what is and isn’t in our objective reality.  We would probably suspect that our sense of touch is fairly foolproof; that is, if an object is “there”, we can “feel” it, right?

Not so fast.

First of all, we have a lot of the same problems with the brain as we did with the sense of sight.  The brain processes all of that sensory data from our nerve endings.  How do we know what the brain really does with that information?  Research shows that sometimes your brain can think that you are touching something that you aren’t or vice versa.  People who have lost limbs still have sensations in their missing extremities.  Hypnosis has been shown to have a significant effect in terms of pain control, which seems to indicate the mind’s capacity to override one’s tactile senses.  And virtual reality experiments have demonstrated the ability for the mind to be fooled into feeling something that isn’t there.

In addition, technology can be made to create havoc with our sense of touch, although the most dramatic of such effects are decades into the future.  Let me explain…

Computer Scientist J. Storrs Hall developed the concept of a “Utility Fog.”  Imagine a “nanoscopic” object called a Foglet, which is an intelligent nanobot, capable of communicating with its peers and having arms that can hook together to form larger structures.  Trillions of these Foglets could conceivably fill a room and not be at all noticeable as long as they were in “invisible mode.”  In fact, not only might they be programmed to appear transparent to the sight, but they may be imperceptible to the touch.  This is not hard to imagine, if you allow that they could have sensors that detect your presence.  For example, if you punch your fist into a swarm of nanobots programmed to be imperceptible, they would sense your motion and move aside as you swung your fist through the air.  But at any point, they could conspire to form a structure – an impenetrable wall, for example.  And then your fist would be well aware of their existence.  In this way, technology may be able to have a dramatic effect on our complete ability to determine what is really “there.”


But even now, long before nanobot swarms are possible, the mystery really begins, as we have to dive deeply into what is meant by “feeling” something.

Feeling is the result of a part of our body coming in contact with another object.  That contact is “felt” by the interaction between the molecules of the body and the molecules of the object.

Even solid objects are mostly empty space.  If subatomic particles, such as neutrons, are made of solid mass, like little billiard balls, then 99.999999999999% of normal matter would still be empty space.  That is, of course, unless those particles themselves are not really solid matter, in which case, even more of space is truly empty, more about which in a bit.

So why don’t solid objects like your fist slide right through other solid objects like bricks?  Because of the repulsive effect that the electromagnetic force from the electrons in the fist apply against the electromagnetic force from the electrons in the brick.

But what about that neutron?  What is it made of?  Is it solid?  Is it made of the same stuff as all other subatomic particles?

The leading theories of matter do not favor the idea that subatomic particles are like little billiard balls of differing masses.  For example, string theorists speculate that all particles are made of the same stuff; namely, vibrating bits of string.  Except that they each vibrate at different frequencies.  Problem is, string theory is purely theoretical and really falls more in the mathematical domain than the scientific domain, inasmuch as there is no supporting evidence for the theory.  If it does turn out to be true, even the neutron is mostly empty space because the string is supposedly one-dimensional, with a theoretical cross section of a Planck length.

Here’s where it gets really interesting…

Neutrinos are an extremely common yet extremely elusive particle of matter.  About 100 trillion neutrinos generated in the sun pass through our bodies every second.  Yet they barely interact at all with ordinary matter.  Neutrino capture experiments consist of configurations such as a huge underground tank containing 100,000 gallons of tetrachloroethylene buried nearly a mile below the surface of the earth.  100 billion neutrinos strike every square centimeter of the tank per second.  Yet any particular molecule of tetrachloroethylene is likely to interact with a neutrino only once every 10^36 seconds (billions of billions of times the age of the universe).

The argument usually given for the neutrino’s elusiveness is that they are nearly massless (and therefore not easily captured by a nucleus) and charge-less (and therefore not subject to the electromagnetic force).  Then again, photons are massless and charge-less and are easily captured, to which anyone who has spent too much time in the sun can attest.  So there has to be some other reason that we can’t detect neutrinos.  Unfortunately, given the current understanding of particle physics, no good answer is forthcoming.

And then there is dark matter.  This concept is the current favorite explanation for some anomalies around the orbital speeds of galaxies.  The gravity of visible matter can’t explain the anomalies, so dark matter is inferred.  If it really exists, it represents about 83% of the mass in the universe, but doesn’t interact with any of the known forces except gravity.  This means that dark matter is all around us; we just can’t see it or feel it.

So it seems that modern physics allows for all sorts of types of matter that we can’t see or feel.  When you get down to it, the reason for this is that we don’t understand what matter is at all.  According to the standard model of physics, particles should have no mass, unless there is a special quantum field that pervades the universe and gives rise to mass upon interacting with those particles.  Unfortunately, for that to have any credibility, the signature particle, the Higgs boson, would have to exist.  Thus far, it seems to be eluding even the most powerful of particle colliders.  One alternative theory of matter has it being an emergent property of particle fluctuations in the quantum vacuum.

For a variety of reasons, some of which are outlined in “The Universe – Solved!” and many others which have come to light since I wrote that book, I suspect that ultimately matter is simply a property of an entity that is described purely by data and a set of rules, driven by a complex computational mechanism.  Our attempt to discover the nature of matter is synonymous with our attempt to discover those rules and associated fundamental constants (data).

In terms of other things that we can’t perceive, new age enthusiasts might call out ghosts, spirits, auras, and all sorts of other mysterious invisible and tenuous entities.


Given that we know that things exist that we can’t perceive, one has to wonder if it might be possible for macroscopic objects, or even macroscopic entities that are driven by similar energies as humans, to be made from stuff that we can only tenuously detect, not unlike neutrinos or dark matter.  Scientists speculate about multiple dimensions and parallel universes via Hilbert Space and other such constructs.  If such things exist (and wouldn’t it be hypocritical of anyone to speculate or work out the math for such things if it weren’t possible for them to exist?), the rules that govern our interaction with them, across the dimensions, are clearly not at all understood.  That doesn’t mean that they aren’t possible.

In fact, the scientific world is filled with trends leading toward the implication of an information-based reality.

In which almost anything is possible.