Macroscopic Coherence Explained

Coherence is a general property of a system whereby the components of that system all act in a similar manner. Coherent light is what makes lasers what they are – an alignment of the photons’ waveform phases (why cats chase them is a little harder to explain). Superconductivity, the property of zero resistance to electrical flow, formerly observed only at temperatures near absolute zero, is closely related, in that the atoms of the superconducting material are aligned coherently. Quantum entanglement is an example of perfect coherence between two or more particles, in that they act as a single particle no matter how far away from each other you take them. Einstein famously referred to this property as “spooky action at a distance.” The Bose-Einstein condensate is another state of matter, existing at extremely low temperatures, in which all of the particles in a system have achieved the lowest quantum state and are hence coherent.

Over the years, clever experimental scientists have pushed the boundaries of coherence from extreme cryogenics and quantum scales to room temperatures and macroscopic scales. Author and fellow truth seeker Anthony Peake posted an article today about experiments at various research institutes demonstrating how the contents of liquid containers connected by arbitrarily thin channels exhibit “action at a distance” macroscopically.

Once again, such anomalies have scientists scratching their heads for explanations; that is, scientists who cling to the never-proven pre-assumed dogma of objective materialism. Entanglement and macroscopic action at a distance find no home in this religion.

However, over here at “Consciousness-based Digital Reality” Central, we enjoy the simplicity of fitting such anomalies into our model of reality. 🙂

It all follows from three core ideas:

  1. All matter is ultimately composed of data (“it from bit,” as John Wheeler would say), and forces are simply the rules by which the complex data structures that form particles interact with each other.
  2. Consciousness, which is also organized data, interacts with the components of reality according to other rules of the overall system (this greater System being “reality,” “the universe,” God, “all that there is,” or whatever you want to call it).
  3. The System evolves according to what Tom Campbell calls the “Fundamental Rule.” Much like biological evolution, the system changes state in the direction of more profitable or useful states and away from less useful ones.

Because of #3, our system has evolved to be efficient and would likely not be wasteful. So, when an observer observes (consciousness interacts with) a pair of particles in proximity to each other, the system sets their states (collapsing the wave function) and the rules of their behavior (a finite state machine) to be coherent, simply out of efficiency. That is, each particle is set to the same finite state machine and forever behaves that way no matter how far apart you take the two (distance being a virtual concept in a virtual digital world).
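As a toy illustration of that idea (a pure sketch in Python; the class and state names are mine, not anything from physics), two “particles” bound to the same state machine necessarily agree, however “far apart” they are:

```python
import random

class SharedStateMachine:
    """One state machine whose data is shared by every 'particle' bound to it."""
    def __init__(self):
        self.state = None  # undefined until first observation

    def observe(self):
        # First observation "collapses" the state; later ones just read it.
        if self.state is None:
            self.state = random.choice(["up", "down"])
        return self.state

class Particle:
    def __init__(self, machine):
        self.machine = machine  # "distance" is irrelevant: same data structure

    def measure(self):
        return self.machine.observe()

# "Entangle" two particles by binding them to the same machine.
fsm = SharedStateMachine()
a, b = Particle(fsm), Particle(fsm)
print(a.measure() == b.measure())  # True, no matter how far apart they "are"
```

The design point is that no message passes between the particles; they are simply views onto one shared variable space.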

So what prevents the same logic from applying to macroscopic collections of coherent particles? Nothing. In fact, it is inevitable. These clever scientists have learned methods to establish a coherent, identical quantum state across huge (aka macroscopic) quantities of particles. At the point at which the experimenter creates this state and observes it, the system establishes the state machines for all of the particles at once, since they are all to be in the same quantum state. And so we get room-temperature superconductivity and macroscopic containers of liquid that demonstrate non-locality.

carl

Quantum Zeno Effect Solved

Lurking amidst the mass chaos of information that exists in our reality is a little gem of a concept called the Quantum Zeno Effect.  It is partially named after ancient Greek philosopher Zeno of Elea, who dreamed up a number of paradoxes about the fluidity of motion and change.  For example, the “Arrow Paradox” explores the idea that if you break down time into “instants” of zero duration, motion cannot be observed.  Thus, since time is composed of a set of instants, motion doesn’t truly exist.  We might consider Zeno to have been far ahead of his time as he appeared to be thinking about discrete systems and challenging the continuity of space and time a couple thousand years before Alan Turing resurrected the idea in relation to quantum mechanics: “It is easy to show using standard theory that if a system starts in an eigenstate of some observable, and measurements are made of that observable N times a second, then, even if the state is not a stationary one, the probability that the system will be in the same state after, say, one second, tends to one as N tends to infinity; that is, that continual observations will prevent motion …”.  The term “Quantum Zeno Effect” was first used by physicists George Sudarshan and Baidyanath Misra in 1977 to describe just such a system – one that does not change state because it is continuously observed.

The challenge with this theory has been in devising experiments that can verify or falsify it.  However, technology has caught up to philosophy and, over the last 25 years, a number of experiments have been performed which seem to validate the effect.  In 2001, for example, physicist Mark Raizen and a team at the University of Texas showed that the effect is indeed real and the transition of states in a system can be either slowed down or sped up simply by taking measurements of the system.

I have enjoyed making a hobby of fully explaining quantum mechanics anomalies with the programmed reality theory.   Admittedly, I don’t always fully grasp some of the deep complexities and nuances of the issues that I am tackling, due partly to the fact that I have a full-time job that has naught to do with this stuff, and partly to the fact that my math skills are a bit rusty, but thus far, it doesn’t seem to make a difference.  The more I dig into each issue, the more I find things that simply support the idea that we live in a digital (and programmed) reality.

The quantum Zeno effect might not be observed in every case; it only works for non-memoryless processes.  Exponential decay, for instance, is memoryless: frequent observation of a particle undergoing radioactive decay would not affect the result.  [As an aside, I find it very interesting that a “memoryless system” invokes the idea of a programmatic construct.  Perhaps with good reason…]
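The memoryless property can be checked directly: for exponential decay, the probability of surviving another second is the same whether or not the particle has already been watched surviving for two. A quick sketch (the decay rate is chosen arbitrarily for illustration):

```python
import math

def survival(t, lam=0.5):
    """P(no decay by time t) for exponential decay with rate constant lam."""
    return math.exp(-lam * t)

# Memoryless: having survived to time s tells you nothing new about the future.
s, t = 2.0, 1.0
conditional = survival(s + t) / survival(s)    # P(T > s+t | T > s)
print(abs(conditional - survival(t)) < 1e-12)  # True: observation changes nothing
```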

A system with memory, or “state”, however, is, in theory, subject to the quantum Zeno effect.  It will manifest itself by appearing to reset the experiment clock every time an observation is made of the state of the system.  The system under test will have a characteristic set of changes that vary over time.  In the case of the University of Texas experiment, trapped ions tended to remain in their initial state for a brief interval before beginning to change state via quantum tunneling, according to some probability function.  For the sake of developing a clear illustration, let’s imagine a process whereby a particle remains in its initial quantum state (let’s call it State A) for 2 seconds before probabilistically decaying to its final state (B) according to a linear function over the next second.  Figure A shows the probability of finding the particle in State A as a function of time.  For the first 2 seconds, of course, it has a 0% probability of changing state, and between 2 and 3 seconds it has an equal probability of moving to State B at any point in time.  A system with this behavior, left on its own and measured at any point after 3 seconds, will be in State B.

[Figure A: probability of finding the particle in State A as a function of time]

What happens, however, when you make a measurement of that system, to check and see if it changed state, at t=1 second?  Per the quantum Zeno effect, the experiment clock will effectively be reset and now the system will stay in State A from t=1 to t=3 and then move to state B at some point between t=3 and t=4.  If you make another measurement of the system at t=1, the clock will again reset, delaying the behavior by another second.  In fact, if you continue to measure the state of the system every second, it will never change state.  Note that this has absolutely nothing to do with the physical impact of the measurement itself; a 100% non-intrusive observation will have exactly the same result.
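To make the reset concrete, here is a toy simulation (in Python; the function names and the 2-to-3-second decay profile are just the illustration above, not any real experiment) in which each observation that finds the system still in State A restarts its clock:

```python
import random

def transition_time():
    """Time (after the last reset) at which the system leaves State A:
    guaranteed to hold for 2 s, then decaying uniformly over the next second."""
    return 2.0 + random.random()

def state_at(t_query, measure_every=None):
    """Return the state at t_query, optionally measuring at fixed intervals.
    Each measurement that finds the system in State A resets its clock."""
    clock_start = 0.0
    decay_at = clock_start + transition_time()
    t = 0.0
    while measure_every and t + measure_every < t_query:
        t += measure_every
        if t < decay_at:            # observed still in A: the clock resets
            clock_start = t
            decay_at = clock_start + transition_time()
    return "A" if t_query < decay_at else "B"

print(state_at(10.0))                     # unobserved: decayed to "B" long ago
print(state_at(10.0, measure_every=1.0))  # measured every second: still "A"
```

Since every measurement lands one second after the previous reset, and a reset guarantees two more seconds in State A, the observed system can never decay.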

Also note that it isn’t that the clock doesn’t reset for a memoryless system, but rather that it doesn’t matter, because you cannot observe any difference.  One may argue that if you make observations at the Planck frequency (one per jiffy), even a memoryless system might never change state.  This actually approaches the true nature of Zeno’s arguments, but that is a topic for another essay, one that is much more philosophical than falsifiable.  In fact, “Quantum Zeno Effect” is a misnomer.  The non-memoryless system described above really has little to do with the ad infinitum inspection of Zeno’s paradoxes, but we are stuck with the name.  And I digress.

So why would this happen?

It appears to be related in some way to the observer effect and to entanglement:

  • Observer Effect – Once observed, the state of a system changes.
  • Entanglement – Once observed, the states of multiple particles (or, rather, the state of a system of multiple particles) are forever connected.
  • Quantum Zeno – Once observed, the state of a system is reset.

What is common to all three of these apparent quantum anomalies is the coupling of the act of observation with the concept of a state.  For the purposes of this discussion, it will be useful to invoke the computational concept of a finite state machine, which is a system that changes state according to a set of logic rules and some input criteria.
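For readers unfamiliar with the term, a finite state machine can be as simple as a lookup table of (state, input) → next-state rules. This minimal sketch (the states and inputs are invented to mirror the State A / State B example used earlier) shows the whole mechanism:

```python
# A minimal table-driven finite state machine: states, inputs, and a rule table.
RULES = {
    ("A", "tick"): "A",   # before its time, a clock tick leaves State A alone
    ("A", "decay"): "B",  # the 'decay' input moves the system to State B
    ("B", "tick"): "B",   # State B is final: nothing changes it
}

def step(state, event):
    """Apply one input; unknown (state, event) pairs leave the state unchanged."""
    return RULES.get((state, event), state)

state = "A"
for event in ["tick", "tick", "decay", "tick"]:
    state = step(state, event)
print(state)  # "B"
```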

I have explained the Observer effect and Entanglement as logical necessities of an efficient programmed reality system.  What about Quantum Zeno?  Why would it not be just as efficient to start the clock on a process and let it run, independent of observation?

A clue to the answer is that the act of observation appears to create something.

In the Observer effect, it creates the collapse of the probability wave function and establishes definitive properties of certain aspects of the system under observation (e.g. position).  This is not so much a matter of efficiency as of necessity: without probability, free will doesn’t exist; without free will, we can’t learn; and if the purpose of our system is to grow and evolve, then by necessity, observation must collapse probability.

In Entanglement, the act of observation may create the initiation of a state machine, which subsequently determines the behavior of the particles under test.  Those particles are just data, as I have shown, and the data elements are part of the same variable space of the state machine.  They both get updated simultaneously, regardless of the “virtual” distance between them.

So, in Quantum Zeno, the system under test is in probability space.  The act of observation “collapses” this initial probability function and kicks off the mathematical process by which future states are determined, based on the programmed probability function.  But that is now a second level of probability function; call it probability function 2.  Observing the system a second time must collapse probability function 2.  But to do so means that the system would now have to calculate a modified probability function 3 going forward – one that takes into account the fact that some aspect of the state machine has already been determined (e.g. the system has or hasn’t started its decay).  For non-memoryless systems, this could be an arbitrarily complex function (3), since it may take a different shape for every time at which the observation occurs.  A third measurement complicates the function even further, because even more states are ruled out.
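A sketch of why the conditional route gets messy (Python; the 2-to-3-second linear profile is the illustrative example from earlier, and the function names are mine): conditioning on “still in State A at the time of observation” produces a different curve for every observation time, whereas a reset just replays the original curve.

```python
def p_still_A(t):
    """Original survival probability: 1 until t=2, falling linearly to 0 over [2,3]."""
    if t <= 2.0:
        return 1.0
    if t >= 3.0:
        return 0.0
    return 3.0 - t

def p_still_A_given_survival(t, t_obs):
    """Survival probability conditioned on 'still in A' at t_obs (t_obs < 3):
    the shape of this curve depends on when the observation happened."""
    return p_still_A(max(t, t_obs)) / p_still_A(t_obs)

# The conditional curve differs for every observation time...
print(p_still_A_given_survival(2.75, t_obs=2.5))  # 0.5
# ...whereas a reset simply replays the original curve, shifted by t_obs:
print(p_still_A(2.75 - 2.5))  # 1.0
```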

On the other hand, given that the reality system has evolved toward efficiency, it would be far simpler to just reset the probability function each time an observation is made.

The only drawback to this algorithm is the fact that smart scientists are starting to notice these little anomalies, although the assumption here is that the reality system “cares.”  It may not.  Or perhaps that is why most natural processes are exponential, or memoryless – it is a further efficiency of the system.  Man-made experiments, however, don’t follow the natural process and may be designed to be arbitrarily complex, which ironically serves to give us this tiny little glimpse into the true nature of reality.

What we are doing here is inferring deep truths about our reality that are in fundamental conflict with the standard materialist view.  This will be happening more and more as time goes forward and physicists and philosophers will soon have no choice but to consider programmed reality as their ToE.


Flexi Matter

Earlier this year, a team of scientists at the Max Planck Institute of Quantum Optics, led by Randolf Pohl, made a highly accurate measurement of the radius of a proton and, at .841 fm, it turned out to be 4% less than previously determined (.877 fm).  Trouble is, the previous measurements were also highly accurate.  The significant difference between the two types of measurement was the choice of interaction particle: in the traditional case, electrons, and in Pohl’s case, muons.

Figures have been checked and rechecked and both types of measurements are solid.  All sorts of crazy explanations have been offered up for the discrepancy, but one thing seems certain: we don’t really understand matter.

Ancient Greeks thought that atoms were indivisible (hence, the name), at least until Rutherford showed otherwise in the early 1900s.  Ancient 20th-century scientists thought that protons were indivisible, at least until Gell-Mann showed otherwise in the 1960s.

So why would it be such a surprise that the diameter of a proton varies with the type of lepton cloud that surrounds and passes through it?  Maybe the proton is flexible, like a sponge, and a muon, at 200 times the weight of an electron, exerts a much higher contractive force on it – gravity, strong nuclear, Jedi, or what have you.  Just make the measurements and modify your theory, guys.  You’ll be .000001% closer to the truth, enough to warrant an even bigger publicly funded particle accelerator.

If particle sizes and masses aren’t invariant, who is to say that they don’t change over time?  Cosmologist Christof Wetterich of the University of Heidelberg thinks this might be possible.  In fact, says Wetterich, if particle masses are slowly increasing, the universe may not be expanding after all.  His recent paper suggests that spectral red shift – Hubble’s famous discovery at Mount Wilson that led to the most widely accepted theory of the universe, the big bang – may actually be due to particle masses changing over time.  So far, no one has been able to shoot a hole in his theory.

Oops.  “Remember what we said about the big bang being a FACT?  Never mind.”

Flexi-particles.  Now there is evidence, and there are major philosophical repercussions.

And still, The Universe – Solved! predicts there is no stuff.

The ultimate in flexibility is pure data.


Grand Unified Humanity Theory

OK, maybe this post is going to be a little silly – apologies in advance.  I’m in that kind of mood.

Physicists recently created a fascinating concoction – a Bose-Einstein condensate (BEC) that was stable at a temperature 50% higher than critical.  Check out this phys.org article with the deets.  In this bizarre state of matter, all particles act in unison, entangled, as if they were collectively a single particle.  Back in Einstein’s day, BECs were envisioned to be composed of fundamental bosons.  Later, theory predicted and experiments demonstrated condensates of whole atoms, and ultimately even of paired fermions.

A comparison is made to the analogous process of getting highly purified water to exist at temperatures above its boiling point.  It seems that phase transitions of various types can be pushed beyond their normal critical point if the underlying material is “special” in some way – pure, balanced, coherent.

Superfluids.  Laser light.

It reminds me of the continuous advances in achieving superlative or “perfect” conditions, like superconductivity (zero resistance) at temperatures ever closer to room temperature.  I then think of a characteristic that new agers ascribe to physical matter – “vibrational levels.”

Always connecting dots, sometimes finding connections that shouldn’t exist.

Given the trend of raising purity, alignment, and coherence in conditions closer and closer to “normal” transitions and scales, might we someday see entangled complex molecules, like proteins?  BECs of DNA strands?

Why stop there?  Could I eventually be my own BEC?  A completely coherent vibrationally-aligned entity?  Cool.  I’ll bet I would be transparent and could walk through doors.

And what if science could figure out how to create a BEC out of all living things?  Nirvana.  Reconnecting with the cosmic consciousness.

Grand Unified Humanity Theory.

Einstein Would Have Loved Programmed Reality

Aren’t we all Albert Einstein fans, in one way or another?  If it isn’t because of his 20th Century revolution in physics (relativity), or his Nobel Prize that led to that other 20th Century revolution (quantum mechanics), or his endless Twainsian witticisms, it’s his underachiever-turned-genius story, or maybe even that crazy head of hair.  For me, it’s his regular-guy sense of humor:

“The hardest thing in the world to understand is the income tax.”

and…

“Put your hand on a hot stove for a minute, and it seems like an hour. Sit with a pretty girl for an hour, and it seems like a minute. THAT’S relativity.”

[Photo: Albert Einstein on a bicycle in Niels Bohr’s garden]

But, the more I read about Albert and learn about his views on the nature of reality, the more affinity I have with his way of thinking.  He died in 1955, hardly deep enough into the digital age to have had a chance to consider the implications of computing, AI, consciousness, and virtual reality.  Were he alive today, I suspect that he would be a fan of digital physics, digital philosophy, simulism, programmed reality – whatever you want to call it.  Consider these quotes and see if you agree:

“Reality is merely an illusion, albeit a very persistent one.”

“I wished to show that space-time isn’t necessarily something to which one can ascribe a separate existence, independently of the actual objects of physical reality. Physical objects are not in space, but these objects are spatially extended. In this way the concept of ‘empty space’ loses its meaning.”

“As far as the laws of mathematics refer to reality, they are uncertain; and as far as they are certain, they do not refer to reality.”

“A human being is part of a whole, called by us the ‘Universe’ —a part limited in time and space. He experiences himself, his thoughts, and feelings, as something separated from the rest—a kind of optical delusion of his consciousness. This delusion is a kind of prison for us, restricting us to our personal desires and to affection for a few persons nearest us. Our task must be to free ourselves from this prison by widening our circles of compassion to embrace all living creatures and the whole of nature in its beauty.”

“Space does not have an independent existence.”

“Hence it is clear that the space of physics is not, in the last analysis, anything given in nature or independent of human thought.  It is a function of our conceptual scheme [mind].”

“Every one who is seriously involved in the pursuit of science becomes convinced that a spirit is manifest in the laws of the Universe – a spirit vastly superior to that of man, and one in the face of which we with our modest powers must feel humble.”

I can only imagine the insights that Albert would have had into the mysteries of the universe, had he lived well into the computer age.  It would have given him an entirely different perspective on that conundrum that puzzled him throughout his later life – the relationship of consciousness to reality.  And he might have even tossed out the Unified Field Theory that he was forever chasing and settled in on something that looked a little more digital.


Bizarro Physics

All sorts of oddities emerge from equations that we have developed to describe reality.  What is surprising is that rather than being simply mathematical artifacts, they actually show up in our physical world.

Perhaps the first such bizarro (see DC Comics) entity was antimatter: matter with the opposite charge.  A mathematical solution to Paul Dirac’s relativistic version of Schrödinger’s equation (it makes my head hurt just looking at it), antimatter was discovered 4 years after Dirac predicted it.

One of last year’s surprises was the negative frequencies that are solutions to Maxwell’s equations and have been shown to reveal themselves in components of light.

And, earlier this month, German physicists announced the ability to create a temperature below absolute zero.

So when we were told in physics class to throw out those “negative” solutions to equations because they were in the imaginary domain, and therefore had no basis in reality…uh, not so fast.

What I find interesting about these discoveries is the implications for the bigger picture.  If our reality were what most of us think it is – 3 dimensions of space, with matter and energy following the rules set forth by the “real” solutions to the equations of physics – one might say that reality trumps the math; that solutions to equations only make sense in the context of describing reality.

However, it appears to be the other way around – math trumps reality.  Solutions to equations previously thought to be in the “imaginary domain” are now being shown to manifest in our reality.

This is one more category of evidence that underlying our apparent reality are data and rules.  The data and rules don’t manifest from the reality; they create the reality.

Bizarro185 antimatter185

The Digital Reality Bandwagon

I tend to think that reality is just data.  That the fundamental building blocks of matter and space will ultimately be shown to be bits, nothing more.  Those who have read my book, follow this blog, or my Twitter feed, realize that this has been a cornerstone of my writing since 2006.

Not that I was the first to think of any of this.  Near as I can tell, Philip K. Dick may deserve that credit, having said “We are living in a computer programmed reality” in 1977, although I am sure that someone can find some Shakespearean reference to digital physics (“O proud software, that simulates in wanton swirl”).

Still, a mere six years ago, it was a lonely space to be in.  The few digital reality luminaries at that time included:

But since then…

– MIT Engineering Professor Seth Lloyd published “Programming the Universe” in 2006, asserting that the universe is a massive quantum computer running a cosmic program.

– Nuclear physicist Thomas Campbell published his excellent unifying theory “My Big TOE” in 2007.

– Brian Whitworth, PhD, authored a paper presenting evidence that our reality is programmed: “The emergence of the physical world from information processing,” Quantum Biosystems 2010, 2 (1), 221–249.  http://arxiv.org/abs/0801.0337

– University of Maryland physicist, Jim Gates, discovered error-correction codes in the laws of physics. See “Symbols of Power”, Physics World, Vol. 23, No 6, June 2010.

– Fermilab astrophysicist, Craig Hogan, speculated that space is quantized.  This was based on results from GEO600 measurements in 2010.  See: http://www.wired.com/wiredscience/2010/10/holometer-universe-resolution/.  A holometer experiment is being constructed to test: http://holometer.fnal.gov/

– Rich Terrile, director of the Center for Evolutionary Computation and Automated Design at NASA’s Jet Propulsion Laboratory, hypothesized that we are living in a simulated reality. http://www.vice.com/read/whoa-dude-are-we-inside-a-computer-right-now-0000329-v19n9

– Physicists Leonard Susskind and Gerard ’t Hooft developed the holographic principle (the information content of our universe may be encoded on a distant two-dimensional surface, just as a black hole’s information is encoded on its surface).

Even mainstream media outlets are dipping a toe into the water to see what kinds of reactions they get, such as this recent article in New Scientist Magazine: http://www.newscientist.com/article/mg21528840.800-reality-is-everything-made-of-numbers.html

So, today, I feel like I am in really great company and it is fun to watch all of the futurists, philosophers, and scientists jump on the new digital reality bandwagon.  The plus side will include the infusion of new ideas and the resulting synthesis of theory, as well as pushing the boundaries of experimental validation.  The down side will be all of the so-called experts jockeying for position.  In any case, it promises to be a wild ride, one that should last the twenty or so years it will take to create the first full-immersion reality simulation.  Can’t wait.