Quantum Zeno Effect Solved

Lurking amidst the mass chaos of information that exists in our reality is a little gem of a concept called the Quantum Zeno Effect.  It is partially named after ancient Greek philosopher Zeno of Elea, who dreamed up a number of paradoxes about the fluidity of motion and change.  For example, the “Arrow Paradox” explores the idea that if you break down time into “instants” of zero duration, motion cannot be observed.  Thus, since time is composed of a set of instants, motion doesn’t truly exist.  We might consider Zeno to have been far ahead of his time as he appeared to be thinking about discrete systems and challenging the continuity of space and time a couple thousand years before Alan Turing resurrected the idea in relation to quantum mechanics: “It is easy to show using standard theory that if a system starts in an eigenstate of some observable, and measurements are made of that observable N times a second, then, even if the state is not a stationary one, the probability that the system will be in the same state after, say, one second, tends to one as N tends to infinity; that is, that continual observations will prevent motion …”.  The term “Quantum Zeno Effect” was first used by physicists George Sudarshan and Baidyanath Misra in 1977 to describe just such a system – one that does not change state because it is continuously observed.

The challenge with this theory has been in devising experiments that can verify or falsify it.  However, technology has caught up to philosophy and, over the last 25 years, a number of experiments have been performed which seem to validate the effect.  In 2001, for example, physicist Mark Raizen and a team at the University of Texas showed that the effect is indeed real and the transition of states in a system can be either slowed down or sped up simply by taking measurements of the system.

I have enjoyed making a hobby of fully explaining quantum mechanics anomalies with the programmed reality theory.  Admittedly, I don’t always fully grasp some of the deep complexities and nuances of the issues that I am tackling, due partly to the fact that I have a full-time job that has naught to do with this stuff, and partly to the fact that my math skills are a bit rusty, but thus far, it doesn’t seem to make a difference.  The more I dig into each issue, the more I find things that simply support the idea that we live in a digital (and programmed) reality.

The quantum Zeno effect might not be observed in every case.  It only works for non-memoryless processes.  Exponential decay, for instance, is an example of a memoryless system.  Frequent observation of a particle undergoing radioactive decay would not affect the result.  [As an aside, I find it very interesting that a “memoryless system” invokes the idea of a programmatic construct.  Perhaps with good reason…]

A system with memory, or “state”, however, is, in theory, subject to the quantum Zeno effect.  It will manifest itself by appearing to reset the experiment clock every time an observation is made of the state of the system.  The system under test will have a characteristic set of changes that vary over time.  In the case of the University of Texas experiment, trapped ions tended to remain in their initial state for a brief interval before beginning to change state via quantum tunneling, according to some probability function.  For the sake of developing a clear illustration, let’s imagine a process whereby a particle remains in its initial quantum state (let’s call it State A) for 2 seconds before probabilistically decaying to its final state (B) according to a linear function over the next second.  Figure A shows the probability of finding the particle in State A as a function of time.  For the first 2 seconds, of course, it has a 0% probability of changing state, and between 2 and 3 seconds it has an equal probability of moving to State B at any point in time.  A system with this behavior, left on its own and measured at any point after 3 seconds, will be in State B.
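Sketched as code, the toy survival probability above looks like this (the function name and numbers are the illustration’s, not values from the Raizen experiment):

```python
# Probability of still finding the particle in State A at time t (seconds),
# for the illustrative process above: no decay for the first 2 seconds,
# then a uniform (linear) decay between t=2 and t=3.
def p_state_a(t: float) -> float:
    if t <= 2.0:
        return 1.0          # still certainly in State A
    if t >= 3.0:
        return 0.0          # certainly decayed to State B
    return 3.0 - t          # linear ramp from 1 down to 0

print(p_state_a(1.0))   # 1.0
print(p_state_a(2.5))   # 0.5
print(p_state_a(3.5))   # 0.0
```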

[Figure A: probability of remaining in State A as a function of time]

What happens, however, when you make a measurement of that system, to check and see if it changed state, at t=1 second?  Per the quantum Zeno effect, the experiment clock will effectively be reset: the system will now stay in State A from t=1 to t=3 and then move to State B at some point between t=3 and t=4.  If you make another measurement one second later, at t=2, the clock will again reset, delaying the transition by another second.  In fact, if you continue to measure the state of the system every second, it will never change state.  Note that this has absolutely nothing to do with the physical impact of the measurement itself; a 100% non-intrusive observation will have exactly the same result.
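The clock-reset rule can be simulated directly.  This is a toy model of the illustration above, with invented function names:

```python
import random

def time_to_decay():
    # Unobserved, the toy system holds State A for 2 seconds, then decays
    # at a uniformly random moment between t=2 and t=3.
    return 2.0 + random.random()

def state_at(t_query, measure_every=None):
    """Return the state ('A' or 'B') at t_query. If measure_every is set,
    each measurement that finds the system in State A resets its clock
    (the toy quantum Zeno rule described above)."""
    decay_at = time_to_decay()
    t = 0.0
    if measure_every is not None:
        while t + measure_every < t_query:
            t += measure_every
            if t < decay_at:
                decay_at = t + time_to_decay()  # observation resets the clock
    return 'B' if t_query >= decay_at else 'A'

random.seed(0)
print(state_at(10.0))                      # 'B' - left alone, it decays
print(state_at(10.0, measure_every=1.0))   # 'A' - watched every second, it never does
```

However many times you run it, the measured system never reaches State B, because every measurement lands before the (always at least 2 seconds away) decay time.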

Also note that it isn’t that the clock doesn’t reset for a memoryless system; rather, it doesn’t matter, because you cannot observe any difference.  One may argue that if you make observations at the Planck frequency (one per jiffy), even a memoryless system might never change state.  This actually approaches the true nature of Zeno’s arguments, but that is a topic for another essay, one that is much more philosophical than falsifiable.  In fact, “Quantum Zeno Effect” is a misnomer.  The non-memoryless system described above really has little to do with the ad infinitum inspection of Zeno’s paradoxes, but we are stuck with the name.  And I digress.

So why would this happen?

It appears to be related in some way to the observer effect and to entanglement:

  • Observer Effect – Once observed, the state of a system changes.
  • Entanglement – Once observed, the states of multiple particles (or, rather, the state of a system of multiple particles) are forever connected.
  • Quantum Zeno – Once observed, the state of a system is reset.

What is common to all three of these apparent quantum anomalies is the coupling of the act of observation with the concept of a state.  For the purposes of this discussion, it will be useful to invoke the computational concept of a finite state machine, which is a system that changes state according to a set of logic rules and some input criteria.
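For concreteness, a finite state machine in this sense can be a few lines of code.  The states and inputs below are invented purely for illustration:

```python
# A minimal finite state machine: a set of states plus logic rules that
# map (current state, input) -> next state.
TRANSITIONS = {
    ("A", "tunnel"):  "B",   # the decay/tunneling transition
    ("A", "observe"): "A",   # observation leaves the state pinned at A
    ("B", "observe"): "B",
}

def step(state, event):
    # Unknown (state, event) pairs leave the state unchanged.
    return TRANSITIONS.get((state, event), state)

state = "A"
for event in ["observe", "observe", "tunnel"]:
    state = step(state, event)
print(state)  # "B"
```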

I have explained the Observer effect and Entanglement as logical necessities of an efficient programmed reality system.  What about Quantum Zeno?  Why would it not be just as efficient to start the clock on a process and let it run, independent of observation?

A clue to the answer is that the act of observation appears to create something.

In the Observer effect, it creates the collapse of the probability wave functions and the establishment of definitive properties of certain aspects of the system under observation (e.g. position).  This is not so much a matter of efficiency as it is of necessity.  Without probability, free will doesn’t exist; without free will, we can’t learn.  So if the purpose of our system is to grow and evolve, then by necessity, observation must collapse probability.

In Entanglement, the act of observation may create the initiation of a state machine, which subsequently determines the behavior of the particles under test.  Those particles are just data, as I have shown, and the data elements are part of the same variable space of the state machine.  They both get updated simultaneously, regardless of the “virtual” distance between them.

So, in Quantum Zeno, the system under test is in probability space.  The act of observation “collapses” this initial probability function and kicks off the mathematical process by which future states are determined based on the programmed probability function.  But that is now a second level of probability function; call it probability function 2.  Observing this system a second time must collapse probability function 2.  But to do so means that the system would now have to calculate a modified probability function 3 going forward – one that takes into account the fact that some aspect of the state machine has already been determined (e.g. the system has or hasn’t started its decay).  For non-memoryless systems, this could be an arbitrarily complex function (3), since it may take a different shape for every time at which the observation occurs.  A third measurement complicates the function even further, because even more states are ruled out.

On the other hand, it would be far more efficient for the reality system to simply reset the probability function each time an observation is made.
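The recompute-versus-reset tradeoff can be sketched with the same toy process (the function names and numbers are mine, for illustration):

```python
def p_survive(t):
    # Unconditional probability of still being in State A at time t
    # (the toy process: flat until t=2, linear decay until t=3).
    return 1.0 if t <= 2 else (0.0 if t >= 3 else 3.0 - t)

def p_survive_given(t, t_obs):
    # The "honest" conditional update: P(still A at t | still A at t_obs).
    # Every observation reshapes this function, so the Program would have
    # to carry an ever-more-conditioned distribution forward.
    return p_survive(t) / p_survive(t_obs)

def p_survive_reset(t, t_obs):
    # The reset shortcut: just restart the original clock at t_obs.
    return p_survive(t - t_obs)

print(p_survive_given(2.75, 2.5))   # 0.5: half the remaining decay window
print(p_survive_reset(2.75, 2.5))   # 1.0: the 2-second hold starts over
```

The reset version never needs to remember when previous observations happened; the conditional version does, and for an arbitrary process its bookkeeping grows with every measurement.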

The only drawback to this algorithm is the fact that smart scientists are starting to notice these little anomalies, although the assumption here is that the reality system “cares.”  It may not.  Or perhaps that is why most natural processes are exponential, or memoryless – it is a further efficiency of the system.  Man-made experiments, however, don’t follow the natural process and may be designed to be arbitrarily complex, which ironically serves to give us this tiny little glimpse into the true nature of reality.

What we are doing here is inferring deep truths about our reality that are in fundamental conflict with the standard materialist view.  This will be happening more and more as time goes forward and physicists and philosophers will soon have no choice but to consider programmed reality as their ToE.


Ever Expanding Horizons

Tribal Era

Imagine the human world tens of thousands of years ago.  A tribal community lived together, farming, hunting, trading, and taking care of each other.  There was plenty of land to support the community and as long as there were no strong forces driving them to move, they stayed where they were, content.  As far as they knew, “all that there is” was just that community and the land that was required to sustain it.  We might call this the Tribal Era.

Continental Era

But, at some point, for whatever reason – drought, restlessness, desire for a change of scenery – another tribe moved into the first tribe’s territory.  For the first time, that tribe realized that the world was bigger than their little community.  In fact, upon a little further exploration, they realized that the boundaries of “all that there is” had just expanded to the continent on which they lived, and there was a plethora of tribes in this new greater community.  The horizon of their reality had reached a new boundary and their community was now a thousandfold larger than before.

Planetary Era

According to researchers, the first evidence of cross-oceanic exploration was about 9000 years ago.  Now, suddenly, this human community may have been subject to an invasion of an entirely different race of people with different languages coming from a place that was previously thought to not exist.  Again, the horizon expands and “all that there is” reaches a new level, one that consists of the entire planet.

Solar Era

The Ancient Greek philosophers and astronomers recognized the existence of other planets.  Gods were thought to have come from the sun or elsewhere in the heavens, which consisted of a celestial sphere that wasn’t too far away from the surface of our planet.

Imaginations ran wild as horizons expanded once again.

Galactic Era

In 1610, Galileo looked through his telescope and suddenly humanity’s horizon expanded by another level.  Not only did the other planets resemble ours, but it was clear that the sun was the center of the known universe, stars were extremely far away, there were strange distant nebulae that were more than nearby clouds of debris, and the Milky Way consisted of distant stars.  In other words, “all that there is” became our galaxy.

Universal Era

A few centuries later, in the early 1920s, it was time to expand our reality horizon once again, as observations with the 100-inch telescope at Mount Wilson revealed that some of those fuzzy nebulae were actually other galaxies.  The concept of deep space and “Universe” was born, and new measurement techniques courtesy of Edwin Hubble showed that “all that there is” was actually billions of times more than previously thought.

Multiversal Era

These expansions of “all that there is” are happening so rapidly now that we are still debating the details about one worldview, while exploring the next, and being introduced to yet another.  Throughout the latter half of the 20th century, a variety of ideas were put forth that expanded our reality horizon to the concept of many (some said infinite) parallel universes.  The standard inflationary big bang theory allowed for multiple Hubble volumes of universes that are theoretically within our same physical space, but unobservable due to the limitations of the speed of light.  Bubble universes, MWI, and many other theories exist but lack any evidence.  In 2003, Max Tegmark framed all of these nicely in his concept of 4 levels of Multiverse.

I sense one of those feelings of acceleration with respect to the entire concept of expanding horizons, as if our understanding of “all that there is” is growing exponentially.  I was curious to see how exponential it actually was, so I took the liberty of plotting each discrete step in our evolution of awareness of “all that there is” on a logarithmic plot and guess what?

Almost perfectly exponential! (see below)

[Figure: expansions of “all that there is” plotted on a logarithmic timeline]

Dramatically, the trend points to a new expansion of our horizons in the past 10 years or so.  Could there really be something beyond a multiverse of infinitely parallel universes?  And has such a concept recently been put forth?

Indeed there is and it has.  And, strangely, it isn’t even something new.  For millennia, the spiritual side of humanity has explored non-physical realities: Shamanism, Heaven, Nirvana, Mystical Experiences, Astral Travel.  Our Western scientific mentality that “nothing can exist that cannot be consistently and reliably reproduced in a lab” has prevented many of us from accepting these notions.  However, there is a new school of thought that is based on logic, scientific studies, and real data (if your mind is open), as well as personal knowledge and experience.  Call it digital physics (Fredkin), digital philosophy, simulation theory (Bostrom), programmed reality (yours truly), or My Big TOE (Campbell).  Tom Campbell and others have taken the step of incorporating into this philosophy the idea of non-material realms.  This is, in fact, a new expansion of “all that there is.”  While I don’t particularly like the term “dimensional”, I’m not sure that we have a better descriptor.

Interdimensional Era

Or maybe we should just call it “All That There Is.”

At least until a few years from now.

Alien Hunters Still Thinking Inside The Box (or Dyson Sphere)

As those who are familiar with my writing already know, I have long thought that the SETI program was highly illogical, for a number of reasons, some of which are outlined here and here.

To summarize, it is the height of anthropomorphic and unimaginative thinking to assume that ET will evolve just like we did and develop radio technology at all.  Even if they did, and followed a technology evolution similar to our own, the era of high-powered radio broadcasts should be insignificant in relation to the duration of their evolutionary history.  Even in our own case, that era is almost over, as we are moving to highly networked and low-powered data communication (e.g. Wi-Fi), which is barely detectable a few blocks away, let alone light years.  And even if we happened to overlap a 100-year radio broadcast era of a civilization in our galactic neighborhood, they would still never hear us, and vice versa, because the signal level required to reliably communicate around the world becomes lost in the noise of the cosmic microwave background radiation before it even leaves the solar system.
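A rough back-of-the-envelope calculation illustrates the signal-level point; the transmitter power and distance below are assumed round numbers, not measured values:

```python
import math

# Inverse-square estimate of how faint an Earth broadcast is at
# interstellar range (isotropic radiator assumed for simplicity).
P_tx = 1e6                      # ~1 MW effective broadcast power (assumed)
m_per_ly = 9.461e15             # metres per light year
d = 4 * m_per_ly                # roughly the distance to the nearest stars

flux = P_tx / (4 * math.pi * d**2)   # watts per square metre at the receiver
print(f"{flux:.1e} W/m^2")           # ~5.6e-29 W/m^2 - vanishingly faint
```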

So, no, SETI is not the way to uncover extraterrestrial intelligences.

Dyson Sphere

Some astronomers are getting a bit more creative and are beginning to explore some different ways of detecting ET.  One such technique hinges on the concept of a Dyson Sphere.  Physicist Freeman Dyson postulated the idea in 1960, theorizing that advanced civilizations will continuously increase their demand for energy, to the point where they need to capture all of the energy of the star that they orbit.  A possible mechanism for doing so could be a network of satellites surrounding the star and collecting all of its energy.  Theoretically, a signature of a distant Dyson Sphere would be a region of space emitting no visible light but generating high levels of infrared radiation as waste heat.  Some astronomers have mapped the sky over the years, searching for such signatures, but to no avail.

Today, a team at Penn State is resuming the search via data from infrared observatories WISE and Spitzer.  Another group from Princeton has also joined in the search, but are using a different technique by searching for dimming patterns in the data.

I applaud these scientists who are expanding the experimental boundaries a bit.  But I doubt that Dyson Spheres are the answer.  There are at least two flaws with this idea.

First, the assumption that we will continuously need more energy is false.  Part of the reason for this is the fact that once a nation has achieved a particular level of industrialization and technology, there is little to drive further demand.  The figure below, taken from The Atlantic article “A Short History of 200 Years of Global Energy Use” demonstrates this clearly.

[Figure: 200 years of per-capita global energy consumption, from The Atlantic]

In addition, technological advances make it cheaper to obtain the same general benefit over time.  For example, in terms of computing, performance per watt has increased by a factor of over one trillion in the past 50 years.  Dyson was unaware of this trend because Moore’s Law hadn’t been postulated until 1965.  Even in the highly corrupt oil industry, with its collusion, lobbying, and artificial scarcity, performance per gallon of gas has steadily increased over the years.
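As a sanity check on the trillion-fold figure, here is the doubling rate it implies (a sketch, assuming steady exponential growth):

```python
import math

# How often must performance per watt double over 50 years
# to achieve a factor of one trillion (1e12)?
factor = 1e12
years = 50

doublings = math.log2(factor)               # about 39.9 doublings
months_per_doubling = years * 12 / doublings
print(round(months_per_doubling, 1))        # ~15.1 months per doubling
```

That cadence sits comfortably in Moore's-law territory, which lends the trillion-fold claim some plausibility.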

The second flaw with the Dyson Sphere argument is the more interesting one – the assumptions around how humans will evolve.  I am sure that in the booming 1960s, it seemed logical that we would be driven by the need to consume more and more, controlling more and more powerful tools as time went on.  But, all evidence actually points to the contrary.

We are in the beginning stages of a new facet of evolution as a species.  Not a physical one, but a consciousness-oriented one.  Quantum Mechanics has shown us that objective reality doesn’t exist.  Scientists are so frightened by the implications of this that they are for the most part in complete denial.  But the construct of reality is looking more and more like it is simply data.  And the evidence is overwhelming that consciousness is controlling the body and not emerging from it.  As individuals are beginning to understand this, they are beginning to recognize that they are not trapped by their bodies, nor this apparent physical reality.

Think about this from the perspective of the evolution of humanity.  If this trend continues, why will we even need the body?

Robert Monroe experienced a potential future (1000 years hence), which may be very much in line with the mega-trends that I have been discussing on theuniversesolved.com: “No sound, it was NVC [non-vocal communication]! We made it! Humans did it! We made the quantum jump from monkey chatter and all it implied.” (“Far Journeys”)

We may continue to use the (virtual) physical reality as a “learning lab”, but since we won’t really need it, neither will we need the full energy of the virtual star.  And we can let virtual earth get back to the beautiful virtual place it once was.

THIS is why astronomers are not finding any sign of intelligent life in outer space, no matter what tools they use.  A sufficiently advanced civilization does not communicate using monkey chatter, nor any technological carrier like radio waves.

They use consciousness.

So will we, some day.

Grand Unified Humanity Theory

OK, maybe this post is going to be a little silly – apologies in advance.  I’m in that kind of mood.

Physicists recently created a fascinating concoction – a Bose-Einstein condensate (BEC) that was stable at a temperature 50% higher than critical.  Check out this phys.org article with the deets.  In this bizarre state of matter, all particles act in unison, entangled, as if they were collectively a single particle.  Back in Einstein’s day, BECs were envisioned to be composed of bosons.  Later, theory predicted and experiments demonstrated fermions, and ultimately, atoms.

A comparison is made to an analogous process of getting highly purified water to exist at temperatures above its boiling point.  It seems that phase transitions of various types can be pushed beyond their normal critical point if the underlying material is “special” in some way – pure, balanced, coherent.

Superfluids.  Laser light.

It reminds me of the continuous advances in achieving superlative or “perfect” conditions, like superconductivity (zero resistance) at temperatures closer and closer to room temperature.  I then think of a characteristic that new agers ascribe to physical matter – “vibrational levels.”

Always connecting dots, sometimes finding connections that shouldn’t exist.

Given the trend of achieving purity, alignment, and coherence at conditions ever closer to “normal” transitions and scales, might we someday see entangled complex molecules, like proteins?  BECs of DNA strands?

Why stop there?  Could I eventually be my own BEC?  A completely coherent vibrationally-aligned entity?  Cool.  I’ll bet I would be transparent and could walk through doors.

And what if science could figure out how to create a BEC out of all living things?  Nirvana.  Reconnecting with the cosmic consciousness.

Grand Unified Humanity Theory.

The Observer Effect and Entanglement are Practically Requirements of Programmed Reality

Programmed Reality has been an incredibly successful concept in terms of explaining the paradoxes and anomalies of Quantum Mechanics, including non-Reality, non-Locality, the Observer Effect, Entanglement, and even the Retrocausality of John Wheeler’s Delayed Choice Quantum Eraser experiment.

I came up with those explanations by thinking about how Programmed Reality could explain such curiosities.

But I thought it might be interesting to view the problem in the reverse manner.  If one were to design a universe-simulating Program, what kinds of curiosities might result from an efficient design?  (Note: I fully realize that any entity advanced enough to simulate the universe probably has a computational engine that is far more advanced than we can even imagine; most definitely not of the von Neumann variety.  Yet, we can only work with what we know, right?)

So, if I were to create such a thing, for instance, I would probably model data in the following manner:

For any space unobserved by a conscious entity, there is no sense in creating the reality for that space in advance.  It would unnecessarily consume too many resources.

For example, consider the cup of coffee on your desk.  Is it really necessary to model every single subatomic particle in the cup of coffee in order to interact with it in the way that we do?  Of course not.  The total amount of information contained in that cup of coffee necessary to stimulate our senses in the way that it does (generate the smell that it does; taste the way it does; feel the way it does as we drink it; swish around in the cup the way that it does; have the little nuances, like tiny bubbles, that make it look real; have the properties of cooling at the right rate to make sense, etc.) might be 10 MB or so.  Yet, the total potential information content in a cup of coffee is 100,000,000,000 MB, so a compression ratio of perhaps ten billion can be applied to an ordinary object.
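The compression claim, as plain arithmetic (both figures are the post’s illustrative estimates, not measured quantities):

```python
# Ratio of the full modeled state of the coffee cup to the information
# actually needed to render it to our senses.
perceived_mb = 10                 # info needed to fool our senses (estimate)
full_state_mb = 100_000_000_000   # "total potential information content"

ratio = full_state_mb / perceived_mb
print(f"{ratio:.0e}")             # 1e+10: a ten-billion-fold compression
```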

But once you decide to isolate an atom in that cup of coffee and observe it, the Program would then have to establish a definitive position for that atom, effectively resulting in the collapse of the wave function, or decoherence.  Moreover, the complete behavior of the atom, at that point, might be forever under control of the Program.  After all, why delete the model once observed, in the event (probably fairly likely) that it will be observed again at some point in the future?  Thus, the atom would have to be described by a finite state machine.  Its behavior would be decided by randomly picking values of the parameters that drive that behavior, such as atomic decay.  In other words, we have created a little mini finite state machine.

So, the process of “zooming in” on reality in the Program would have to result in exactly the type of behavior observed by quantum physicists.  In other words, in order to be efficient, resource-wise, the Program decoheres only the space and matter that it needs to.
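One way to picture this resource-wise efficiency is lazy evaluation: a particle gets no definite state until first looked at, after which its pinned-down state persists.  A minimal sketch, with all names invented for illustration:

```python
import random

# "Decohere only what is observed": definite states are created on first
# observation and cached, never computed in advance.
class LazyReality:
    def __init__(self, seed=42):
        self._rng = random.Random(seed)
        self._decohered = {}        # particle id -> definite (toy) position

    def observe(self, particle_id):
        if particle_id not in self._decohered:
            # First observation: collapse to a definite value and keep it,
            # since the particle will probably be observed again.
            self._decohered[particle_id] = self._rng.random()
        return self._decohered[particle_id]

world = LazyReality()
first = world.observe("atom-17")
again = world.observe("atom-17")
print(first == again)   # True: once observed, the state is retained
```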

Let’s say we zoom in on two particles at the same time; two that are in close proximity to each other.  Both would have to be decohered by the Program.  The decoherence would result in the creation of two mini finite state machines.  Using the same random number seed for both will cause the state machines to forever behave in an identical manner.

No matter how far apart you take the particles.  i.e…

Entanglement!
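The same-seed argument can be demonstrated in a few lines.  This is an illustration of the idea, not a claim about actual quantum mechanics:

```python
import random

# Two "mini finite state machines" driven by identically seeded
# pseudo-random generators evolve in lockstep forever, however far apart
# the particles they describe are said to be.
shared_seed = 1234
particle_here = random.Random(shared_seed)
particle_far_away = random.Random(shared_seed)

spins_here = [particle_here.choice(["up", "down"]) for _ in range(10)]
spins_far = [particle_far_away.choice(["up", "down"]) for _ in range(10)]
print(spins_here == spins_far)   # True: perfectly correlated outcomes
```

No information passes between the two generators; the correlation is baked in at the moment both were seeded, which is exactly the point of the analogy.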

So, Observer Effect and Entanglement might both be necessary consequences of an efficient Programmed Reality algorithm.

 


Things We Can Never Comprehend

Have you ever wondered what we don’t know?  Or, to put it another way, how many mysteries of the universe are still to be discovered?

To take this thought a step further, have you ever considered that there may be things that we CAN’T understand, no matter how hard we try?

This idea may be shocking to some, especially to those scientists who believe that we are nearing the “Grand Unified Theory”, or “Theory of Everything” that will provide a simple and elegant solution to all forces, particles, and concepts in science.  Throughout history, the brightest of minds have been predicting the end of scientific inquiry.  In 1871, James Clerk Maxwell lamented the sentiment of the day which he represented by the statement “in a few years, all great physical constants will have been approximately estimated, and that the only occupation which will be left to men of science will be to carry these measurements to another place of decimals.”

Yet, why does it always seem like the closer we get to the answers, the more monkey wrenches get thrown in the way?  In today’s world, these include strange particles that don’t fit the model.  And dark matter.  And unusual gravitational aberrations in distant galaxies.

Perhaps we need a dose of humility.  Perhaps the universe, or multiverse, or whatever term is being used these days to denote “everything that is out there” is just too far beyond our intellectual capacity.  Before you call me out on this heretical thought, consider…

The UK’s Astronomer Royal Sir Martin Rees points out that “a chimpanzee can’t understand quantum mechanics.”  Richard Feynman, of course, claimed that nobody understands quantum mechanics, but as Michael Brooks points out in his recent article “The limits of knowledge: Things we’ll never understand”, no matter how hard it might try, the comprehension of something like Quantum Mechanics is simply beyond the capacity of certain species of animals.  Faced with this realization, and with anthropologists’ estimate that the most recent common ancestor of humans and chimps (aka the CHLCA) lived roughly 6 to 8 million years ago, we can draw a startling conclusion:

There are certainly things about our universe and reality that are completely beyond our ability to comprehend!

My reasoning is as follows.  Chimps are certainly no less intelligent than the CHLCA; otherwise evolution would have been working in reverse.  As an upper bound of intelligence, let’s say that the CHLCA and chimps are equivalent.  Then, the CHLCA was certainly not able to comprehend QM (nor relativity, nor even Newtonian physics), but upon evolving into humans over 8 million years, our new species was able to comprehend these things.  8 million years represents 0.06% of the entire age of the universe (according to what we think we know).  That means that for 99.94% of the total time that the universe and life were evolving up to the current point in time, the most advanced creature on earth was incapable of understanding the most rudimentary concepts about the workings of reality and the universe.  And yet, are we to suppose that in the last 0.06% of that time, a species has evolved that can understand everything?  I’m sure you see how unlikely that is.
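The timeline arithmetic behind this argument, using the standard rough figures:

```python
# ~13.8-billion-year-old universe, ~8 million years since the CHLCA.
universe_age_yr = 13.8e9
since_chlca_yr = 8e6

fraction = since_chlca_yr / universe_age_yr
print(f"{fraction:.2%}")   # 0.06%: the sliver in which comprehension arose
```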

What if our universe was intelligently designed?  The same argument would probably hold.  For some entity to be capable of creating a universe that continues to baffle us no matter how much we think we understand, that entity must be far beyond our intelligence, and therefore has utilized, in the design, concepts that we can’t hope to understand.

Our only chance of being supremely capable of understanding our world would lie in the programmed reality model.  If the creator of our simulation were us, or even an entity a little more advanced than us, it could lead us along a path of exploration and knowledge discovery that always seems to be just slightly beyond our grasp.  Doesn’t that idea feel familiar?


Quantum Mechanics Anomalies – Solved!

Scientists are endlessly scratching their heads over the paradoxes presented by quantum mechanics – duality, entanglement, the observer effect, nonlocality, non-reality.  The recent cover story in New Scientist, “Reality Gap” (or “Is quantum theory weird enough for the real world?” in the online version) observes: “Our best theory of nature has no roots in reality.”

BINGO! But then they waste this accurate insight by looking for one.

Just three days later, a new article appears: “Infinite doppelgängers may explain quantum probabilities”.  Browse that website or those of other popular science journals and you’ll find no end of esteemed physicists taking a crack at explaining the mysteries of QM.  Doppelgängers now?  Really?  I mean no disrespect to our esteemed experts, but the answer to all of your mysteries is so simple.  Take a brave step outside of your narrow field and sign up for Computer Science 101 and Information Theory 101.  And then think outside the box, if even just for a few minutes.

Every anomaly is explained, thusly:

Duality and the Observer Effect: “Double Slit Anomaly is No Mystery to Doctor PR”

Entanglement: “Quantum Entanglement – Solved (with pseudocode)”

Non-Reality: “Reality Doesn’t Exist, according to the latest research”

Nonlocality: “Non-locality Explained!”

Got any more anomalies?  Send them my way!


Double Slit Anomaly is No Mystery to Doctor PR

One of the keys to understanding our reality is found in a very unusual and anomalous experiment done over 200 years ago by Thomas Young. The philosophical debate that resulted from this experiment and its successors during the quantum era of the 20th century may hold the key to understanding everything – from bona fide scientific anomalies to cold fusion and bigfoot sightings.

If you are unfamiliar with this experiment, please watch the Dr. Quantum cartoon on the Double Slit Experiment. It provides a good explanation of two paradoxes that have puzzled scientists for many years. In summary, here is the conundrum:

1. If you fire electrons at a screen through a single slit in an otherwise impenetrable barrier, there will be a resulting pattern on the screen as you might expect – a single band of points.

2. If you fire electrons at a screen through a barrier with two slits, the pattern that will build up on the screen is not one of two bands of points, but rather an entire interference pattern, as if the electrons were actually waves instead of particles.

This is one paradox – that electrons (and all other particles) have dual personalities in that they can act like both waves and particles. Further, the personality that emerges matches the type of experiment that you are doing. If you are testing to see if the electron acts like a particle, it will. If you are testing to see if the electron acts like a wave, it will.

3. Even if the electrons are fired one at a time, eliminating the possibility of electrons interfering with each other, over time, the same pattern emerges.

4. If you put a measuring device at the slit, thereby observing which slit each electron passes through, the interference pattern disappears.

This is the more mysterious paradox – that the mere act of observation changes the result of the experiment.  The implications are huge, because they suggest that our conscious actions create or modify reality.

Dr. Programmed Reality will now provide the definitive explanation that Dr. Quantum could not:

1. Electrons, along with photons, all other particles, and ultimately everything, are really nothing but information. That information describes how the electron (for example) behaves under all circumstances, what probabilities it will travel in any particular direction, and how it will reveal its presence to our senses. That information, plus the rules of reality, fully determine how it can appear sometimes like a particle and sometimes like a wave. Because it is really neither – it is JUST information that is used to give us the sensory impression of one of those personalities under various circumstances. Paradox 1 solved.

2. The great cosmic Program that appears to control our reality (see my book “The Universe – Solved!” for evidence), is also fully aware of the state of consciousness of every free-willed observer in our reality. As a result, the behavior exhibited by an electron under observation can easily be made to be a function of the observation being made. Paradox 2 solved.

If you don’t believe that, here is the piece of pseudo-code that could represent the part of The Program that controls the outcomes of such experiments (each state of each object consists of all spatial coordinates, plus time, and directional vectors):

while (time != EndTime) {

    for each Object in AllParticlesInTheUniverse {

        CurrentState(Object) = AcquireState(Object);
        ObservationState(Object) = CollectObservationalIntent(AllObservers(Object));
        NextState(Object) = CalculateNextState(CurrentState(Object), ObservationState(Object));
        ApplyNextState(NextState(Object));
    }
}

It’s all there – full control of the outcome of any experiment based on the objects under test and the observational status of all observers.  Any known quantum mechanical paradox fully explained by 1970s-vintage pseudocode without the need for the hand waving of collapsing wave functions or zillions of parallel realities.
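For anyone who wants to see the loop actually run, here is a toy Python version.  Every name and rule in it – the integer “states,” the freeze-when-observed rule in calculate_next_state – is an illustrative assumption, not a claim about how such a Program would really work; it is just a minimal runnable sketch of the control flow above.

```python
# Toy stand-in for the cosmic main loop described above.
# All state representations and rules here are illustrative assumptions.

def collect_observational_intent(observers):
    # True if any observer is watching this particle
    return any(observers)

def calculate_next_state(current, observed):
    # Illustrative rule: an observed particle's state is frozen,
    # while an unobserved one evolves freely.
    return current if observed else current + 1

def run(particles, observers, end_time):
    for _ in range(end_time):
        for i, state in enumerate(particles):
            observed = collect_observational_intent(observers[i])
            particles[i] = calculate_next_state(state, observed)
    return particles

# Particle 0 is watched, particle 1 is not.
print(run([0, 0], [[True], [False]], end_time=5))  # [0, 5]
```

The watched particle never changes state – a five-line Quantum Zeno Effect.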


Quantum Entanglement – Solved (with pseudocode)

I am always amazed at how such bright physicists can discuss scientific anomalies like quantum entanglement, pronounce that “that’s just the way it is,” and never seriously consider an obvious explanation for all such anomalies – namely, that our reality may be under programmed control.

For the quantum entanglement anomaly, I think you will see what I mean.  Imagine that our world is like a video game.  As with existing commercial games, which use “physics engines”, the players (us) are subject to the rules of physics, as are subatomic particles.  However, suppose there is a rule in the engine that says that when two particles interact, their behavior is synchronized going forward.  Simple to program.  The pseudocode would look something like:

for all particles (i)
    for all particles (j > i)
        if distance(particle.i, particle.j) < EntanglementThreshold then
            Synchronize(particle.i, particle.j)
        end if
    next j
next i

After that event, at each cycle through the main program loop, whatever one particle does, its synchronized counterparts also do.  Since the program operates outside of the artificial laws of physics, those particles can be placed anywhere in the program’s reality space and they will always stay synchronized.  Yet their motion and other interactions may be subject to the usual physics engine.  This is very easy to program and, coupled with all of the other evidence that our reality is under programmed control (the programmer is the intelligent creator), offers a perfect explanation.  More and more scientists are considering these ideas (e.g. Craig Hogan, Brian Whitworth, Andrei Linde), although the center of that thinking lies more in philosophy, computer science, and artificial intelligence than in physics.  I wonder if the reason more physicists haven’t caught on is that they fear such concepts might make them obsolete.

They needn’t worry.  Their jobs are still to probe the workings of the “cosmic program.”
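To show just how simple the rule is, here is a toy Python version of it.  The threshold, the one-dimensional positions, and the string states are all assumptions made up for illustration – but it captures the essential trick: once synchronized, particles share a group, and distance never enters the picture again.

```python
# Toy model of the entanglement rule sketched above: particles that
# interact within a threshold distance join one "synchronized" group
# and thereafter share a single state.  All values are illustrative.

ENTANGLEMENT_THRESHOLD = 1.0

class Particle:
    def __init__(self, position, state):
        self.position = position
        self.state = state
        self.group = [self]  # each particle starts in its own group

def synchronize(a, b):
    # Merge the two groups; every member adopts a's state.
    merged = a.group + [p for p in b.group if p not in a.group]
    for p in merged:
        p.group = merged
        p.state = a.state

def entangle_all(particles):
    for i, a in enumerate(particles):
        for b in particles[i + 1:]:
            if abs(a.position - b.position) < ENTANGLEMENT_THRESHOLD:
                synchronize(a, b)

def set_state(particle, new_state):
    # Whatever one particle does, its counterparts do too --
    # regardless of where they have moved since entangling.
    for p in particle.group:
        p.state = new_state

a, b = Particle(0.0, "up"), Particle(0.5, "down")
entangle_all([a, b])
b.position = 1e9           # move b a billion units away
set_state(a, "down")
print(b.state)             # "down" -- distance is irrelevant
```

No signal travels between the particles, because the bookkeeping happens in the program, outside the simulated space – which is exactly the point.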


How to Walk Through a Door

I had a brainstorm the other day on how we might someday be able to walk through a door.  And I don’t mean from a metaphysical standpoint – I mean really, physically, walk through the door.  If you think about it, there really should be a way to make it happen.  After all, our bodies and the door are almost 100% empty space.  I would argue that Programmed Reality says it is completely empty space, but that topic will have to wait for another post.

An electron, in Newtonian mechanics, can be stuck on one side of an impenetrable barrier.  In QM, however, its wave function can be partly on one side of a barrier and partly on the other side at the same time, which allows for the possibility of “tunneling,” a common effect in semiconductors.  In fact, were it not for the wave function nature of QM, transistors, and therefore cell phones, computers, satellites, and all other sorts of modern technologies would not even exist!
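To put a rough number on single-particle tunneling, here is the standard textbook estimate for a rectangular barrier – the WKB-style formula T ≈ e^(−2κL), with κ = √(2m(V−E))/ħ – in a few lines of Python.  The barrier height and width below are arbitrary illustrative values, not taken from any particular device.

```python
import math

# WKB-style estimate of tunneling through a rectangular barrier:
#   T ~ exp(-2 * kappa * L),  kappa = sqrt(2 * m * (V - E)) / hbar
HBAR = 1.0546e-34        # reduced Planck constant, J*s
M_ELECTRON = 9.109e-31   # electron mass, kg
EV = 1.602e-19           # one electron-volt in joules

def tunneling_probability(barrier_excess_ev, width_m):
    kappa = math.sqrt(2 * M_ELECTRON * barrier_excess_ev * EV) / HBAR
    return math.exp(-2 * kappa * width_m)

# Illustrative numbers: barrier 1 eV above the electron's energy, 1 nm wide.
print(tunneling_probability(1.0, 1e-9))   # on the order of 3e-5
```

Small, but decidedly nonzero – which is why real semiconductor junctions have to be engineered around it.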


Interestingly, this theory applies not only to subatomic particles, but also to macroscopic objects like me, you, and Donald Trump’s hair.  Since our bodies are composed of particles, each of which is just a wave function, your body is simply the superposition of these zillions of wave functions, thereby creating its own “macroscopic” wave function.  Theoretically, for this reason, you have a finite probability of passing through a wooden door, much like the electron tunneling effect.  But don’t try it.  Because, when you sum up all of your constituent particles’ wave functions, there is a mathematical tendency for the probabilities of large-scale anomalous quantum effects to be extremely small.  It is analogous to flipping pennies.  The odds that a single penny comes up heads (the electron passes through the barrier) are 50-50, but the odds against 1000 pennies all coming up heads (you passing through the door) are 2^1000 (roughly a 1 followed by 301 zeros – an unimaginably large number) to 1.  And you have a helluva lot more than 1000 subatomic particles in your body.
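The penny arithmetic is easy to check, since Python handles arbitrarily large integers natively; the “1 followed by roughly 301 zeros” figure comes from 2^1000 ≈ 1.07 × 10^301.

```python
# Verify the odds quoted above: 2**1000 to 1 against
# 1000 fair pennies all landing heads.
odds = 2 ** 1000
print(len(str(odds)))   # 302 digits, i.e. on the order of 10**301
print(str(odds)[:3])    # 107 -- so roughly 1.07e301
```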

But what if those particles in our bodies and/or the door were made to be coherent?  That is, in our penny analogy, all pennies exhibit the same behavior.  Impossible?  Not so fast, Einstein.  Lasers are a great example of coherence, where all photons are of the same frequency and in phase.  Aren’t particles of matter just a different kind of particle from photons?  Couldn’t they be organized to be coherent as well?

It turns out that is exactly the case, and it is known as Macroscopic Quantum Tunneling.  University of Illinois researchers have demonstrated such an effect with electrons (real matter) in a nanowire.  Superconductivity, superfluidity, and Bose–Einstein condensation are examples of phenomena that seem to defy conventional physics by having their constituents occupy coherent states.  Macroscopic Quantum Coherence is a predicted property, yet to be observed in the laboratory but probably inevitable, whereby all atoms in a piece of matter exhibiting it are in phase and described by a single quantum wavefunction.  That wavefunction allows for the possibility of the matter being anywhere, or “tunneling” through a thin enough membrane of material.  Let’s say that, not unlike a laser, we could get all of the atoms in our bodies to be coherent.  Might it not be possible to “tunnel” through a thin membrane of coherent material?

Effectively, we would have walked through a door!

Yes, I know that all of the different atoms in our bodies might not be made coherent with each other.  Then again, think about radio waves of different frequencies.  In general, they can’t be in phase with each other, except at one particular point.  A Fourier analysis of a waveform with a discontinuity, like a step function or a delta function, shows that at the point of the discontinuity, all frequencies are in phase.  Could there ultimately be a way to accomplish that with the mere several dozen atomic frequencies present in our bodies?  (And who cares if that stray bit of uranium in your spleen is left behind on the other side of the door.  Would you really miss it?)  So maybe the trick is to pulse the coherence into your body just as you walk through the door.
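The in-phase-at-one-point idea is easy to demonstrate numerically.  The sketch below (frequency count and sample point chosen arbitrarily) sums fifty cosines that all peak together at t = 0: there they reinforce into a sharp pulse, while away from t = 0 they largely cancel.

```python
import math

# Sum many cosines of different frequencies, all in phase at t = 0.
def pulse(t, n_frequencies=50):
    return sum(math.cos(2 * math.pi * f * t)
               for f in range(1, n_frequencies + 1))

print(pulse(0.0))                # 50.0: every component aligned
print(abs(pulse(0.13)) < 5.0)    # True: components mostly cancel elsewhere
```

The more frequencies you add, the taller and narrower the central pulse gets – the delta function being the limiting case.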

Then there is the problem of how to get each planar sliver of your body to have the same tunneling capability sequentially.  Like, so you don’t end up with a door stuck in your chest, all Jeff Goldblum-like.  Seems to me that maybe it’s just a matter of applying continuous pulses of coherence into your body as you walk through the door.  For each planar sliver, one of the pulses will eventually make you progress to the next sliver.  Just hope the machine doesn’t break down midway through.

So, there you have it.  One ultra-high-frequency, multi-atomic coherence pulser, and you’re walking through walls.
