Quantum Retrocausality Explained

A recent quantum mechanics experiment, conducted at the University of Queensland in Australia, seems to defy causal order, baffling scientists. In this post, however, I’ll explain why this isn’t anomalous at all; at least, not if you come to accept the Digital Consciousness Theory (DCT) of reality. It boils down to a virtually identical explanation to the one I gave seven years ago for Daryl Bem’s seemingly anomalous precognition studies.

DCT says that subatomic particles are controlled by finite state machines (FSMs), which are tiny components of our Reality Learning Lab (RLL, aka “reality”). The finite state machines that control the behavior of the atoms or photons in the experiment don’t really come into existence until the measurement is made, which effectively means that the atom or photon doesn’t really exist until it needs to. In RLL, the portion of the system that describes the operation of the laser, the prisms, and the mirrors, at least from the perspective of the observer, is defined and running, but only at a macroscopic level. It only needs to show the observer things that are consistent with the expected performance of those components and the RLL laws of physics. So, for example, we can see the laser beam. Only when we need to determine something at a deeper level, like the path of a particular photon, is a finite state machine for that photon instantiated. And in these retrocausality experiments, like the delayed choice quantum eraser experiments and this one done in Queensland, the FSMs only start when the observation is made, which is after the photon has gone through the apparatus; hence, it never really had a path. It didn’t need one. The path can be inferred later by measurement, but it is incorrect to think that that inference was objective reality. There was no path, and so there was no real deterministic order of operations.

There are only the attributes of the photon determined at measurement time, when its finite state machine comes into existence. Again, the photon is just data, described by the attributes of the finite state machine, so this makes complete sense. Programmatically, the FSM did not exist before the individuated consciousness required a measurement because it didn’t need to. Therefore, the inference of “which operation came first” is only that – an inference, not a true history.
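
As a software sketch, this lazy-instantiation idea might look like the following. Everything here (`PhotonFSM`, `RealityLearningLab`, the attributes) is a hypothetical illustration of how DCT describes things, not working physics:

```python
import random

class PhotonFSM:
    """Hypothetical finite state machine for a single photon.

    Its attributes (path, polarization) are only fixed when the FSM is
    created, i.e. at measurement time -- never before.
    """
    def __init__(self):
        self.path = random.choice(["upper arm", "lower arm"])
        self.polarization = random.uniform(0, 180)  # degrees

class RealityLearningLab:
    """Sketch of on-demand instantiation: the macroscopic scene is
    always rendered, but a photon's FSM exists only once measured."""
    def __init__(self):
        self._photons = {}  # photon id -> FSM, created lazily

    def render_macroscopic(self):
        return "laser beam visible; prisms and mirrors behave classically"

    def measure(self, photon_id):
        # The FSM -- and with it, a definite "path" -- comes into
        # existence here, not when the photon traversed the apparatus.
        if photon_id not in self._photons:
            self._photons[photon_id] = PhotonFSM()
        return self._photons[photon_id]

rll = RealityLearningLab()
print(rll.render_macroscopic())  # macroscopic view needs no photon FSMs
fsm = rll.measure("photon-42")   # FSM instantiated only now
print(fsm.path)                  # the inferred "history", assigned late
```

The point of the sketch is that `measure()` is the first and only place a path is ever assigned; asking for the path "before" measurement is not merely unknown, it is undefined.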

So what is really going on?  There are at least three options:

1. Evidence is rewritten after the fact.  In other words, after the photons pass through the experimental apparatus, the System goes back and rewrites all records of the results so as to create the non-causal anomaly.  Those records consist of the experimenters’ memories, as well as any written or recorded artifacts.  Since the System is in control of all of these items, the complete record of the past can be changed, and no one would ever know.

2. The System selects the operations to match the results, so as to generate the non-causal anomaly.

3. We live in an Observer-created reality and the entire sequence of events is either planned out or influenced by intent, and then just played out by the experimenter and students.

The point is that it requires a computational system to generate such anomalies, not the deterministic, materialistic, continuous system that mainstream science has taught us we live in.

Mystery solved, Digital Consciousness style.

New Hints to How our Reality is Created

There is something fascinating going on in the world, hidden deep beneath the noise of Trump, soccer matches, and Game of Thrones. It is an exploration into the nature of reality – what is making the world tick?

To cut to the chase, it appears that our reality is being dynamically generated based on an ultra-sophisticated algorithm that takes into account not just the usual cause/effect context (as materialists believe), and conscious observation and intent (as idealists believe), but also a complex array of reality configuration probabilities so as to be optimally efficient.

Wait, what?

This philosophical journey has its origins in the well-known double slit experiment, originally performed by Thomas Young in 1801 to demonstrate that light has wavelike properties. In 1961, the experiment was performed with electrons, which also showed wavelike properties. The experimental setup involved shooting electrons through a screen containing two thin vertical slits. The wave nature of the particles was manifested as an interference pattern on a screen placed on the other side of the double slit screen. It was a curious result, but it confirmed quantum theory. In 1974, the experiment was performed one electron at a time, with the same resulting interference pattern, which showed that it was not the electrons interfering with each other; rather, the pattern on the screen followed a probabilistic spatial distribution function. Quantum theory predicted that if a detector were placed at each of the slits so as to determine which slit each electron went through, the interference pattern would disappear and leave just two vertical lines, due to the quantum complementarity principle. This was difficult to create in the lab, but experiments in the 1980s confirmed expectations – the “which way did the particle go” measurement killed the interference pattern. The mystery was that the mere act of observation seemed to change the results of the experiment.
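
The complementarity tradeoff described above can be sketched numerically: with no which-way information, the two slit amplitudes add before being squared, producing fringes; with which-way information, the probabilities add instead, and the fringes vanish. Here is a toy model with illustrative numbers (not taken from any particular experiment):

```python
import cmath
import math

# Toy double-slit geometry (all values illustrative)
wavelength = 500e-9   # 500 nm light
d = 10e-6             # slit separation: 10 micrometers
L_screen = 1.0        # slit-to-screen distance: 1 m
k = 2 * math.pi / wavelength  # wavenumber

def intensity(x):
    """Return (fringes, humps) at screen position x (meters)."""
    r1 = math.sqrt(L_screen**2 + (x - d / 2) ** 2)  # path from slit 1
    r2 = math.sqrt(L_screen**2 + (x + d / 2) ** 2)  # path from slit 2
    a1 = cmath.exp(1j * k * r1)  # amplitude via slit 1
    a2 = cmath.exp(1j * k * r2)  # amplitude via slit 2
    fringes = abs(a1 + a2) ** 2          # no which-way info: amplitudes add
    humps = abs(a1) ** 2 + abs(a2) ** 2  # which-way info: probabilities add
    return fringes, humps

# Screen center: paths are equal, so the amplitudes add constructively...
print(intensity(0.0))
# ...but at x = 2.5 cm (half the fringe spacing lambda*L/d = 5 cm) the two
# amplitudes are nearly opposite in phase and cancel; the which-way
# "humps" term stays flat at both positions.
print(intensity(0.025))
```

Nothing about the detector mechanically "disturbing" the electron appears anywhere in the model; the fringes disappear purely because the which-way case sums probabilities rather than amplitudes.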

So, at this point, people who were interested in how the universe works effectively split into two camps, representing two fundamental philosophies that set the foundation for thinking, analysis, hypothesis, and theorizing:

  1. Objective Materialism
  2. Subjective Idealism

A zillion web pages can be found for each category.

The problem is that most scientists, and probably at least 99% of all outspoken science trolls, believe in Materialism.  And “believe” is the operative word, because there is ZERO proof that Materialism is correct.  Nor is there proof that Idealism is correct.  So, “believe” is all that can be done.  Although, as the massive amount of evidence leans in favor of Idealism, it is fair to say that those believers at least have the scientific method behind them, whereas materialists just have “well gosh, it sure seems like we live in a deterministic world.” What is interesting is that Materialism can be falsified, but I’m not sure that Idealism can be.  The Materialist camp has had plenty of theories to explain the paradox of the double slit experiments – alternative interpretations of quantum mechanics, local hidden variables, non-local hidden variables, a variety of loopholes, or simply the notion that the detector took energy from the particles and impacted the results of the experiment (as has been said, when you put a thermometer in a glass of water, you aren’t measuring the temperature of the water, you are measuring the temperature of the water with a thermometer in it).

Over the years, the double-slit experiment has been progressively refined to the point where most of the materialistic arguments have been eliminated. For example, there is now the delayed choice quantum eraser experiment, which puts the “which way” detectors after the interference screen, making it impossible for the detector to physically interfere with the outcome of the experiment. And, one by one, the hidden variable possibilities and loopholes have been disproven. In 2015, several experiments were performed independently that closed all loopholes simultaneously, with both photons and electrons. Since all of these experimental tests over the years have shown that local realism is false, given the experimenters’ freely made choices, the only other explanation could be what John Bell called superdeterminism: a universe completely devoid of free will, running like clockwork, playing out a fully predetermined script of events. If true, this would bring about the extremely odd result that the universe is set up to ensure that the outcomes of these experiments imply the opposite of how the universe really works. But I digress…

The net result is that Materialism-based theories of reality are being chipped away experiment by experiment.  Those who believe in Materialist dogma are finding themselves painted into an ever-shrinking philosophical corner. Idealism-based theories, on the other hand, are rich with possibilities, very few of which have been falsified experimentally.

Physicist and fellow digital philosopher Tom Campbell has boldly suggested a number of double slit experiments that could probe the nature of reality a little deeper. Tom, like me, believes that consciousness plays a key role in the nature and creation of our reality. So much so that he believes that the outcome of the double slit experiments is due strictly to the conscious observation of the which-way detector data. In other words, if no human (or “sufficiently conscious” entity) observes the data, the interference pattern should remain. Theoretically, one could save the data to a file, store the file on a disk, and hide the disk in a box, and the interference pattern would remain on the screen. Open the box a day later and the interference pattern should automatically disappear, effectively rewriting history with the knowledge of the paths of the particles. His ideas have incurred the wrath of the physics trolls, who are quick to point out that, regardless of whether humans ever read the data, the interference pattern is gone once the detectors record the data. The data can be destroyed, or never even written to a permanent medium, and the interference pattern would still be gone. If these claims are true, they do not prove Materialism at all. But they do imply something very interesting.

From this and many other categories of evidence, it seems highly likely that our reality is being dynamically generated. Quantum entanglement, the quantum Zeno effect, and the observer effect all look very much like artifacts of an efficient system that dynamically creates reality as needed. It is the “as needed” part of this assertion that is most interesting. I shall refer to that which creates reality as “the system.”

Entanglement happens because when a two-particle-generating event occurs, it is efficient to create two particles using the same instance of a finite state machine and, therefore, when it is needed to determine the properties of one, the properties of the other are automatically known, as detailed in my blog post on entanglement. The quantum zeno effect happens because it is more efficient to reset the probability function each time an observation is made, as detailed in my blog post on quantum zeno. And so what about the double slit mystery? To illuminate, see the diagram below.
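
A minimal sketch of the shared-instance idea, with hypothetical class and method names (nothing here comes from an actual physics library, and the anti-correlated spin pair is just the standard textbook example):

```python
import random

class ParticlePairFSM:
    """Hypothetical: one FSM instance backs both particles of a pair.

    Measuring either particle fixes the shared state, so the partner's
    property is known 'instantly' -- no signal between them is needed.
    """
    def __init__(self, seed=None):
        self._rng = random.Random(seed)
        self._spin = None  # undetermined until the first measurement

    def measure_spin(self, which):
        if self._spin is None:
            # First measurement on either particle fixes the shared state
            self._spin = self._rng.choice(["up", "down"])
        # Anti-correlated pair: particle B always reports the opposite
        if which == "A":
            return self._spin
        return "down" if self._spin == "up" else "up"

pair = ParticlePairFSM()          # one instance serves both particles
a = pair.measure_spin("A")
b = pair.measure_spin("B")
print(a, b)  # always opposite, however far apart the particles are carried
```

The "spooky" correlation costs nothing here: both measurements read the same object, so there is no distance for any influence to cross.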

If the physicists are right, reality comes into existence at point 4 in the diagram. Why would that be? The paths of the particles are apparently not needed for the experience of the conscious observer, but rather to satisfy the consistency of the experiment. The fact that the detector registers the data is enough to create the reality. Perhaps the system “realizes” that it is less efficient to leave hanging experiments all over the place until a human “opens the envelope” than it is to instantiate real electron paths despite the unlikely possibility of data deletion. Makes logical sense to me. But it also indicates a sophisticated awareness of all of the probabilities of how the reality can play out vis-à-vis potential human interactions.

The system is really smart.

Collapsing the Objective Collapse Theory

When I was a kid, I liked to collect things – coins, baseball cards, leaves, 45s, what have you. What made the category of collectible particularly enjoyable was the size and variety of the sample space. In my adult years, I’ve learned that collections have a downside – where to put everything? – especially as I continue to downsize my living space in trade for more fun locales, greater views, and better access to beaches, mountains, and wine bars. However, I do still sometimes maintain a collection, such as my collection of other people’s theories that attempt to explain quantum mechanics anomalies without letting go of objective materialism. Yeah, I know, not the most mainstream of collections, and certainly nothing I can sell on eBay, but way more fun than stamps.

The latest in this collection is a set of theories called “objective collapse” theories. These theories try to distance themselves from the ickiness (to materialists) of conscious observer-centric theories like the Copenhagen interpretation of quantum mechanics. They also attempt to avoid the ridiculousness of the exponentially explosive reality creation in the Many Worlds Interpretation (MWI) category. Essentially, the Objective Collapsers argue that there is a wave function describing the probabilities of the properties of objects, but, rather than collapsing due to a measurement or a conscious observation, it collapses on its own, due to some as-yet-undetermined yet deterministic process, according to the probabilities of the wave function.

Huh?

Yeah, I call BS on that, and point simply to the verification of the quantum Zeno effect.  Particles don’t change state while they are under observation. When you stop observing them, then they change state – not at some random time prior, as the Objective Collapse theories would imply, but at the exact time that you stop observing them. In other words, the timing of the observation is correlated with wave function collapse, completely undermining the argument that collapse is probabilistic or deterministic according to some hidden variables. Other better-physics-educated individuals than I (aka physicists) have also called BS on Objective Collapse theories, due to things such as conservation of energy violations. But, of course, there is no shortage of physicists calling BS on other physicists’ theories. That, by itself, would make an entertaining collection.
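
The arithmetic behind the quantum Zeno effect itself is standard textbook material and takes only a few lines. In this toy two-level model (not specific to any interpretation), a system that would flip with certainty by time T is instead measured n times at intervals T/n; each measurement resets the evolution, so the survival probability is [cos²(ωT/n)]ⁿ, which approaches 1 as n grows:

```python
import math

# Two-level system: unwatched, it flips with certainty at T = 1
# (probability of having flipped by time t is sin^2(omega * t)).
omega = math.pi / 2
T = 1.0

def survival(n):
    """Probability the system is still in its initial state after n
    evenly spaced measurements, each of which resets the evolution."""
    theta = omega * T / n
    return math.cos(theta) ** (2 * n)

for n in (1, 10, 100, 1000):
    # Survival climbs toward 1 as the measurements become more frequent:
    # the watched pot never boils.
    print(n, round(survival(n), 4))
```

With a single end-of-interval measurement the system has always flipped; with a thousand measurements it almost never does. That frequency-dependence is exactly the observation-timing correlation argued for above.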

In any case, I would be remiss if I didn’t remind the readers that the Digital Consciousness Theory completely explains all of this stuff. By “stuff,” I mean not just the anomalies, like the quantum zeno effect, entanglement, macroscopic coherence, the observer effect, and quantum retrocausality, but also the debates about microscopic vs. macroscopic, and thought experiments like the time that Einstein asked Abraham Pais whether he really believed that the moon existed only when looked at, to wit:

  • All we can know for sure is what we experience, which is subjective for every individual.
  • We effectively live in a virtual reality, operating in the context of a huge and highly complex digital substrate system. The purpose of this reality is for our individual consciousnesses to learn and evolve and contribute to the greater all-encompassing consciousness.
  • The reason that it feels “physical” or solid and not virtual is due to the consensus of experience that is built into the system.
  • This virtual reality is influenced and/or created by the conscious entities that occupy it (or “live in it” or “play in it”; choose your metaphor).
  • The virtual reality may have started prior to any virtual life developing, or it may have been suddenly spawned and initiated with us avatars representing the various life forms at any point in the past.
  • Some things in the reality need to be there to start; the universe, earth, water, air, and, in the case of the more recent invocation of reality, lots of other stuff. These things may easily be represented in a macroscopic way, because that is all that is needed in the system for the experience. Therefore, there is no need for us to create them.
  • However, other things are not necessary for our high level experience. But they are necessary once we probe the nature of reality, or if we aim to influence our reality. These are the things that are subject to the observer effect. They don’t exist until needed. Subatomic particles and their properties are perfect examples. As are the deep cause and effect relationships between reality elements that are necessary to create the changes that our intent is invoked to bring about.

So there is no need for objective collapse. Things are either fixed (the moon) or potential (the radioactive decay of a particle). The latter are called into existence as needed…

…Maybe


The Observer Effect and Entanglement are Practically Requirements of Programmed Reality

Programmed Reality has been an incredibly successful concept in terms of explaining the paradoxes and anomalies of Quantum Mechanics, including non-Reality, non-Locality, the Observer Effect, Entanglement, and even the Retrocausality of John Wheeler’s Delayed Choice Quantum Eraser experiment.

I came up with those explanations by thinking about how Programmed Reality could explain such curiosities.

But I thought it might be interesting to view the problem in the reverse manner.  If one were to design a universe-simulating Program, what kinds of curiosities might result from an efficient design?  (Note: I fully realize that any entity advanced enough to simulate the universe probably has a computational engine that is far more advanced than we can even imagine; most definitely not of the von Neumann variety.  Yet, we can only work with what we know, right?)

So, if I were to create such a thing, for instance, I would probably model data in the following manner:

For any space unobserved by a conscious entity, there is no sense in creating the reality for that space in advance.  It would unnecessarily consume too many resources.

For example, consider the cup of coffee on your desk.  Is it really necessary to model every single subatomic particle in the cup of coffee in order to interact with it in the way that we do?  Of course not.  The total amount of information contained in that cup of coffee necessary to stimulate our senses in the way that it does (generate the smell that it does; taste the way it does; feel the way it does as we drink it; swish around in the cup the way that it does; have the little nuances, like tiny bubbles, that make it look real; cool at the right rate to make sense; etc.) might be 10 MB or so.  Yet, the total potential information content in a cup of coffee is on the order of 100,000,000,000 MB, so a compression ratio of roughly ten billion to one can be applied to an ordinary object.
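
The arithmetic is worth checking directly; both figures are rough back-of-envelope guesses, not measurements, so only the order of magnitude of the quotient matters:

```python
# Back-of-envelope compression ratio for the coffee-cup example.
full_information_mb = 100_000_000_000  # rough total microscopic content, MB
experiential_mb = 10                   # rough sensory-sufficient content, MB

ratio = full_information_mb / experiential_mb
# The quotient is ~1e+10, i.e. about ten billion to one.
print(f"compression ratio ~ {ratio:.0e} : 1")
```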

But once you decide to isolate an atom in that cup of coffee and observe it, the Program would then have to establish a definitive position for that atom, effectively resulting in the collapse of the wave function, or decoherence.  Moreover, the complete behavior of the atom, at that point, might be forever under the control of the Program.  After all, why delete the model once observed, in the event (probably fairly likely) that it will be observed again at some point in the future?  Thus, the atom would have to be described by a finite state machine.  Its behavior would be decided by randomly picking values of the parameters that drive that behavior, such as atomic decay.  In other words, we have created a little mini finite state machine.

So, the process of “zooming in” on reality in the Program would have to result in exactly the type of behavior observed by quantum physicists.  In other words, in order to be efficient, resource-wise, the Program decoheres only the space and matter that it needs to.

Let’s say we zoom in on two particles at the same time; two that are in close proximity to each other.  Both would have to be decohered by the Program.  The decoherence would result in the creation of two mini finite state machines.  Using the same random number seed for both will cause the state machines to forever behave in an identical manner.

No matter how far apart you take the particles.  i.e…

Entanglement!

So, Observer Effect and Entanglement might both be necessary consequences of an efficient Programmed Reality algorithm.
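
The same-seed mechanism described above can be sketched directly. `DecayFSM`, its parameters, and the decay probability are all invented for illustration; the only real machinery is the seeded pseudo-random number generator:

```python
import random

class DecayFSM:
    """Hypothetical mini state machine for one decohered atom: at each
    step it either survives or decays, driven entirely by its PRNG."""
    def __init__(self, seed):
        self._rng = random.Random(seed)  # deterministic, given the seed
        self.history = []

    def step(self, decay_probability=0.1):
        decayed = self._rng.random() < decay_probability
        self.history.append(decayed)
        return decayed

# Two particles decohered together and given the same random seed:
a = DecayFSM(seed=1234)
b = DecayFSM(seed=1234)
for _ in range(50):
    a.step()
    b.step()

print(a.history == b.history)  # True: identical behavior, step for step
```

Because both machines consume the same pseudo-random stream, their histories match forever, and nothing about that correlation depends on where the two particles are "located."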


Rewriting the Past

“I don’t believe in yesterday, by the way.”
-John Lennon

The past is set in stone, right?  Everything we have learned tells us that you cannot change the past, 88-MPH DeLoreans notwithstanding.

However, it would probably surprise you to learn that many highly respected scientists, as well as a few out on the fringe, are questioning that assumption, based on real evidence.

For example, leading stem cell scientist Dr. Robert Lanza posits that the past does not really exist until properly observed.  His theory of Biocentrism says that the past is just as malleable as the future.

Specific experiments in quantum mechanics appear to support this conjecture.  In a realization of Wheeler’s “Delayed Choice” thought experiment, “scientists in France shot photons into an apparatus, and showed that what they did could retroactively change something that had already happened.” (Science 315, 966, 2007)

Paul Davies, renowned physicist from the Australian Centre for Astrobiology at Macquarie University in Sydney, suggests that conscious observers (us) can effectively reach back in history to “exert influence” on early events in the universe, including even the first moments of time.  As a result, the universe would be able to “fine-tune” itself to be suitable for life.

Prefer the Many Worlds Interpretation (MWI) of quantum mechanics over the Copenhagen one?  If that theory is correct, physicist Saibal Mitra from the University of Amsterdam has shown how we can change the past by forgetting.  Effectively, if the collective observers’ memory is reset prior to some event, the state of the universe becomes “undetermined” and can follow a different path from before.  Check out my previous post on that one.

Alternatively, you can disregard the complexities of quantum mechanics entirely.  The results of some macro-level experiments twist our perceptions of reality even more.  Studies by Helmut Schmidt, Elmar Gruber, Brenda Dunne, Robert Jahn, and others have shown, for example, that humans are actually able to influence past events (aka retropsychokinesis, or RPK), such as pre-recorded (and previously unobserved) random number sequences.

Benjamin Libet, a pioneering scientist in the field of human consciousness at the University of California, San Francisco, is well known for his controversial experiments that seem to show reverse causality, or that the brain demonstrates awareness of actions that will occur in the near future.  To put it another way, actions that occur now create electrical brain activity in the past.

And then, of course, there is time travel.  Time travel into the future is a fact; just ask any astronaut, all of whom have traveled fractions of a second into the future as a side effect of high-speed travel.  Stephen Hawking predicts much more significant time travel into the future.  In the future.  But what about the past?  It turns out there is nothing in the laws of physics that prevents it.  Theoretical physicist Kip Thorne designed a workable time machine that could send you into the past.  And traveling to the past, of course, provides an easy mechanism for changing it.  Unfortunately, this requires exotic matter and a solution to the Grandfather paradox (MWI to the rescue again here).
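
The size of that kinematic effect is easy to estimate with special relativity. This sketch uses an approximate ISS orbital speed and ignores the comparable-sized gravitational (general-relativistic) correction, so treat it as an order-of-magnitude check only:

```python
import math

# Special-relativistic time dilation for an ISS-speed traveler.
c = 299_792_458.0   # speed of light, m/s
v = 7_660.0         # approximate ISS orbital speed, m/s
gamma = 1 / math.sqrt(1 - (v / c) ** 2)  # Lorentz factor

six_months = 182.5 * 86400           # mission duration, in seconds
lag = six_months * (1 - 1 / gamma)   # how far the traveler slips into the future
print(f"{lag * 1000:.2f} ms over six months")  # on the order of milliseconds
```

So a six-month stay in orbit buys a few milliseconds of forward time travel; small, but real and routinely measured.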

None of this is a huge surprise to me, since I question everything about our conventional views of reality.  Consider the following scenario in a massively multiplayer online role playing game (MMORPG) or simulation.  The first time someone plays the game, or participates in the simulation, there is an assumed “past” to the construct of the game.  Components of that past may be found in artifacts (books, buried evidence, etc.) scattered throughout the game.  Let’s say that the evidence reports that the Kalimdors and Northrendians were at war during year 1999, but the evidence has yet to be found by a player.  A game patch could easily change the date to 2000, thereby changing the past, and no one would be the wiser.  But what if someone had found the artifact, thereby setting the past in stone?  That patch could still be applied, but it would only be effective if all players who had knowledge of the artifact were forced to forget.  Science fiction, right?  No longer, thanks to an emerging field of cognitive research.  Two years ago, scientists were able to erase selected memories in mice.  Insertion of false memories is not far behind.  This will eventually be perfected and applied to humans.
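
The patch-only-if-unobserved rule reduces to a few lines of backend logic. All of the class, field, and lore names below are invented for illustration; no actual game engine works this way as far as I know:

```python
class GameWorld:
    """Hypothetical MMORPG backend: a lore fact remains patchable only
    while no player has observed the artifact that records it."""
    def __init__(self):
        self.lore = {"war_year": 1999}  # the game's assumed "past"
        self.observed = set()           # facts some player has seen

    def read_artifact(self, player, key):
        self.observed.add(key)  # the past is now 'set in stone'
        return self.lore[key]

    def patch(self, key, new_value):
        if key in self.observed:
            return False  # someone would notice the rewrite; patch refused
        self.lore[key] = new_value
        return True

world = GameWorld()
print(world.patch("war_year", 2000))        # nobody has looked yet: succeeds
world.read_artifact("player1", "war_year")  # a player finds the artifact
print(world.patch("war_year", 2001))        # history is now fixed: refused
```

Forcing players to forget would then be modeled as clearing the `observed` set, which is exactly what makes the memory-erasure research above relevant to the analogy.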

At some point in our future (this century), we will be able to snort up a few nanobots, which will archive our memories, download a new batch of memories to the starting state of a simulation, and run the simulation.  When it ends, the nanobots will restore our old memories.

Or maybe this happened at some point in our past and we are really living the simulation.  There is really no way to tell.

No wonder the past seems so flexible.
