Wigner’s Friend likes Digital Consciousness

Apparently your reality may be different from mine. Wait, what???

Several recent studies have demonstrated to an extremely high degree of certainty that objective reality does not exist. This year, adding to the mounting pile of evidence for a consciousness-centric reality, came the results of an experiment that, for the first time, tested the highly paradoxical Wigner’s Friend thought experiment. The conclusion was that your reality and my reality can actually be different. I don’t mean different in the sense that your rods and cones have a different sensitivity, or that your brain interprets things differently, but fundamentally, intrinsically different. Ultimately, things may happen in your reality that might not happen in my reality and vice versa.

Almost sounds like a dream, doesn’t it? Or like you and I are playing some kind of virtual reality game and the information stream that is coming into your senses via your headset or whatever is different from the information stream coming into mine.

BINGO! That’s Digital Consciousness in a nutshell.

Eugene Paul Wigner received the Nobel Prize in Physics in 1963 for his work on quantum mechanics and the structure of the atom. More importantly, perhaps, he, along with Max Planck, Niels Bohr, John Wheeler, Kurt Gödel, Erwin Schrödinger, and many other forward-thinking scientists and mathematicians, opposed the common materialistic worldview shared by most scientists of his day (not to mention most scientists of today). As such, he was an inspiration for, and a forerunner of, consciousness-centric philosophies such as my Digital Consciousness, Donald Hoffman’s MUI theory, and Tom Campbell’s My Big TOE.

As if Schrödinger’s Cat weren’t enough to bend people’s minds, Wigner raised the stakes of quantum weirdness in 1961 when he proposed a thought experiment now referred to as “Wigner’s Friend.” The scenario involves two people, let’s say Wigner and his friend. The friend is sealed in a lab, hidden from Wigner, where he observes something like Schrödinger’s cat, itself further hidden in a box. When the friend opens the box, the wave function collapses for him, establishing whether the cat is alive or dead. But to Wigner, standing outside the entire subsystem, the friend and the cat together remain in superposition. Only when Wigner opens the door to see his friend and the result of the cat experiment does his wave function collapse. Therefore, Wigner and his friend have differing accounts of when reality became realized; hence, different realities.

Fast forward to 2019, and scientists (Massimiliano Proietti, Alexander Pickston, Francesco Graffitti, Peter Barrow, Dmytro Kundys, Cyril Branciard, Martin Ringbauer, and Alessandro Fedrizzi) at Heriot-Watt University in Edinburgh were finally able to test the paradox using entangled photons, lasers, and polarizers. The results confirmed Wigner’s hypothesis to a certainty of 5 standard deviations, meaning the odds that the result is a statistical fluke are about 1 in 3.5 million. In other words, objective reality doesn’t exist, and your reality and mine can differ!

Of course, I live for this stuff, because it simply adds one more piece of supporting evidence to my theory, Digital Consciousness. And it adds yet another nail in the coffin of that ancient scientific religion, materialism.

How does it work?

Digital Consciousness asserts that consciousness is primary; hence, all that we can truly know is what we each experience subjectively. This experiment doesn’t necessarily prove that the fundamental construct of reality is information, but it is a lot more plausible that individual experiences based on virtual simulations are at the root of this paradox than, say, a complex violation of Hilbert space that allows parallel realities based on traditional physical fields to intermingle. As an analogy, imagine that you are playing an MMORPG (a video game with many other simultaneous players): it isn’t difficult to see how each individual could be having a slightly different experience, based perhaps on their skill level or equipment. As information is the carrier of the experience, the information entering the consciousness of one player could easily be slightly different from the information entering the consciousness of another player. This is by far the simplest explanation and, by Occam’s Razor, it supports my theory.
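As a toy version of that analogy (my own sketch with invented names, far simpler than any real game server), here is how one shared world could stream slightly different realities to different players:

```python
import random

class World:
    """One shared underlying state, rendered separately for each player."""
    def __init__(self):
        self.state = {"cat": "unresolved"}      # detail not yet pinned down

    def render_for(self, player: str, rng: random.Random) -> dict:
        # Each client gets its own information stream; unresolved details
        # are filled in per observer, only when that observer looks.
        view = dict(self.state)
        if view["cat"] == "unresolved":
            view["cat"] = rng.choice(["alive", "dead"])
        return view

world = World()
wigner = world.render_for("Wigner", random.Random(1))
friend = world.render_for("Friend", random.Random(2))
# wigner["cat"] and friend["cat"] may differ: two realities, one engine.
```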

Too bad Wigner isn’t alive to see this experiment, or to ponder Digital Consciousness theory. But I’m sure his consciousness is having a good laugh.

 

Quantum Retrocausality Explained

A recent quantum mechanics experiment, conducted at the University of Queensland in Australia, seems to defy causal order, baffling scientists. In this post, however, I’ll explain why this isn’t anomalous at all; at least, not if you come to accept the Digital Consciousness Theory (DCT) of reality. It boils down to virtually the same explanation that I gave seven years ago for Daryl Bem’s seemingly anomalous precognition studies.

DCT says that subatomic particles are controlled by finite state machines (FSMs), which are tiny components of our Reality Learning Lab (RLL, aka “reality”). The finite state machines that control the behavior of the atoms or photons in the experiment don’t really come into existence until the measurement is made, which effectively means that the atom or photon doesn’t really exist until it needs to. In RLL, the portion of the system that describes the operation of the laser, the prisms, and the mirrors, at least from the perspective of the observer, is defined and running, but only at a macroscopic level. It only needs to show the observer things that are consistent with the expected performance of those components and the RLL laws of physics. So, for example, we can see the laser beam. Only when we need to determine something at a deeper level, like the path of a particular photon, is a finite state machine for that photon instantiated. And in these retrocausality experiments, like the delayed-choice quantum eraser experiments and this one done in Queensland, the FSMs only start when the observation is made, which is after the photon has gone through the apparatus; hence, it never really had a path. It didn’t need one. The path can be inferred later by measurement, but it is incorrect to think that that inference was objective reality. There was no path, and so there was no real deterministic order of operations.

There are only the attributes of the photon determined at measurement time, when its finite state machine comes into existence. Again, the photon is just data, described by the attributes of the finite state machine, so this makes complete sense. Programmatically, the FSM did not exist before the individuated consciousness required a measurement because it didn’t need to. Therefore, the inference of “which operation came first” is only that – an inference, not a true history.
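As a rough sketch of what “instantiated only when measured” could look like computationally (my own illustration with invented names; DCT itself doesn’t specify code), consider:

```python
import random

class PhotonFSM:
    """Created only at measurement time; before that, no path exists."""
    def __init__(self, rng: random.Random):
        # Attributes are decided now, consistent with the apparatus and the
        # RLL laws of physics -- not retrieved from any stored history.
        self.polarization = rng.choice(["H", "V"])

class Apparatus:
    """Macroscopic parts (laser, prisms, mirrors) are modeled up front;
    individual photons are not."""
    def __init__(self):
        self._photon = None            # no FSM yet: the photon "isn't there"

    def measure(self) -> str:
        if self._photon is None:       # lazy instantiation at observation time
            self._photon = PhotonFSM(random.Random())
        return self._photon.polarization

lab = Apparatus()
# Until measure() is called, any "path" through the lab is only an
# inference; the FSM that would define one does not exist.
print(lab.measure())                   # e.g. "H"
```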

So what is really going on?  There are at least three options:

1. Evidence is rewritten after the fact. In other words, after the photons pass through the experimental apparatus, the System goes back and rewrites all records of the results, so as to create the non-causal anomaly. Those records consist of the experimenters’ memories, as well as any written or recorded artifacts. Since the System is in control of all of these items, the complete record of the past can be changed, and no one would ever know.

2. The System selects the operations to match the results, so as to generate the non-causal anomaly.

3. We live in an Observer-created reality and the entire sequence of events is either planned out or influenced by intent, and then just played out by the experimenter and students.

The point is that generating such anomalies requires a computational system, not the deterministic, materialistic, continuous system that mainstream science has taught us we live in.

Mystery solved, Digital Consciousness style.

Why the Universe Only Needs One Electron

According to renowned physicist Richard Feynman (recounted during his 1965 Nobel lecture)…

“I received a telephone call one day at the graduate college at Princeton from Professor Wheeler, in which he said, ‘Feynman, I know why all electrons have the same charge and the same mass.’ ‘Why?’ ‘Because, they are all the same electron!’”

John Wheeler’s idea was that this single electron moves through spacetime in a continuous world line, like a big knot, while our observation of many identical but separate electrons is just an illusion, because we only see a “slice” through that knot. Feynman was quick to point out a flaw in the idea; namely, that if this were the case, we should see as many positrons (electrons moving backward in time) as electrons, which we don’t.

But Wheeler, also known for now-accepted concepts like wormholes, quantum foam, and “it from bit,” may have been right on the money with this seemingly outlandish idea.

As I have amassed a tremendous set of evidence that our reality is digital and programmatic (some of which you can find here, as well as in many other blog posts), I will assume that to be the case and proceed from that assumption.

Next, we need to invoke the concept of a Finite State Machine (FSM), which is simply a computational system with a finite set of states, where the rules that determine the next state are a function of the current state and one or more input events. The FSM may also generate a number of “outputs,” which are likewise logical functions of the current state.

The following is an abstract example of a finite state machine:
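A minimal sketch in Python captures the idea (the “light switch” states, events, and transition table here are my own invented illustration):

```python
from typing import Dict, Tuple

# The transition table maps (current_state, event) -> next_state.
Transition = Dict[Tuple[str, str], str]

class FiniteStateMachine:
    def __init__(self, initial: str, table: Transition):
        self.state = initial
        self.table = table

    def step(self, event: str) -> str:
        # The next state is a pure function of (current state, input event);
        # unknown combinations leave the state unchanged.
        self.state = self.table.get((self.state, event), self.state)
        return self.state

# A toy two-state machine: a light that toggles on each "press" event.
light = FiniteStateMachine("off", {("off", "press"): "on",
                                   ("on", "press"): "off"})
light.step("press")   # -> "on"
light.step("press")   # -> "off"
```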

A computational system, like that laptop on your desk that the cat sits on, is by itself a finite state machine. Each clock cycle gives the system a chance to compute a new state, which is defined by a logical combination of the current state and all of the input changes. A video game, a flight simulator, and a trading system all work the same way. The state changes in a typical laptop about 4 billion times per second. It may actually take many of these 250-picosecond clock cycles to result in an observable difference in the output of the program, such as the movement of your avatar on the screen. Within the big, complex laptop finite state machine, many other FSMs are running, such as each of those dozens or hundreds of processes that you see when you click on your “activity monitor.” And within each of those FSMs are many others, such as the method (or “subprogram”) that is invoked when it is necessary to generate the appearance of a new object on the screen.

There is also a concept in computer science called an “instance.” It is similar to the idea of a template. As an analogy, consider the automobile. Every Honda that rolls off the assembly line is different, even if it is the same model with the same color and same set of options. The reason it is different from another with the exact same specifications is that there are microscopic differences in every part that goes into each car. In fact, there are differences in the way that every part is connected between two cars of equal specifications. However, imagine if every car were exactly the same, down to the molecule, atom, particle, string, or what have you. Then we could say that each car is an instance of its template.

This would also be the case in a computer-based virtual reality. Every similar car generated in the computer program is an instance of the computer model of that car, which, by the way, is a finite state machine. Each instance can be given different attributes, however, such as color, loudness, or power. In some cases, such as a virtual racing game where the idea of a car is central to the game, each car may be rather unique in the way that it behaves, or responds to the inputs from the controller, so there may be many different FSMs for these different types of cars. However, for any program, there will be FSMs that are so fundamental that there only needs to be one of that type of object; for example, a leaf.

In our programmatic reality (what I like to call the Reality Learning Lab, or RLL), there are also FSMs that are so fundamental that there only needs to be one FSM for that type of object. And every object of that type is merely an instance of that FSM. Such as an electron.

An electron is fundamental. It is a perfect example of an object that should be modeled by a finite state machine. There is no reason for any two electrons to have different rules of behavior. They may have different starting conditions and different influences throughout their lifetime, but they would react to those conditions and influences with exactly the same rules. Digital Consciousness Theory provides the perfect explanation for this. Electrons are simply instances of the electron finite state machine. There is only one FSM for the electron, just as Wheeler suspected. But there are many instances of it. Each RLL clock cycle will result in the update of the state of each electron instance in our apparent physical reality.
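To make the instance idea concrete, here is a minimal sketch (my own illustration; the attributes and update rule are invented, not actual RLL code): one ElectronFSM definition plays the role of Wheeler’s single electron, and every electron we observe is just an instance of it.

```python
class ElectronFSM:
    """The single electron 'template': one set of rules for all electrons."""
    def __init__(self, position: float, momentum: float):
        # Different starting conditions per instance...
        self.position = position
        self.momentum = momentum

    def step(self, field: float) -> None:
        # ...but exactly the same update rule for every instance
        # (a toy force law, applied once per RLL clock cycle).
        self.momentum += field
        self.position += self.momentum

# Many instances, one FSM definition -- "all the same electron."
electrons = [ElectronFSM(position=float(x), momentum=0.0) for x in range(10)]
for e in electrons:
    e.step(field=0.01)   # each clock cycle updates every instance
```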

So, in a very real sense, Wheeler was right. There is no need for anything other than the single electron FSM. All of the electrons that we experience are just instances of it and follow exactly the same rules. Anything else would be inefficient, and ATTI (All That There Is, the all-encompassing consciousness) is the ultimate in efficiency.

 

New Hints to How our Reality is Created

There is something fascinating going on in the world, hidden deep beneath the noise of Trump, soccer matches, and Game of Thrones. It is an exploration into the nature of reality – what is making the world tick?

To cut to the chase, it appears that our reality is being dynamically generated based on an ultra-sophisticated algorithm that takes into account not just the usual cause/effect context (as materialists believe), and conscious observation and intent (as idealists believe), but also a complex array of reality configuration probabilities so as to be optimally efficient.

Wait, what?

This philosophical journey has its origins in the well-known double slit experiment, originally done by Thomas Young in 1801 to determine that light had wavelike properties. In 1961, the experiment was performed with electrons, which also showed wavelike properties. The experimental setup involved shooting electrons through a screen containing two thin vertical slits. The wave nature of the particles was manifested in the form of an interference pattern on a screen that was placed on the other side of the double slit screen. It was a curious result but confirmed quantum theory. In 1974, the experiment was performed one electron at a time, with the same resulting interference pattern, which showed that it was not the electrons that interfered with each other, but rather that each electron followed a probabilistic spatial distribution, which built up the pattern on the screen.

Quantum theory predicted that if a detector were placed at each of the slits so as to determine which slit each electron went through, the interference pattern would disappear and leave just two vertical lines, due to the quantum complementarity principle. This was difficult to create in the lab, but experiments in the 1980s confirmed expectations: the “which way did the particle go” measurement killed the interference pattern. The mystery was that the mere act of observation seemed to change the results of the experiment.
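To see the difference numerically, here is a toy illustration (idealized textbook math, my own sketch, not the actual experimental data): without which-way information the amplitudes add and fringes appear; with it, only the probabilities add.

```python
import numpy as np

x = np.linspace(-5, 5, 1001)        # position along the screen (arbitrary units)
d, k = 1.0, 6.0                      # toy slit separation and wavenumber

# Amplitude envelopes from each slit, modeled as simple Gaussians.
a1 = np.exp(-(x - d / 2) ** 2)
a2 = np.exp(-(x + d / 2) ** 2)

# No which-way info: complex amplitudes add, producing interference fringes.
fringes = np.abs(a1 + a2 * np.exp(1j * k * x)) ** 2

# Which-way info recorded: probabilities add, the fringes vanish, and only
# the two single-slit distributions remain.
no_fringes = a1 ** 2 + a2 ** 2
```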

So, at this point, people who were interested in how the universe works effectively split into two camps, representing two fundamental philosophies that set the foundation for thinking, analysis, hypothesis, and theorizing:

  1. Objective Materialism
  2. Subjective Idealism

A zillion web pages can be found for each category.

The problem is that most scientists, and probably at least 99% of all outspoken science trolls, believe in Materialism. And “believe” is the operative word, because there is ZERO proof that Materialism is correct. Nor is there proof that Idealism is correct. So “believe” is all that can be done. Although, as the massive amount of evidence leans in favor of Idealism, it is fair to say that those believers at least have the scientific method behind them, whereas materialists just have “well gosh, it sure seems like we live in a deterministic world.” What is interesting is that Materialism can be falsified, but I’m not sure that Idealism can be. The Materialist camp has had plenty of theories to explain the paradox of the double slit experiments: alternative interpretations of quantum mechanics, local hidden variables, non-local hidden variables, a variety of loopholes, or simply the notion that the detector took energy from the particles and impacted the results of the experiment. (As has been said, when you put a thermometer in a glass of water, you aren’t measuring the temperature of the water; you are measuring the temperature of the water with a thermometer in it.)

Over the years, the double-slit experiment has been progressively refined to the point where most of the materialistic arguments have been eliminated. For example, there is now the delayed-choice quantum eraser experiment, which puts the “which way” detectors after the interference screen, making it impossible for the detector to physically interfere with the outcome of the experiment. And, one by one, all of the hidden variable possibilities and loopholes have been disproven. In 2015, several experiments were performed independently that closed all loopholes simultaneously with both photons and electrons. Since all of these experimental tests over the years have shown that objective realism is false and that nature is non-local, given the experimenters’ choices, the only other explanation could be what John Bell called superdeterminism: a universe completely devoid of free will, running like clockwork, playing out a fully predetermined script of events. If true, this would bring about the extremely odd result that the universe is set up to ensure that the outcomes of these experiments imply the opposite of how the universe really works. But I digress…

The net result is that Materialism-based theories of reality are being chipped away experiment by experiment. Those who believe in Materialist dogma are finding themselves painted into an ever-shrinking philosophical corner. Idealism-based theories, on the other hand, are rich with possibilities, very few of which have been falsified experimentally.

Physicist and fellow digital philosopher Tom Campbell has boldly suggested a number of double slit experiments that could probe the nature of reality a little deeper. Tom, like me, believes that consciousness plays a key role in the nature and creation of our reality. So much so that he believes that the outcome of the double slit experiments is due strictly to the conscious observation of the which-way detector data. In other words, if no human (or “sufficiently conscious” entity) observes the data, the interference pattern should remain. Theoretically, one could save the data to a file, store the file on a disk, hide the disk in a box, and the interference pattern would remain on the screen. Open the box a day later, and the interference pattern should automatically disappear, effectively rewriting history with the knowledge of the paths of the particles. His ideas have incurred the wrath of the physics trolls, who are quick to point out that regardless of whether humans ever read the data, the interference pattern is gone once the detectors record it. The data can be destroyed, or never even written to a permanent medium, and the interference pattern will still be gone. If these claims are true, they do not prove Materialism at all. But they do imply something very interesting.

From this and many other categories of evidence, it is highly likely that our reality is being dynamically generated. Quantum entanglement, the quantum zeno effect, and the observer effect all look very much like artifacts of an efficient system that dynamically creates reality as needed. It is the “as needed” part of this assertion that is most interesting. I shall refer to that which creates reality as “the system.”

Entanglement happens because, when a two-particle-generating event occurs, it is efficient to create both particles using the same instance of a finite state machine; therefore, when the properties of one need to be determined, the properties of the other are automatically known, as detailed in my blog post on entanglement. The quantum zeno effect happens because it is more efficient to reset the probability function each time an observation is made, as detailed in my blog post on quantum zeno. A code sketch of the entanglement point follows.
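Here is a minimal sketch of that claim (my own illustration; the property and rule are invented): two particles share one FSM instance, so the first measurement fixes both outcomes at once, and nothing needs to “travel” between them.

```python
import random

class PairFSM:
    """One FSM instance backs both members of an entangled pair."""
    def __init__(self):
        self._spin = None                  # undetermined until first needed

    def measure(self, which: str) -> str:
        if self._spin is None:             # decided lazily, at first measurement
            self._spin = random.choice(["up", "down"])
        if which == "A":
            return self._spin
        return "down" if self._spin == "up" else "up"   # perfectly anti-correlated

pair = PairFSM()                           # one instance serves both particles
a = pair.measure("A")                      # first query fixes the shared state
b = pair.measure("B")                      # automatically the opposite
```

And so what about the double slit mystery? To illuminate, see the diagram below.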

If the physicists are right, reality comes into existence at point 4 in the diagram. Why would that be? The paths of the particles are apparently not needed for the experience of the conscious observer, but rather to satisfy the consistency of the experiment. The fact that the detector registers the data is enough to create the reality. Perhaps the system “realizes” that it is less efficient to leave hanging experiments all over the place until a human “opens the envelope” than it is to instantiate real electron paths, despite the unlikely possibility of data deletion. Makes logical sense to me. But it also indicates a sophisticated awareness of all of the probabilities of how the reality can play out vis-à-vis potential human interactions.

The system is really smart.

Disproving the Claim that the LHC Disproves the Existence of Ghosts

Recent articles in dozens of online magazines shout things like: “The LHC Disproves the Existence of Ghosts and the Paranormal.”

To which I respond: LOLOLOLOLOL

There are so many things wrong with this backwards scientific thinking, I almost don’t know where to start.  But here are a few…

1. The word “disproves” doesn’t belong here. It is unscientific at best. Maybe use “evidence against one possible explanation for ghosts” – I can even begin to appreciate that. But if I can demonstrate even one potential mechanism for the paranormal that the LHC couldn’t detect, you cannot use the word “disprove.” And here is one potential mechanism: an unknown force that the LHC can’t explore, because its experiments are designed only to measure interactions via the four forces physicists are aware of.

The smoking gun is Brian Cox’s statement “If we want some sort of pattern that carries information about our living cells to persist then we must specify precisely what medium carries that pattern and how it interacts with the matter particles out of which our bodies are made. We must, in other words, invent an extension to the Standard Model of Particle Physics that has escaped detection at the Large Hadron Collider. That’s almost inconceivable at the energy scales typical of the particle interactions in our bodies.” So, based on that statement, here are a few more problems…

2. “almost inconceivable” is logically inconsistent with the term “disproves.”

3. “If we want some sort of pattern that carries information about our living cells to persist…” is an invalid assumption. We do not need information about our cells to persist in a traditional physical medium for paranormal effects to have a way to propagate. They can propagate by a non-traditional (unknown) medium, such as an information storage mechanism operating outside of our classically observable means. Imagine telling a couple of scientists just 200 years ago about how people can communicate instantaneously via radio waves. Their response would be “no, that is impossible because our greatest measurement equipment has not revealed any mechanism that allows information to be transmitted in that manner.” Isn’t that the same thing Brian Cox is saying?

4. The underlying assumption is that we live in a materialist reality. Aside from the fact that quantum mechanics experiments have disproven this (and yes, I am comfortable using that word), a REAL scientist should allow for the possibility that consciousness is independent of grey matter and devise experiments to support or invalidate such hypotheses. One clear possibility is the simulation argument. Out-of-band signaling is an obvious and easy mechanism for paranormal effects. Unfortunately, the REAL scientists (such as Anton Zeilinger) are not the ones who get most of the press.

5. “That’s almost inconceivable at the energy scales typical of the particle interactions in our bodies” is also bad logic. It assumes that we fully understand the energy scales typical of the particle interactions in our bodies. If scientific history has shown us anything, it is that there is more that we don’t understand than there is that we do.


Collapsing the Objective Collapse Theory

When I was a kid, I liked to collect things – coins, baseball cards, leaves, 45s, what have you. What made the category of collectible particularly enjoyable was the size and variety of the sample space. In my adult years, I’ve learned that collections have a downside – where to put everything? – especially as I continue to downsize my living space in trade for more fun locales, greater views, and better access to beaches, mountains, and wine bars. However, I do still sometimes maintain a collection, such as my collection of other people’s theories that attempt to explain quantum mechanics anomalies without letting go of objective materialism. Yeah, I know, not the most mainstream of collections, and certainly nothing I can sell on eBay, but way more fun than stamps.

The latest in this collection is a set of theories called “objective collapse” theories. These theories try to distance themselves from the ickiness (to materialists) of conscious-observer-centric theories like the Copenhagen interpretation of quantum mechanics. They also attempt to avoid the ridiculousness of the exponentially explosive reality-creation theories in the Many Worlds Interpretation (MWI) category. Essentially, the Objective Collapsers argue that there is a wave function describing the probabilities of properties of objects but that, rather than collapsing due to a measurement or a conscious observation, it collapses on its own, due to some as-yet-undetermined yet deterministic process, according to the probabilities of the wave function.

Huh?

Yeah, I call BS on that, and point simply to the verification of the Quantum Zeno effect. Particles don’t change state while they are under observation; they change state only when you stop observing them, not at some random time prior, as Objective Collapse theories would imply. In other words, the timing of the observation is correlated with wave function collapse, completely undermining the argument that collapse is probabilistic or deterministic according to some hidden variables. Better-physics-educated individuals than I (aka physicists) have also called BS on Objective Collapse theories, due to other issues such as conservation of energy violations. But of course, there is no shortage of physicists calling BS on other physicists’ theories. That, by itself, would make an entertaining collection.
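On the Zeno point, the standard toy calculation makes the effect easy to see (my own sketch of an idealized two-level system, not tied to any particular experiment): splitting a fixed interval into more and more projective measurements freezes the evolution.

```python
import numpy as np

# Idealized two-level system: the probability of still being in the initial
# state after evolving for time dt and then being measured is cos^2(omega*dt).
omega, T = 1.0, 1.0                  # Rabi-like frequency; total elapsed time

def survival(n_measurements: int) -> float:
    dt = T / n_measurements          # evolve, project, repeat n times
    return np.cos(omega * dt) ** (2 * n_measurements)

print(survival(1))                   # ~0.29: barely watched, the state drifts away
print(survival(100))                 # ~0.99: frequent observation freezes it
```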

In any case, I would be remiss if I didn’t remind the readers that the Digital Consciousness Theory completely explains all of this stuff. By “stuff,” I mean not just the anomalies, like the quantum zeno effect, entanglement, macroscopic coherence, the observer effect, and quantum retrocausality, but also the debates about microscopic vs. macroscopic, and thought experiments like the time that Einstein asked Abraham Pais whether he really believed that the moon existed only when looked at, to wit:

  • All we can know for sure is what we experience, which is subjective for every individual.
  • We effectively live in a virtual reality, operating in the context of a huge and highly complex digital substrate system. The purpose of this reality is for our individual consciousnesses to learn and evolve and contribute to the greater all-encompassing consciousness.
  • The reason that it feels “physical” or solid and not virtual is due to the consensus of experience that is built into the system.
  • This virtual reality is influenced and/or created by the conscious entities that occupy it (or “live in it” or “play in it”; choose your metaphor).
  • The virtual reality may have started prior to any virtual life developing, or it may have been suddenly spawned and initiated with us avatars representing the various life forms at any point in the past.
  • Some things in the reality need to be there to start; the universe, earth, water, air, and, in the case of the more recent invocation of reality, lots of other stuff. These things may easily be represented in a macroscopic way, because that is all that is needed in the system for the experience. Therefore, there is no need for us to create them.
  • However, other things are not necessary for our high-level experience. But they are necessary once we probe the nature of reality, or if we aim to influence our reality. These are the things that are subject to the observer effect. They don’t exist until needed. Subatomic particles and their properties are perfect examples, as are the deep cause-and-effect relationships between reality elements that are necessary to create the changes our intent brings about.

So there is no need for objective collapse. Things are either fixed (the moon) or potential (the radioactive decay of a particle). The latter are called into existence as needed…

…Maybe


Comments on the Possibilist Transactional Interpretation of Quantum Mechanics, aka Models vs. Reality

Reality is what it is. Everything else is just a model.

From Plato to Einstein to random humans like myself, we are all trying to figure out what makes this world tick. Sometimes I think I get it pretty well, but I know that I am still a product of my times, and therefore my view of reality is seen through the lens of today’s technology and state of scientific advancement. As such, I would be a fool to think that I have it all figured out. As should everyone else.

At one point in our recent past, human scientific endeavor wasn’t so humble. Just a couple hundred years ago, we thought that atoms were the ultimate building blocks of reality and that everything could ultimately be described by equations of mechanics. How naïve that was, as 20th century physics made abundantly clear. But even then, the atom-centric view of physics was not reality. It was simply a model. So is every single theory and equation that we use today, regardless of whether it is called a theory or a law: relativistic motion, Schrödinger’s equation, String Theory, the 2nd Law of Thermodynamics – all models of some aspect of reality.

We seek to understand our world and derive experiments that push forward that knowledge. As a result of the experiments, we define models to best fit the data.

One of the latest comes from quantum physicist Ruth Kastner in the form of a model that better explains the anomalies of quantum mechanics. She calls the model the Possibilist Transactional Interpretation of Quantum Mechanics (PTI), an updated version of John Cramer’s Transactional Interpretation of Quantum Mechanics (TIQM, or TI for short) proposed in 1986. The transactional nature of the theory comes from the idea that the wavefunction collapse behaves like a transaction in that there is an “offer” from an “emitter” and a “confirmation” from an “absorber.” In the PTI enhancement, the offers and confirmations are considered to be outside of normal spacetime and therefore the wavefunction collapse creates spacetime rather than occurs within it. Apparently, this helps to explain some existing anomalies, like uncertainty and entanglement.

This is all cool and seems to enhance our understanding of how QM works. However, it is STILL just a model, and a fairly high-level one at that. All models are approximations: descriptions of reality tuned to match the experimental evidence as closely as possible.

Underneath all models exist deeper models (e.g. string theory), many not yet supported by real evidence. Underneath those models may exist even deeper models. Consider this layering…

[Figure: a stack of model layers, each deeper layer explaining the one above, with “digital consciousness” as the bottom layer.]

Every layer contains models that may be considered progressively closer to reality. Each layer can explain the layer above it. But it isn’t until you get to the bottom layer that you can say you’ve hit reality. I’ve identified that layer as “digital consciousness,” the working title for my next book. It may also turn out to be a model, but it feels distinctly different from the other layers in that, by itself, it is no longer an approximation of reality, but rather a complete and comprehensive yet elegantly simple framework that can be used to describe every single aspect of reality.

For example, in Digital Consciousness, everything is information. The “offer” is then “the need to collapse the wave function based on the logic that there is now an existing conscious observer who depends on it.” The “confirmation” is the collapse – the decision made from probability space that defines positions, spins, etc. This could also be seen as the next state of the state machine that defines such behavior. The emitter and absorber are both parts of the “system”, the global consciousness that is “all that there is.” So, if experimental evidence ultimately demonstrates that PTI is a more accurate interpretation of QM, it will nonetheless still be a model and an approximation. The bottom layer is where the truth is.
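To make that mapping concrete, here is a minimal sketch (my own invented names and rules; neither Kastner’s formalism nor Digital Consciousness specifies this code) of the offer/confirmation cycle as a state machine transition:

```python
import random

class WaveFunctionFSM:
    """Sketch: the 'offer' arises when an observer depends on a definite value;
    the 'confirmation' is the next state, drawn from probability space."""
    def __init__(self, outcomes, weights):
        self.outcomes = outcomes       # possible attribute values
        self.weights = weights         # their probabilities
        self.state = None              # uncollapsed: no definite value yet

    def observe(self):
        # Offer: a conscious observer now requires a definite value.
        if self.state is None:
            # Confirmation: the system commits to one outcome.
            self.state = random.choices(self.outcomes, self.weights)[0]
        return self.state

spin = WaveFunctionFSM(["up", "down"], [0.5, 0.5])
spin.observe()                         # the "collapse" happens here, on demand
```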

Elvidge’s Postulate of Countable Interpretations of QM…

The number of interpretations of Quantum Mechanics always exceeds the number of physicists.

Let’s count the various “interpretations” of quantum mechanics:

  • Bohm (aka Causal, or Pilot-wave)
  • Copenhagen
  • Cosmological
  • Ensemble
  • Ghirardi-Rimini-Weber
  • Hidden measurements
  • Many-minds
  • Many-worlds (aka Everett)
  • Penrose
  • Possibilist Transactional (PTI)
  • Relational (RQM)
  • Stochastic
  • Transactional (TIQM)
  • Von Neumann-Wigner
  • Digital Consciousness (DCI, aka Elvidge)

Unfortunately you won’t find the last one in Wikipedia. Give it about 30 years.
