Wigner’s Friend likes Digital Consciousness

Apparently your reality may be different from mine. Wait, what???

Several recent studies have demonstrated, to an extremely high degree of certainty, that objective reality does not exist. This year, adding to the mounting pile of evidence for a consciousness-centric reality, came the results of an experiment that, for the first time, tested the highly paradoxical Wigner’s Friend thought experiment. The conclusion was that your reality and my reality can actually be different. I don’t mean different in the sense that your rods and cones have a different sensitivity, or that your brain interprets things differently, but fundamentally, intrinsically different. Ultimately, things may happen in your reality that might not happen in my reality, and vice versa.

Almost sounds like a dream, doesn’t it? Or like you and I are playing some kind of virtual reality game, and the information stream that is coming into your senses via your headset or whatever is different from the information stream coming into mine.

BINGO! That’s Digital Consciousness in a nutshell.

Eugene Paul Wigner received the Nobel Prize in Physics in 1963 for his work on quantum mechanics and the structure of the atom. More importantly, perhaps, he, along with Max Planck, Niels Bohr, John Wheeler, Kurt Gödel, Erwin Schrödinger, and many other forward-thinking scientists and mathematicians, opposed the common materialistic worldview shared by most scientists of his day (not to mention most scientists of today). As such, he was an inspiration for, and a forerunner of, consciousness-centric philosophies such as my Digital Consciousness, Donald Hoffman’s MUI theory, and Tom Campbell’s My Big TOE.

As if Schrödinger’s Cat wasn’t enough to bend people’s minds, Wigner raised the stakes of quantum weirdness in 1961 when he proposed a thought experiment, referred to as “Wigner’s Friend.” In the scenario are two people, let’s say Wigner and his friend. One of them is in an enclosed space, hidden from the other, and observes something like Schrödinger’s cat, further hidden in a box. When Wigner opens the box, the wave function collapses, establishing whether or not the cat is dead. But to Wigner’s friend, outside of the entire subsystem, the cat is still in superposition. Only when the friend opens the door to see Wigner and the result of the cat experiment does his wave function collapse. Therefore, Wigner and his friend have differing interpretations of when reality becomes realized; hence, different realities.

Fast forward to 2019, and scientists (Massimiliano Proietti, Alexander Pickston, Francesco Graffitti, Peter Barrow, Dmytro Kundys, Cyril Branciard, Martin Ringbauer, and Alessandro Fedrizzi) at Heriot-Watt University in Edinburgh were finally able to test the paradox using double slits, lasers, and polarizers. The results confirmed Wigner’s hypothesis to a certainty of 5 standard deviations, which essentially means that objective reality doesn’t exist and that your and my realities can differ – with only about a 1 in 3.5 million chance that the result is a statistical fluke!

Of course, I live for this stuff, because it simply adds one more piece of supporting evidence to my theory, Digital Consciousness. And it adds yet another nail in the coffin of that ancient scientific religion, materialism.

How does it work?

Digital Consciousness asserts that consciousness is primary; hence, all that we can truly know is what we each experience subjectively.  This experiment doesn’t necessarily prove that the fundamental construct of reality is information, but it is a lot more plausible that individual experiences based on virtual simulations are at the root of this paradox than, say, a complex violation of Hilbert space that allows parallel realities based on traditional physical fields to intermingle.  As an analogy, imagine that you are playing an MMORPG (a video game with many other simultaneous players) – it isn’t difficult to see how each individual could be having a slightly different experience, based perhaps on their skill level or some other attribute.  As information is the carrier of the experience, the information entering the consciousness of one player could easily be slightly different from the information entering the consciousness of another player. This is by far the simplest explanation and, by Occam’s Razor, the one that supports my theory.

Too bad Wigner isn’t alive to see this experiment, or to ponder Digital Consciousness theory. But I’m sure his consciousness is having a good laugh.

 

Will Evolving Minds Delay The AI Apocalypse? – Part II

The idea of an AI-driven Apocalypse is based on AI outpacing humanity in intelligence. The point at which that might happen depends on how fast AI evolves and how fast (or slow) humanity evolves.

In Part I of this article, I demonstrated how, given current trends in the advancement of Artificial Intelligence, any AI Apocalypse, Singularity, or what have you, is probably much further out than the transhumanists would have you believe.

In this part, we will examine the other half of the argument by considering the nature of the human mind and how it evolves. To do so, it is very instructive to consider the mind as a complex system, and also the systemic nature of the environments that minds and AIs engage with – and against which their general intelligence (or AGI) is therefore measured.

David Snowden has developed Cynefin, a framework for categorizing systems. The four types of systems are:

  1. Simple – e.g. a bicycle. A Simple system is a deterministic system characterized by the fact that almost anyone can make decisions and solve problems regarding it – doing so takes something called inferential intuition, which we all have. If the bicycle seat is loose, everyone knows that to fix it, you must look under the seat and find the hardware that needs tightening.
  2. Complicated – e.g. a car. Complicated systems are also deterministic systems, but unlike Simple systems, solutions to problems in this domain are not obvious and typically require analysis and/or experts to figure out what is wrong. That’s why you take your car to the mechanic and why we need software engineers to fix defects.
  3. Complex – Complex systems, while perhaps deterministic from a philosophical point of view, are not deterministic in any practical sense. No matter how much analysis you apply and no matter how experienced the expert is, they will not be able to completely analyze and solve a problem in a complex system. That is because such systems are subject to an incredibly complex set of interactions, inputs, dependencies, and feedback paths that all change continuously. So even if you could apply sufficient resources toward analyzing the entire system, by the time you got your result, your problem state would be obsolete. Examples of complex systems include ecosystems, traffic patterns, the stock market, and basically every single human interaction. Complex systems are best addressed through holistic intuition, which is something that humans possess when they are very experienced in the applicable domain. Problems in complex systems are best addressed by a method called Probe-Sense-Respond, which consists of probing (doing an experiment designed intuitively), sensing (observing the results of that experiment), and responding (acting on those results by moving the system in a positive direction).
  4. Chaotic – Chaotic systems are rarely occurring situations that are unpredictable because they are novel and therefore don’t follow any known patterns. An example would be the situation in New York City after 9/11. Responding to Chaotic systems requires yet a different method. Typically, just taking some definitive form of action may be enough to move the system from Chaotic to Complex. The choice of action is a deeply intuitive decision that may be based on an incredibly deep, rich, and nuanced set of knowledge and experiences.

Complicated systems are ideal for early AI. Problems like the ones analyzed in Stanford’s AI Index – object detection, natural language parsing, language translation, speech recognition, theorem proving, and SAT solving – are all in the Complicated domain. AI technology at the moment is focused mostly on such problems, not on things in the Complex domain, which are instead best addressed by the human brain. However, as processing speed and learning algorithms evolve, AI will start addressing issues in the Complex domain. Initially, a human mind will be needed to program or guide the AI systems toward a good probe-sense-respond model. Eventually perhaps, armed with vague instructions like “try intuitive experiments from a large set of creative ideas that may address the issue,” “figure out how to identify the metrics that indicate a positive result from the experiment,” “measure those metrics,” and “choose a course of action that furthers the positive direction of the quality of the system,” an AI may succeed at addressing problems in the Complex domain.
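To make that concrete, here is a minimal sketch of what a Probe-Sense-Respond loop might look like if handed to a machine. This is my own illustration; propose_experiment, run_experiment, and system_health are hypothetical placeholders, not any existing API:

```python
def probe_sense_respond(system, propose_experiment, run_experiment,
                        system_health, iterations=100):
    """Iteratively nudge a Complex system in a positive direction.

    propose_experiment(system) -> a candidate intervention (the "probe")
    run_experiment(system, experiment) -> resulting system state (to "sense")
    system_health(system) -> scalar metric, higher is better (to "respond" on)
    All three callables are hypothetical placeholders.
    """
    best_health = system_health(system)
    for _ in range(iterations):
        experiment = propose_experiment(system)       # probe
        trial = run_experiment(system, experiment)    # sense
        trial_health = system_health(trial)
        if trial_health > best_health:                # respond: keep what works
            system, best_health = trial, trial_health
        # otherwise discard the experiment and intuit a new one
    return system
```

The hard parts, of course, are hiding inside those placeholders – which is exactly where human holistic intuition currently lives.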

The human mind of course already has a huge head start. We are incredibly adept at seeing vague patterns, sensing the non-obvious, seeing the big picture, and drawing from collective experiences to select experiments to address complex problems.

Back to our original question: as we lead AI toward developing the skills and intuition to replicate such capabilities, are we really unable to evolve our own thinking as well?

In the materialist paradigm, the brain is the limit for an evolving mind. This is why we think AI can out-evolve us: brain capacity is fixed. However, in “Digital Consciousness” I have presented a tremendous set of evidence that this is incorrect. In actuality, consciousness, and therefore the mind, is not emergent from the brain. Instead, it exists at a deeper level of reality, as shown in the figure below.

It interacts with a separate piece of All That There Is (ATTI) that I call the Reality Learning Lab (RLL) – commonly known as “the reality we live in,” but more accurately described as our “apparent physical reality,” “apparent” because it is actually virtual.

As discussed in my blog on creating souls, ATTI has subdivided itself into components of individuated consciousness, each of which has a purpose: to evolve. How it is constructed, and how the boundaries that make it individuated are formed, is beyond our knowledge (at the moment).

So what then is our mind?

Simply put, it is organized information. As Tom Campbell eloquently expressed it, “The digital world, which subsumes the virtual physical world, consists only of organization – nothing else. Reality is organized bits.”

As such, what prevents it from evolving in the deeper reality of ATTI just as fast as we can evolve an AI here in the virtual reality of RLL?

Answer – NOTHING!

Don’t get hung up on the fixed complexity of the brain. All our brain needs to do is emulate the processing mechanism that appears to handle sensory input and mental activity. By analogy, consider playing a virtual reality game. In this game we have an avatar, and we need to interact with other players. Imagine that a key aspect of the game is the ability to throw a spear at a monster or to shoot an enemy. In our (apparent) physical reality, we would need an arm and a hand to carry out that activity. But in the game, it is technically not required. Our avatar could be arm-less, and when we have the need to throw something, we simply press a key sequence on the keyboard. A spear magically appears and gets hurled in the direction of the monster. Just as we don’t need a brain to be aware in our waking reality (because our consciousness is separate from RLL), we don’t need an arm to project a spear toward an enemy in the VR game.

On the other hand, having the arm on the avatar adds a great deal to the experience. For one thing, it adds complexity and meaning to the game. Pressing a key sequence does not have a lot of variability and it certainly doesn’t provide the player with much control. The ability to hit the target could be very precise, such as in the case where you simply point at the target and hit the key sequence. This is boring, requires little skill and ultimately provides no opportunity to develop a skill. On the other hand, the precision of your attack could be dependent on a random number generator, which adds complexity and variability to the game, but still doesn’t provide any opportunity to improve. Or, the precision of the attack could depend on some other nuance of the game, like secondary key sequences, or timing of key sequences, which, although providing the opportunity to develop a skill, have nothing to do with a consistent approach to throwing something. So, it is much better to have your avatar have an arm. In addition, this simply models the reality that you know, and people are comfortable with things that are familiar.

So it is with our brains. In our virtual world, the digital template that is our brain is incapable of doing anything in the “simulation” that it isn’t designed to do. The digital simulation that is the RLL must follow the rules of RLL physics, much the way a “physics engine” provides the rules of physics for a computer game. And these rules extend to brain function. Imagine if, in the 21st century, we had no scientific explanation for how we process sensory input or make mental decisions because there were no brains in our bodies. Would that be a “reality” that we could believe in? So, in the level of reality that we call waking reality, we need a brain.

But that brain “template” doesn’t limit our mind’s ability to evolve any more than the lack of a brain or central nervous system prevents a collection of single-celled organisms called a slime mold from actually learning.

In fact, there is some good evidence for the idea that our minds are evolving as rapidly as technology. Spiral Dynamics is a model of the evolution of values and culture that can be applied to individuals, institutions, and all of humanity. The figure below gives a very high-level overview of the stages, or memes, described by the model.

[Figure: Spiral Dynamics]

Each of these stages represents a shift in values, culture, and thinking compared to the previous one. Given that it is the human mind that drives these changes, it is fair to say that the progression models the evolution of the human mind. As can be seen from the timeframes associated with the first appearance of each stage in humanity, this is an exponential progression. In fact, it is the same kind of progression that transhumanists use to demonstrate the exponential advance of technology and AI. This exponential progression of mind would seem to defy the logic that our minds, if based on fixed neurological wiring, are incapable of exponential development.

And so, higher-level conscious thought and logic can easily evolve in the human mind in the truer reality, which may very well keep us ahead of the AI that we are creating in our little virtual reality. The trick is in letting go of our limiting assumption that it cannot be done, and developing protocols for mental evolution.

So, maybe hold off on buying those front row tickets to the Singularity.

Why the Universe Only Needs One Electron

According to renowned physicist Richard Feynman (recounted during his 1965 Nobel lecture)…

“I received a telephone call one day at the graduate college at Princeton from Professor Wheeler, in which he said, ‘Feynman, I know why all electrons have the same charge and the same mass.’ ‘Why?’ ‘Because, they are all the same electron!’”

John Wheeler’s idea was that this single electron moves through spacetime in a continuous world line, like a big knot, while our observation of many identical but separate electrons is just an illusion, because we only see a “slice” through that knot. Feynman was quick to point out a flaw in the idea: namely, that if this were the case, we should see as many positrons (electrons moving backward in time) as electrons, which we don’t.

But Wheeler, also known for now-accepted concepts like wormholes, quantum foam, and “it from bit,” may have been right on the money with this seemingly outlandish idea.

As I have amassed a tremendous set of evidence that our reality is digital and programmatic (some of which you can find here as well as many other blog posts), I will assume that to be the case and proceed from that assumption.

Next, we need to invoke the concept of a Finite State Machine (FSM), which is simply a computational system with a finite set of states, in which the next state is determined by rules that are a function of the current state and one or more input events. The FSM may also generate a number of “outputs,” which are likewise logical functions of the current state.

The following is an abstract example of a finite state machine – a minimal Python sketch of a hypothetical coin-operated turnstile (my own illustration), in which the next state is purely a function of the current state and an input event:
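```python
# A minimal finite state machine: a coin-operated turnstile (illustrative example).
# The next state is a pure function of (current state, input event).
TRANSITIONS = {
    ("locked",   "coin"): "unlocked",  # inserting a coin unlocks the arm
    ("locked",   "push"): "locked",    # pushing a locked arm does nothing
    ("unlocked", "push"): "locked",    # passing through re-locks it
    ("unlocked", "coin"): "unlocked",  # extra coins are wasted
}

def step(state, event):
    """Compute the next state from the current state and one input event."""
    return TRANSITIONS[(state, event)]

state = "locked"
for event in ["push", "coin", "push"]:
    state = step(state, event)
print(state)  # -> "locked"
```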

A computational system, like that laptop on your desk that the cat sits on, is by itself a finite state machine. Each clock cycle gives the system a chance to compute a new state, which is defined by a logical combination of the current state and all of the input changes. A video game, a flight simulator, and a trading system all work the same way. The state changes in a typical laptop about 4 billion times per second. It may actually take many of these 250-picosecond clock cycles to produce an observable difference in the output of the program, such as the movement of your avatar on the screen. Within the big, complex laptop FSM, many others are running, such as each of the dozens or hundreds of processes that you see when you click on your “activity monitor.” And within each of those FSMs are many others, such as the method (or “subprogram”) that is invoked when it is necessary to generate the appearance of a new object on the screen.

There is also a concept in computer science called an “instance.” It is similar to the idea of a template. As an analogy, consider the automobile. Every Honda that rolls off the assembly line is different, even if it is the same model with the same color and same set of options. The reason it is different from another with the exact same specifications is that there are microscopic differences in every part that goes into each car. In fact, there are differences in the way that every part is connected between two cars of equal specifications. However, imagine if every car were exactly the same, down to the molecule, atom, particle, string, or what have you. Then we could say that each car is an instance of its template.

This would also be the case in a computer-based virtual reality. Every similar car generated in the computer program is an instance of the computer model of that car, which, by the way, is a finite state machine. Each instance can be given different attributes, however, such as color, loudness, or power. In some cases, such as a virtual racing game where the idea of a car is central to the game, each car may be rather unique in the way that it behaves, or responds to the inputs from the controller, so there may be many different FSMs for these different types of cars. However, for any program, there will be FSMs that are so fundamental that there only needs to be one of that type of object; for example, a leaf.

In our programmatic reality (what I like to call the Reality Learning Lab, or RLL), there are also FSMs that are so fundamental that there only needs to be one FSM for that type of object. And every object of that type is merely an instance of that FSM. Such as an electron.

An electron is fundamental. It is a perfect example of an object that should be modeled by a finite state machine. There is no reason for any two electrons to have different rules of behavior. They may have different starting conditions and different influences throughout their lifetime, but they would react to those conditions and influences with exactly the same rules. Digital Consciousness Theory provides the perfect explanation for this. Electrons are simply instances of the electron finite state machine. There is only one FSM for the electron, just as Wheeler suspected. But there are many instances of it. Each RLL clock cycle will result in the update of the state of each electron instance in our apparent physical reality.
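As a loose sketch of that idea (my own toy code – the real state variables and update rules are, of course, unknown), one set of FSM rules can be shared by any number of instances:

```python
class ElectronFSM:
    """One electron FSM: a single set of transition rules (toy illustration).

    The state variables and the update rule below are placeholders of my own,
    not real physics.
    """
    def __init__(self, position, momentum, spin):
        # Per-instance state: different starting conditions...
        self.position, self.momentum, self.spin = position, momentum, spin

    def tick(self, field):
        # ...but every instance reacts to its influences with exactly the same rules.
        self.momentum += field       # toy update rule
        self.position += self.momentum

# Many instances, one rule set: each RLL clock cycle updates every instance.
electrons = [ElectronFSM(position=float(i), momentum=0.0, spin=0.5) for i in range(3)]
for e in electrons:
    e.tick(field=0.1)
```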

So, in a very real sense, Wheeler was right. There is no need for anything other than the single electron FSM. All of the electrons that we experience are just instances and follow exactly the same rules. Anything else would be inefficient, and ATTI is the ultimate in efficiency.

 

Dolly, Jaws, and Braces – The Latest Mandela Effect

Well, the universe is at it again, messing with our minds. Last year, I wrote a blog about the Berenstein Bears, which at that time was the most recent example of a Mandela Effect. The Mandela Effect seems to be the de facto name for the idea that something that many people remember from the past is somehow changed, or rewritten. It was named for former president of South Africa, Nelson Mandela, whom many people recall having died in a South African prison, which, history now tells us, is untrue. He died, according to all of the historical artifacts in our reality, of natural causes at the ripe old age of 95. I personally have a vague recollection of hearing some news about his demise in prison, but I can’t really place it.

That’s the thing about memories; they are completely fallible. According to research, when one remembers something, one is not remembering the original event, but rather the last time one recalled that particular memory. As such, memories are subject to the “whisper down the lane” syndrome of changing slightly with every recollection. So, my vague Mandela recollection could easily have morphed from a confluence of news reports and “Mandela Effect” claims that I have heard over the years.

However, that does not at all explain why large numbers of people would have the same memory of something entirely fallacious. Which brings me back to the latest of this genre of anomalies: Did Dolly Have Braces?

The 1979 James Bond film Moonraker featured a character named Jaws, a huge henchman with metal teeth played by the late Richard Kiel. In one scene, Jaws’ Brazilian cable car crashes and he is helped out of the rubble by Dolly, a bespectacled young blonde woman played by the French actress Blanche Ravalec. There is one of those movie moments that any Bond aficionado will recall, when Jaws first looks at Dolly and grins, baring his mouthful of metal. She looks at him and grins, showing her mouthful of metal braces, and therefore, as the music swells, they fall instantly in love and walk off hand in hand. At least that’s the way we all remember it, myself included. The only problem is that if you watch the scene today, Dolly has no braces!

[Images: Jaws; Dolly without braces]

Those 70s-era Bond movies were full of campy moments like this one. It was done to make the audience chuckle – in this case: “ahhh, despite their drastically different looks, they fall in love with each other, because of the braces connection” – and everyone laughs. That was the entire point. But now, the scene simply doesn’t make sense anymore. This is actually a key difference from the Berenstein Bears (I refuse to spell it any other way) Mandela effect. In that one, there was no real corroborating evidence that it ever was “Berenstein,” with the exception of all of our fallible memories. In contrast, the Dolly, Jaws, and braces scenario does have separate corroborating evidence that it was once as we remember it – the very point of the scene itself. In addition, I dug out a 2014 BBC obituary of Richard Kiel that references the movie, describing Dolly as “a small, pig-tailed blonde with braces.” I’m sure the BBC checks its facts fairly carefully and wouldn’t typically be subject to mass delusion. Also, someone on Reddit managed to find an image somewhere in which Dolly still appears to have braces, but you have to look closely:

[Image: Dolly with braces]

So, here, it seems, the universe (ATTI, all that there is) is really messing with us, and didn’t even bother to clean up all of the artifacts.

First, a quick comment on the word “universe” – the underlying “real” universe is what I call ATTI (all that there is), to distinguish it from the physical universe that we know and love, but which is actually virtual. This virtual world is all a subjective experience of our true consciousness, which sits somewhere as part of ATTI. Hence ATTI can modify our virtual world, as could another conscious entity within ATTI (one who perhaps has an evolved level of access). I’m not sure which of these is messing with the historical artifacts, but either is very possible. It would be analogous to being a programmer of a multi-player virtual reality fantasy game, and deciding to go back into the game and replace all of the pine trees with palm trees. The players would certainly notice, but they would think that a patch had been applied to the game for some reason and wouldn’t really give it a second thought, because they realize the game is virtual. The only reason the Mandela effect freaks us out when we discover one, like Dolly’s braces, is that we don’t realize our reality is virtual.

As I post this, it feels like I am documenting something significant.  However, I realize that tomorrow, this post may be gone.  Or perhaps the references that I listed to Dolly with braces will have disappeared, and along with them, the original sources.  And closed-minded science snobs like Bill Nye and Neil deGrasse Tyson will say it always was that way.

Note: I sometimes make a few changes to these blog posts when I realize that I can be more clear about something.  So if you notice something different the second time you read it, it probably isn’t because of the Mandela effect (but it could be 🙂 ).  Also, for those who haven’t read my original blog on this effect, I will repeat the explanation for Dolly, courtesy of digital consciousness theory:

The flaw is in the assumption that “we” are all in the same reality. “We,” as has been discussed countless times in this blog and in my book, are experiencing a purely subjective experience. It is the high degree of consensus between each of us “conscious entities” that fools us into thinking that our reality is objective and deterministic. Physics experiments have proven beyond a reasonable doubt that it is not.

So what is going on?

My own theory, Digital Consciousness (fka “Programmed Reality”), has a much better, comprehensive, and perfectly consistent explanation (note: this has the same foundation as Tom Campbell’s theory, “My Big TOE”). See the figure below.

[Figure: ATTI]

“We” are each a segment of organized information in “all that there is” (ATTI). Hence, we feel individual, but are connected to the whole. (No time to dive into how perfectly this syncs with virtually every spiritual experience throughout history, but you probably get it.) The “Reality Learning Lab” (RLL) (Campbell) is a different set of organized information within ATTI. The RLL is what we experience every day while conscious. (While meditating, or in deep sleep, we are connected elsewhere.) It is where all of the artifacts representing Jaws and Dolly exist. It is where various “simulation” timelines run. The information that represents our memories is in three places:

  1. The “brain” part of the simulation. Think of this as our cache.
  2. The temporary part of our soul’s record (or use the term “spirit”, “essence”, “consciousness”, “Being”, or whatever you prefer – words don’t matter), which we lose when we die. This is the stuff our “brain” has full access to, especially when our minds are quiet.
  3. The permanent part of our soul’s record; what we retain from life to life, what we are here to evolve and improve, what in turn contributes to the inexorable evolution of ATTI. Values and morality are here. Irrelevant details like whether or not Dolly had braces don’t belong.

For some reason, ATTI decided that it made sense to remove Dolly’s braces in all of the artifacts of our reality (DVDs, YouTube clips, etc.). But the consciousness data stores did not get rewritten when that happened, and so we still have long-term recollections of Dolly with braces.

Why? ATTI just messing with us? Random experiment? Glitch?

Maybe ATTI is giving us subtle hints that it exists, that “we” are permanent, so that we use the information to correct our path?

We can’t know. ATTI is way beyond our comprehension.

Collapsing the Objective Collapse Theory

When I was a kid, I liked to collect things – coins, baseball cards, leaves, 45s, what have you. What made the category of collectible particularly enjoyable was the size and variety of the sample space. In my adult years, I’ve learned that collections have a downside – where to put everything? – especially as I continue to downsize my living space in trade for more fun locales, greater views, and better access to beaches, mountains, and wine bars. However, I do still sometimes maintain a collection, such as my collection of other people’s theories that attempt to explain quantum mechanics anomalies without letting go of objective materialism. Yeah, I know, not the most mainstream of collections, and certainly nothing I can sell on eBay, but way more fun than stamps.

The latest in this collection is a set of theories called “objective collapse” theories. These theories try to distance themselves from the ickiness (to materialists) of conscious-observer-centric theories like the Copenhagen interpretation of quantum mechanics. They also attempt to avoid the ridiculousness of the exponentially explosive reality-creation theories in the Many Worlds Interpretation (MWI) category. Essentially, the Objective Collapsers argue that there is a wave function describing the probabilities of properties of objects but that, rather than collapsing due to a measurement or a conscious observation, it collapses on its own due to some as-yet-undetermined, yet deterministic, process, according to the probabilities of the wave function.

Huh?

Yeah, I call BS on that, and point simply to the verification of the Quantum Zeno effect. Particles don’t change state while they are under observation; they change state when you stop observing them – not at some random time prior, as the Objective Collapse theories would imply, but at the exact time that you stop observing them. In other words, the timing of the observation is correlated with wave function collapse, completely undermining the argument that collapse is probabilistic or deterministic according to some hidden variables. Other, better-physics-educated individuals than I (aka physicists) have also called BS on Objective Collapse theories due to other issues, such as conservation of energy violations. But of course there is no shortage of physicists calling BS on other physicists’ theories. That, by itself, would make an entertaining collection.

In any case, I would be remiss if I didn’t remind the readers that the Digital Consciousness Theory completely explains all of this stuff. By “stuff,” I mean not just the anomalies, like the quantum zeno effect, entanglement, macroscopic coherence, the observer effect, and quantum retrocausality, but also the debates about microscopic vs. macroscopic, and thought experiments like the time that Einstein asked Abraham Pais whether he really believed that the moon existed only when looked at, to wit:

  • All we can know for sure is what we experience, which is subjective for every individual.
  • We effectively live in a virtual reality, operating in the context of a huge and highly complex digital substrate system. The purpose of this reality is for our individual consciousnesses to learn and evolve and contribute to the greater all-encompassing consciousness.
  • The reason that it feels “physical” or solid and not virtual is due to the consensus of experience that is built into the system.
  • This virtual reality is influenced and/or created by the conscious entities that occupy it (or “live in it” or “play in it”; choose your metaphor).
  • The virtual reality may have started prior to any virtual life developing, or it may have been suddenly spawned and initiated with us avatars representing the various life forms at any point in the past.
  • Some things in the reality need to be there to start: the universe, earth, water, air, and, in the case of the more recent invocation of reality, lots of other stuff. These things may easily be represented in a macroscopic way, because that is all that is needed in the system for the experience. Therefore, there is no need for us to create them.
  • However, other things are not necessary for our high level experience. But they are necessary once we probe the nature of reality, or if we aim to influence our reality. These are the things that are subject to the observer effect. They don’t exist until needed. Subatomic particles and their properties are perfect examples. As are the deep cause and effect relationships between reality elements that are necessary to create the changes that our intent is invoked to bring about.

So there is no need for objective collapse. Things are either fixed (the moon) or potential (the radioactive decay of a particle). The latter are called into existence as needed…

…Maybe
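In software terms, “called into existence as needed” is ordinary lazy evaluation. Here is a minimal sketch under that assumption (all names here are my own, hypothetical):

```python
import random

class LazyProperty:
    """A property with no definite value until first observed (illustrative).

    Before observation it is only a probability distribution; the first read
    samples it and caches the definite value from then on.
    """
    def __init__(self, outcomes):
        self.outcomes = outcomes   # e.g. {"up": 0.5, "down": 0.5}
        self.value = None          # no definite value yet

    def observe(self):
        if self.value is None:     # first observation "collapses" the property
            values, weights = zip(*self.outcomes.items())
            self.value = random.choices(values, weights=weights)[0]
        return self.value          # subsequent observations see a fixed value

spin = LazyProperty({"up": 0.5, "down": 0.5})
print(spin.observe())  # sampled on first read
print(spin.observe())  # same value thereafter
```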


Comments on the Possibilist Transactional Interpretation of Quantum Mechanics, aka Models vs. Reality

Reality is what it is. Everything else is just a model.

From Plato to Einstein to random humans like myself, we are all trying to figure out what makes this world tick. Sometimes I think I get it pretty well, but I know that I am still a product of my times, and therefore my view of reality is seen through the lens of today’s technology and state of scientific advancement. As such, I would be a fool to think that I have it all figured out. As should everyone else.

At one point in our recent past, human scientific endeavor wasn’t so humble. Just a couple hundred years ago, we thought that atoms were the ultimate building blocks of reality and that everything could ultimately be described by the equations of mechanics. How naïve that was, as 20th-century physics made abundantly clear. But even then, the atom-centric view of physics was not reality. It was simply a model. So is every single theory and equation that we use today, regardless of whether it is called a theory or a law: relativistic motion, Schrödinger’s equation, String Theory, the 2nd Law of Thermodynamics – all models of some aspect of reality.

We seek to understand our world and derive experiments that push forward that knowledge. As a result of the experiments, we define models to best fit the data.

One of the latest comes from quantum physicist Ruth Kastner in the form of a model that better explains the anomalies of quantum mechanics. She calls the model the Possibilist Transactional Interpretation of Quantum Mechanics (PTI), an updated version of John Cramer’s Transactional Interpretation of Quantum Mechanics (TIQM, or TI for short) proposed in 1986. The transactional nature of the theory comes from the idea that the wavefunction collapse behaves like a transaction in that there is an “offer” from an “emitter” and a “confirmation” from an “absorber.” In the PTI enhancement, the offers and confirmations are considered to be outside of normal spacetime and therefore the wavefunction collapse creates spacetime rather than occurs within it. Apparently, this helps to explain some existing anomalies, like uncertainty and entanglement.

This is all cool and seems to serve to enhance our understanding of how QM works. However, it is STILL just a model, and a fairly high-level one at that. And all models are approximations – descriptions of reality that most closely match the experimental evidence.

Underneath all models exist deeper models (e.g. string theory), many as yet unsupported by real evidence. Underneath those models may exist even deeper models. Consider this layering…

[Figure: layered models of reality]

Every layer contains models that may be considered to be progressively closer to reality. Each layer can explain the layer above it. But it isn’t until you get to the bottom layer that you can say you’ve hit reality. I’ve identified that layer as “digital consciousness”, the working title for my next book. It may also turn out to be a model, but it feels like it is distinctly different from the other layers in that, by itself, it is no longer an approximation of reality, but rather a complete and comprehensive yet elegantly simple framework that can be used to describe every single aspect of reality.

For example, in Digital Consciousness, everything is information. The “offer” is then “the need to collapse the wave function based on the logic that there is now an existing conscious observer who depends on it.” The “confirmation” is the collapse – the decision made from probability space that defines positions, spins, etc. This could also be seen as the next state of the state machine that defines such behavior. The emitter and absorber are both parts of the “system”, the global consciousness that is “all that there is.” So, if experimental evidence ultimately demonstrates that PTI is a more accurate interpretation of QM, it will nonetheless still be a model and an approximation. The bottom layer is where the truth is.

Elvidge’s Postulate of Countable Interpretations of QM…

The number of interpretations of Quantum Mechanics always exceeds the number of physicists.

Let’s count the various “interpretations” of quantum mechanics:

  • Bohm (aka Causal, or Pilot-wave)
  • Copenhagen
  • Cosmological
  • Ensemble
  • Ghirardi-Rimini-Weber
  • Hidden measurements
  • Many-minds
  • Many-worlds (aka Everett)
  • Penrose
  • Possibilist Transactional (PTI)
  • Relational (RQM)
  • Stochastic
  • Transactional (TIQM)
  • Von Neumann-Wigner
  • Digital Consciousness (DCI, aka Elvidge)

Unfortunately you won’t find the last one in Wikipedia. Give it about 30 years.


Who Is God?

I’m starting this ridiculously presumptuous topic with the assumption that we live in a consciousness-driven digital reality. (For the reasons that I think this is the ONLY compelling theory of reality, please see the evidence, or my book, “The Universe – Solved!”) As such, we can draw from the possibilities proposed by various simulation theorists, such as Tom Campbell, Nick Bostrom, Andrei Linde, the Wachowskis, and others. In all cases, our apparent self, what Morpheus called “residual self image” is simply, in effect, an avatar. Our real free-will-wielding consciousness is in the mind of the “sim player”, wherever it may be.


Some possibilities…

  1. We live in a post-human simulation written by humans of the future. This is Nick Bostrom’s “Simulation Argument.” “God” is thus, effectively, a future human, maybe some sniveling teen hacker working at the 2050 equivalent of Blizzard Entertainment. We are contemporaries of the hacker.
  2. We live in a simulation created by an AI, a la “The Matrix.” God is the Architect of the Matrix; we may be slaves or we may just enjoy playing the simulation that the AI created. We may be on earth or somewhere entirely different.
  3. We live in a simulation created by an alien. God is the alien; again, we may be slaves or we may just enjoy playing the simulation that ET has created.
  4. Stanford physicist Andrei Linde, the developer of the “eternal chaotic inflation theory” of the multiverse, once said, “On the evidence, our universe was created not by a divine being, but by a physicist hacker.” That would make God a physicist – a future human one, or one from another planet.
  5. We live in a digital system, which continuously evolves to a higher level due to a fundamental law of continuous improvement. Physicist Tom Campbell has done the most to develop this theory, which holds that each of our consciousnesses is an “individuated” part of the whole system, interacting with another component of the system, the reality simulation in which we “live.” God is then a dispassionate digital information system – all that there is – the creator of our reality and of us. We are effectively a part of God.

“The kingdom of God is within you” – Jesus

“He who knows his own self, knows God” – Mohammed

“There is one Supreme Ruler, the inmost Self of all beings, who makes His one form manifold. Eternal happiness belongs to the wise, who perceive Him within themselves – not to others” – from the Vedas, original Indian holy text

“The first peace, which is most important, is that which comes within the souls of men when they realize their relationship, their oneness, with the universe and all its Powers, and when they realize that at the center of the universe dwells Wakan-Tanka, and that this center is really everywhere, it is within each of us.” – Native American

There are a couple of major challenges with possibilities 1 through 4. First is the problem of motivation. Would a significantly advanced civilization really be interested in playing out a seemingly mundane existence in a pre-post-human epoch on an ordinary planet? Would we want to live out the entire life of an Australopithecus four million years ago, given the opportunity in a simulation? Of course, this argument anthropomorphizes our true self, which may not even be of human form, like its avatar. In the System model of God, however, motivation is simple; it is part of the fundamental process of continuous improvement. We experience the simulation, or “Reality Learning Lab,” as Campbell calls it, in order to learn and evolve.

The bigger challenge is how to explain these anomalies:

  • Near Death Experiences, many of which have common themes; tunnels toward a white light, interaction with deceased (only!) relatives, life reviews, peace and quiet in an unearthly environment, a perception of a point of no return, and fundamental and lasting change in the experiencer’s attitude about life and death.
  • Past Life Experiences, as recounted by patients of hypnotherapists. Roots of reincarnation beliefs exist in every religion throughout the globe. It is fundamental in Hinduism, Jainism, Buddhism, Sikhism, and many Native American nations and African tribes, as well as some of the more esoteric (some might say “spiritually pure”) sects of Islam (Druze, Ghulat, Sufism), Judaism (Kabbalah and Hasidic), and even Christianity (Cathars, Gnostics).
  • In-between Life Experiences, as recounted by patients of hypnotherapists, as well as historical prophet figures, and modern spiritualists, such as Edgar Cayce, have common themes, such as encountering spirit guides who help design the next life.
  • Mystical experiences have been reported in many cultures throughout history, from Mohammed, Moses, Jesus, and Buddha to Protestant leader Jacob Boehme to modern day astronaut Rusty Schweickart. Common experiences include the expansion of consciousness beyond the body and ego, timelessness, the perception of being part of a unified whole, a oneness with a “cosmic consciousness”, and a deep understanding of the universe.

Only possibility 5, the “System” concept, can incorporate all of these anomalies. In that model, we are part of the whole, as experienced. We do reincarnate, as experienced. NDEs are simply the experience of our consciousness detaching from the Reality Learning Lab (RLL), and interacting with non-RLL entities.

The problem with the word “God” is the imagery and assumptions that it conjures up: an old man with a flowing beard in the clouds. With the variety of simulation models, “God” could also be an incredibly advanced piece of software, or an incredibly advanced alien (“light being”?), or a human in a quasi-futuristic grey suit. The word “System,” while probably much more accurate, is equally problematic in the assumptions that it generates. Still, I prefer that, or “All that there is” (ATTI?).

The System model clearly wins, in terms of its explanatory power. Which makes God a very different entity than most of us are used to thinking about.

But I bet the Buddha, Jesus, and Mohammed would all love this theory!

Which came first, the digital chicken, or the digital philosophy egg?

Many scientists, mathematicians, futurists, and philosophers are embracing the idea that our reality is digital these days. In fact, it would be perfectly understandable to wonder if digital philosophy itself is tainted due to the tendency of humans to view ideas through the lens of their times. We live in a digital age, surrounded by computers, the Internet, and smart phones, and so might we not be guilty of imagining that the world behaves just as a multi-player video game does? We probably wouldn’t have had such ideas 50 years ago, when, at a macroscopic level at least, everything with which we interacted appeared analog and continuous. Which came first, the digital chicken, or the digital philosophy egg?

Actually, the concepts of binary and digital are not at all new. The I Ching is an ancient Chinese text that dates to 1150 BCE. In it are 64 hexagrams – the pairwise combinations of 8 trigrams (aka the Bagua) – each trigram clearly encoding three bits of a binary code.

Many other cultures, including the Mangareva in Polynesia (1450) and India (5th to 2nd century BCE), have used binary encodings for communication for thousands of years. Over 12,000 years ago, African tribes developed a binary divination system called Odu Ifa.

German mathematician and philosopher Gottfried Leibniz is generally credited as developing the modern binary number system in 1679, based on zeros and ones. Naturally, all of these other cultures are ignored so that we can maintain the illusion that all great philosophical and mathematical thought originated in Europe. Regardless of Eurocentric biases, it is clear that binary encoding is not a new concept. But what about applying it to the fundamental construct of reality?

It turns out that while modern digital physics and digital philosophy references are replete with sources that only date to the mid-20th century, the ancient Greeks (Plato among them) believed that reality was discrete. Atoms were considered to be discrete and fundamental components of reality.

A quick clarification of the terms “discrete”, “digital”, “binary”, “analog”, and “continuous” is probably in order:

Discrete – Having distinct points of measurement in the time domain

Digital – Having properties that can be encoded into bits

Binary – Encoding that is done with only two digits, zeros and ones

Analog – Having continuously variable properties

Continuous – The time domain is continuous

So, for example, if we encode the value of some property (e.g. length or voltage) digitally using 3 values (0, 1, 2), that would be digital, but not binary (rather, ternary). If we say that between any two points in time there is an infinitely divisible time element, but for each point the value of the measurement being performed on some property is represented by bits, then we would have a continuous yet digital system. Conversely, if time can be broken into chunks such that at a fine enough temporal granularity there is no concept of time between two adjacent points in time, but at each of these time points the value of the measurement being performed is continuously variable, then we would have a discrete yet analog system.
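The two axes are independent, which is easy to see in code. In this toy sketch of my own, the time step controls discreteness and the number of representable levels controls digitalness:

```python
import math

def sample_and_quantize(f, t_step, levels, t_end=1.0):
    """Render a continuous, analog signal f(t) discrete (in time) and digital (in value).

    t_step sets the temporal granularity (discreteness); levels is the number of
    representable values (digitalness) -- levels=2 would make the encoding binary.
    """
    samples = []
    steps = int(t_end / t_step)
    for i in range(steps + 1):
        t = i * t_step                                  # discrete time points
        q = round(f(t) * (levels - 1)) / (levels - 1)   # snap value to nearest level
        samples.append((t, q))
    return samples

def signal(t):
    """A continuous, analog signal: a sine wave scaled into [0, 1]."""
    return (math.sin(2 * math.pi * t) + 1) / 2

# Discrete and digital: 10 time chunks, 8 value levels (3 bits per sample).
print(sample_and_quantize(signal, t_step=0.1, levels=8))
```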

In the realm of consciousness-driven digital philosophy, it is my contention that the evidence strongly supports reality being discrete and digital; that is, time moves on in “chunks” and at each discrete point in time, every property of everything can be perfectly represented digitally. There are no infinities.

I believe that this is a logical and fundamental conclusion, regardless of the fact that we live in a digital age. There are many reasons for this, but for the purposes of this particular blog post, I shall only concentrate on a couple. Let’s break down the possibilities of our reality, in terms of origin and behavior:

  1. Type 1 – Our reality was created by some conscious entity and has been following the original rules established by that entity. Of course, we could spend a lifetime defining “conscious” or “entity,” but let’s try to keep it simple. This scenario could include traditional religious origin theories (e.g. God created the heavens and the earth). It could also include the common simulation scenarios, a la Nick Bostrom’s “Simulation Argument.”
  2. Type 2 – Our reality was originally created by some conscious entity and has been evolving according to some sort of fundamental evolutionary law ever since.
  3. Type 3 – Our reality was not created by some conscious entity; its existence sprang out of nothing and has been following primordial rules of physics ever since. To explain the fact that our universe is incredibly finely tuned for matter and life, materialist cosmologists dreamt up the idea that we must exist in an infinite set of parallel universes, and, via the anthropic principle, the one we live in only appears finely tuned because it has to in order for us to be in it. Occam would be turning over in his grave.
  4. Type 4 – Our reality was not created by some particular conscious entity, but rather has been evolving according to some sort of fundamental evolutionary law from the very beginning.

I would argue that in the first two cases, reality would have to be digital. For, if a conscious entity is going to create a world for us to live in and experience, that conscious entity is clearly highly evolved compared to us. And, being so evolved, it would certainly make use of the most efficient means to create a reality. A continuous reality is not only inefficient, it is theoretically impossible to create because it involves infinities in the temporal domain as well as any spatial domain or property.

I would also argue that in the fourth case, reality would have to be digital for similar reasons. Even without a conscious entity as a creator, the fundamental evolutionary law would certainly favor a perfectly functional reality that doesn’t require infinite resources.

Only in the third case above, would there be any possibility of a continuous analog reality. Even then, it is not required. As MIT cosmologist and mathematician Max Tegmark succinctly put it, “We’ve never measured anything in physics to more than about sixteen significant digits, and no experiment has been carried out whose outcome depends on the hypothesis that a true continuum exists, or hinges on nature computing something uncomputable.” Hence there is no reason to assume, a priori, that the world is continuous. In fact, the evidence points to the contrary:

  • Infinite resolution would imply that matter implodes into black holes at sub-Planck scales and we don’t observe that.
  • Infinite resolution implies that relativity and quantum mechanics can’t coexist, at least with the best physics that we have today. Our favorite contenders for reconciling relativity and quantum mechanics are string theory and loop quantum gravity, and they only work with minimal-length (aka discrete) scales.
  • We actually observe discrete behavior in quantum mechanics. For example, a particle’s spin value is always quantized; there are no intermediate states. This is anomalous in continuous space-time.

For many other reasons, as are probably clear from the evidence compiled on this site, I tend to favor reality Type 4. No other type of reality structure and origin can be shown to be anywhere near as consistent with all of the evidence (philosophical, cosmological, mathematical, metaphysical, and experimental). And it has nothing to do with MMORPGs or the smart phone in my pocket.

Quantum Zeno Effect Solved

Lurking amidst the mass chaos of information that exists in our reality is a little gem of a concept called the Quantum Zeno Effect.  It is partially named after ancient Greek philosopher Zeno of Elea, who dreamed up a number of paradoxes about the fluidity of motion and change.  For example, the “Arrow Paradox” explores the idea that if you break down time into “instants” of zero duration, motion cannot be observed.  Thus, since time is composed of a set of instants, motion doesn’t truly exist.  We might consider Zeno to have been far ahead of his time as he appeared to be thinking about discrete systems and challenging the continuity of space and time a couple thousand years before Alan Turing resurrected the idea in relation to quantum mechanics: “It is easy to show using standard theory that if a system starts in an eigenstate of some observable, and measurements are made of that observable N times a second, then, even if the state is not a stationary one, the probability that the system will be in the same state after, say, one second, tends to one as N tends to infinity; that is, that continual observations will prevent motion …”.  The term “Quantum Zeno Effect” was first used by physicists George Sudarshan and Baidyanath Misra in 1977 to describe just such a system – one that does not change state because it is continuously observed.

The challenge with this theory has been in devising experiments that can verify or falsify it.  However, technology has caught up to philosophy and, over the last 25 years, a number of experiments have been performed which seem to validate the effect.  In 2001, for example, physicist Mark Raizen and a team at the University of Texas showed that the effect is indeed real and the transition of states in a system can be either slowed down or sped up simply by taking measurements of the system.

I have enjoyed making a hobby of fully explaining quantum mechanics anomalies with the programmed reality theory.   Admittedly, I don’t always fully grasp some of the deep complexities and nuances of the issues that I am tackling, due partly to the fact that I have a full time job that has naught to do with this stuff, and partly to the fact that my math skills are a bit rusty, but thus far, it doesn’t seem to make a difference.  The more I dig in to each issue, the more I find things that simply support the idea that we live in a digital (and programmed) reality.

The quantum Zeno effect might not be observed in every case.  It only works for non-memoryless processes.  Exponential decay, for instance, is an example of a memoryless system.  Frequent observation of a particle undergoing radioactive decay would not affect the result.  [As an aside, I find it very interesting that a “memoryless system” invokes the idea of a programmatic construct.  Perhaps with good reason…]

A system with memory, or “state”, however, is, in theory, subject to the quantum Zeno effect.  It will manifest itself by appearing to reset the experiment clock every time an observation is made of the state of the system.  The system under test will have a characteristic set of changes that vary over time.  In the case of the University of Texas experiment, trapped ions tended to remain in their initial state for a brief interval before beginning to change state via quantum tunneling, according to some probability function.  For the sake of developing a clear illustration, let’s imagine a process whereby a particle remains in its initial quantum state (let’s call it State A) for 2 seconds before probabilistically decaying to its final state (B) according to a linear function over the next second.  Figure A shows the probability of finding the particle in State A as a function of time.  For the first 2 seconds, of course, it has a 0% probability of changing state, and between 2 and 3 seconds it has an equal probability of moving to State B at any point in time.  A system with this behavior, left on its own and measured at any point after 3 seconds, will be in State B.

[Figure A: probability of remaining in State A vs. time]

What happens, however, when you make a measurement of that system, to check and see if it changed state, at t=1 second?  Per the quantum Zeno effect, the experiment clock will effectively be reset and now the system will stay in State A from t=1 to t=3 and then move to state B at some point between t=3 and t=4.  If you make another measurement of the system at t=1, the clock will again reset, delaying the behavior by another second.  In fact, if you continue to measure the state of the system every second, it will never change state.  Note that this has absolutely nothing to do with the physical impact of the measurement itself; a 100% non-intrusive observation will have exactly the same result.
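Here is a toy simulation of that reset behavior for the system described above (my own sketch, not a model of the actual experiment):

```python
import random

def time_of_decay(observation_times, ):
    """Simulate the toy system above: 2 s dormant in State A, then a uniform
    chance of decaying to State B over the next 1 s. Each observation that
    finds the system still in State A resets the experiment clock.
    """
    clock_start = 0.0
    decay_at = clock_start + 2.0 + random.random()      # scheduled transition
    for t_obs in sorted(observation_times):
        if t_obs >= decay_at:
            return decay_at                             # observed after decay: State B
        clock_start = t_obs                             # still State A: clock resets...
        decay_at = clock_start + 2.0 + random.random()  # ...and decay is rescheduled
    return decay_at

print(time_of_decay([]))                    # unobserved: decays between t=2 and t=3
print(time_of_decay([1.0, 2.0, 3.0, 4.0]))  # each check defers the decay further
# Observing every second forever would keep the system in State A indefinitely,
# since the 2-second dormant window always outlasts the 1-second check interval.
```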

Also note that it isn’t that the clock doesn’t reset for a memoryless system, but rather that it doesn’t matter, because you cannot observe any difference.  One may argue that if you make observations at the Planck frequency (one per jiffy), even a memoryless system might never change state.  This actually approaches the true nature of Zeno’s arguments, but that is a topic for another essay, one that is much more philosophical than falsifiable.  In fact, “Quantum Zeno Effect” is a misnomer.  The non-memoryless system described above really has little to do with the ad infinitum inspection of Zeno’s paradoxes, but we are stuck with the name.  And I digress.

So why would this happen?

It appears to be related in some way to the observer effect and to entanglement:

  • Observer Effect – Once observed, the state of a system changes.
  • Entanglement – Once observed, the states of multiple particles (or, rather, the state of a system of multiple particles) are forever connected.
  • Quantum Zeno – Once observed, the state of a system is reset.

What is common to all three of these apparent quantum anomalies is the coupling of the act of observation with the concept of a state.  For the purposes of this discussion, it will be useful to invoke the computational concept of a finite state machine, which is a system that changes state according to a set of logic rules and some input criteria.

I have explained the Observer effect and Entanglement as logical necessities of an efficient programmed reality system.  What about Quantum Zeno?  Why would it not be just as efficient to start the clock on a process and let it run, independent of observation?

A clue to the answer is that the act of observation appears to create something.

In the Observer effect, it creates the collapse of the probability wave functions and the establishment of definitive properties of certain aspects of the system under observation (e.g. position).  This is not so much a matter of efficiency as it is of necessity, because without probability, free will doesn’t exist and without free will, we can’t learn, and if the purpose of our system is to grow and evolve, then by necessity, observation must collapse probability.

In Entanglement, the act of observation may create the initiation of a state machine, which subsequently determines the behavior of the particles under test.  Those particles are just data, as I have shown, and the data elements are part of the same variable space of the state machine.  They both get updated simultaneously, regardless of the “virtual” distance between them.

So, in Quantum Zeno, the system under test is in probability space.  The act of observation “collapses” this initial probability function and kicks off the mathematical process by which future states are determined based on the programmed probability function.  But that is now a second level of probability function; call it probability function 2.  Observing the system a second time must now collapse probability function 2.  But to do so means that the system would have to calculate a modified probability function 3 going forward – one that takes into account the fact that some aspect of the state machine has already been determined (e.g. the system has or hasn’t started its decay).  For non-memoryless systems, this could be an arbitrarily complex function (3), since it may take a different shape for every time at which the observation occurs.  A third measurement complicates the function even further, because even more states are ruled out.
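For the toy system above, the recomputation is simple, but it illustrates the bookkeeping (a sketch of the arithmetic only, using my earlier hypothetical decay profile):

```python
def conditional_decay_pdf(t_obs):
    """Density of the decay time for the toy system above, conditioned on it
    still being in State A when observed at t_obs (valid for t_obs < 3).

    Unconditioned, the decay time is uniform on (2, 3) with density 1.
    This is the "recompute" alternative to simply resetting the clock.
    """
    window_start = max(2.0, t_obs)   # the decay can no longer occur before t_obs
    remaining = 3.0 - window_start   # the surviving window to renormalize over
    def pdf(t):
        return 1.0 / remaining if window_start < t < 3.0 else 0.0
    return pdf

pdf = conditional_decay_pdf(2.5)  # observed still in State A at t = 2.5
print(pdf(2.75))                  # 2.0 -- twice the unconditioned density
```

For a memoryless (exponential) process, by contrast, the conditioned function always has the same shape as the original, which is exactly why observation makes no visible difference there.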

On the other hand, given the premium that the reality system places on efficiency, it would be far simpler to just reset the probability function each time an observation is made.

The only drawback to this algorithm is the fact that smart scientists are starting to notice these little anomalies, although the assumption here is that the reality system “cares.”  It may not.  Or perhaps that is why most natural processes are exponential, or memoryless – it is a further efficiency of the system.  Man-made experiments, however, don’t follow the natural process and may be designed to be arbitrarily complex, which ironically serves to give us this tiny little glimpse into the true nature of reality.

What we are doing here is inferring deep truths about our reality that are in fundamental conflict with the standard materialist view.  This will be happening more and more as time goes forward and physicists and philosophers will soon have no choice but to consider programmed reality as their ToE.


Flexi Matter

Earlier this year, a team of scientists at the Max Planck Institute of Quantum Optics, led by Randolf Pohl, made a highly accurate measurement of the charge radius of the proton and, at 0.841 fm, it turned out to be 4% less than previously determined (0.877 fm).  Trouble is, the previous measurements were also highly accurate.  The significant difference between the two types of measurement was the choice of interaction particle: in the traditional case, electrons; in Pohl’s case, muons.

Figures have been checked and rechecked, and both types of measurements are solid.  All sorts of crazy explanations have been offered up for the discrepancy, but one thing seems certain: we don’t really understand matter.

Ancient Greeks thought that atoms were indivisible (hence, the name), at least until Rutherford showed otherwise in the early 1900s.  Ancient 20th-century scientists thought that protons were indivisible, at least until Gell-Mann showed otherwise in the 1960s.

So why would it be such a surprise that the size of a proton varies with the type of lepton cloud that surrounds and passes through it?  Maybe the proton is flexible, like a sponge, and a muon, at about 200 times the mass of an electron, exerts a much higher contractive force on it – gravity, strong nuclear, Jedi, or what have you.  Just make the measurements and modify your theory, guys.  You’ll be .000001% closer to the truth, enough to warrant an even bigger publicly funded particle accelerator.

If particle sizes and masses aren’t invariant, who is to say that they don’t change over time?  Cosmologist Christof Wetterich of the University of Heidelberg thinks this might be possible.  In fact, says Wetterich, if particles are slowly increasing in mass, the universe may not be expanding after all.  His recent paper suggests that spectral redshift – Hubble’s famous discovery at Mount Wilson that led to the most widely accepted theory of the universe, the big bang – may actually be due to changing particle masses over time.  So far, no one has been able to shoot a hole in his theory.

Oops.  “Remember what we said about the big bang being a FACT?  Never mind.”

Flexi-particles.  Now there is both evidence and major philosophical repercussions.

And still, The Universe – Solved! predicts there is no stuff.

The ultimate in flexibility is pure data.
