How to Survive an AI Apocalypse – Part 9: The Stabilization Effect

PREVIOUS: How to Survive an AI Apocalypse – Part 8: Fighting Back

Here’s where it gets fun.

Or goes off the rails, depending on your point of view.

AI meets Digital Philosophy meets Quantum Mechanics meets UFOs.

This entire blog series has been about surviving an AI-based Apocalypse, a very doomsday kind of event. For some experts, this is all but inevitable. You readers may be coming to a similar conclusion.

But haven’t we heard this before? Doomsday prophecies have been around as long as… Keith Richards. The Norse Ragnarök, the Hindu prophecy of the end of times during the current age of Kaliyuga, the Zoroastrian Renovation, and of course, the Christian Armageddon. An ancient Assyrian tablet dated 2800-2500 BCE tells of corruption and unruly teenagers and prophesies that “earth is in its final days; the world is slowly deteriorating into a corrupt society that will only end with its destruction.” Fast forward to the modern era, where the Industrial Revolution was going to lead to the world’s destruction. We have since had the energy crisis, the population crisis, and the doomsday clock ticking down to nuclear armageddon. None of it ever comes to pass.

Is the AI apocalypse more of the same, or is it frighteningly different in some way? This Part 9 of the series will examine such questions and present a startling conclusion that all may be well.

THE NUCLEAR APOCALYPSE

To get a handle on the likelihood of catastrophic end times, let’s take a deep dive into the specter of a nuclear holocaust.

It’s hard for many of us to appreciate what a frightening time the 1950s were, as people built fallout shelters and children regularly executed duck-and-cover drills in their classrooms.

Often considered to be the most dangerous point of the Cold War, the 1962 Cuban Missile Crisis was a standoff between the Soviet Union and the United States involving the deployment of Soviet missiles in Cuba. At one point the US Navy began dropping depth charges to force a nuclear-armed Soviet submarine to surface. The crew on the sub, having had no radio communication with the outside world, didn’t know if war was breaking out or not. The captain, Valentin Savitsky, wanted to launch a nuclear weapon, but a unanimous decision among the three top officers was required for launch. Vasily Arkhipov, the second in command, was the sole dissenting vote and even got into an argument with the other two officers. His courage effectively prevented the nuclear war that would likely have resulted. Thomas S. Blanton, later director of the US National Security Archive, called Arkhipov “the man who saved the world.”

But that wasn’t the only time we were a hair’s breadth away from the nuclear apocalypse.

On May 23, 1967, US military commanders went on high alert due to what appeared to be jammed missile detection radars in Alaska, Greenland, and the UK. Because radar jamming was considered an act of war, they authorized preparations for war, including the deployment of aircraft armed with nuclear weapons. Fortunately, a NORAD solar forecaster identified the reason for the jammed radar – a massive solar storm.

Then, on the other side of the Iron Curtain, on 26 September 1983, with international tensions still high after the recent Soviet shoot-down of Korean Air Lines Flight 007, a nuclear early-warning system in Moscow reported that 5 ICBMs (intercontinental ballistic missiles) had been launched from the US. Lieutenant Colonel Stanislav Petrov was the duty officer at the command center and suspected a false alarm, so he awaited confirmation before reporting, thereby disobeying Soviet protocol. He later said that had he not been on the shift at that time, his colleagues would have reported the missile launch, likely triggering a nuclear war.

In fact, over the years there have been at least 21 nuclear war close calls, any of which could easily have led to a nuclear conflagration and the destruction of humanity. The following timeline, courtesy of the Future of Life Institute, shows how many occurred in just the 30-year period from 1958 to 1988.

It kind of makes you wonder what else could go wrong…

END OF SOCIETY PREDICTED

Another modern-age apocalyptic fear was driven by the recognition that exponential growth and limited resources are ultimately incompatible. At the time, the world population was growing exponentially and important resources like oil and arable land were being depleted. The Rockefeller Foundation partnered with the OECD (Organization for Economic Cooperation and Development) to form The Club of Rome, a group of current and former heads of state, scientists, economists, and business leaders, to discuss the problem and potential solutions. In 1972, with the support of computational modeling from MIT, they issued their first report, The Limits to Growth, which painted a bleak picture of the world’s future. Some of the predictions (and their ultimate outcomes) follow:

Another source for this scare was the book The Population Bomb by Stanford biologist Paul Ehrlich. He and people like Harvard biologist George Wald also made some dire predictions…

There is actually no end to failed environmental apocalyptic predictions – too many to list. But a brief smattering includes:

  • “Unless we are extremely lucky, everyone will disappear in a cloud of blue steam in 20 years.” (New York Times, 1969)
  • “UN official says rising seas to ‘obliterate nations’ by 2000.” (Associated Press, 1989)
  • “Britain will Be Siberian in less than 20 years” (The Guardian, 2004)
  • “Scientist Predicts a New Ice Age by 21st Century” (Boston Globe, 1970)
  • “NASA scientist says we’re toast. In 5-10 years, the arctic will be ice free.” (Associated Press, 2008)

Y2K

And who could forget this apocalyptic gem…

My intent is not to cherry-pick the poor predictions and make fun of them. It is simply that when we are swimming in a sea of impending doom, it is really hard to see the way out. And yet, there does always seem to be a way out.

Sometimes it is mathematical. For example, there was a mathematical determination of when we would run out of oil based on known supply and rate of usage, perhaps factoring in the trend of increasing usage. But what were not factored into the equation were the counter-effects of new reserves being discovered and improvements in engine efficiencies. One could argue that in the latter case, the scare achieved its purpose, just as the fear of global warming has resulted in a number of new environmental policies and laws, such as California’s upcoming ban on the sale of new gasoline-powered vehicles starting in 2035. However, that isn’t always the case. Many natural resources, for instance, seem to actually be increasing in supply. I am not necessarily arguing for something like the abiotic oil theory.

However, at the macro level, doesn’t it sometimes feel like a game of Civilization, where we are given a set of resources, cause-and-effect interrelationships, and the ability to acquire certain skills? In the video game, when we fail on an apocalyptic level, we simply hit the reset button and start over. But in real life we can’t do that. Yet, doesn’t it seem like the “game makers” always hand us a way out, such as unheard-of new technologies that seem to suddenly become available? And it isn’t always human ingenuity that saves us. Sometimes, the right person is on duty at the perfect time against all odds. Sometimes, oil fields magically replenish on their own. Sometimes asteroids strike the most remote place on the planet.
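To make the depletion math concrete, here is a toy sketch of that kind of calculation. The numbers are invented purely for illustration – they are not actual reserve or consumption figures – but they show how a forecast changes depending on which effects the model happens to include.

```python
# A toy illustration of static depletion math. All numbers are invented
# for illustration -- they are not actual reserve or consumption figures.

def years_until_depletion(reserves, usage, usage_growth=0.0, discovery=0.0, max_years=500):
    """Count the years until reserves hit zero, given annual usage (optionally
    growing each year) and optional annual additions from new discoveries."""
    years = 0
    while reserves > 0 and years < max_years:
        reserves += discovery - usage
        usage *= 1 + usage_growth
        years += 1
    return years if reserves <= 0 else None   # None means "not depleted within the horizon"

# Simple model: known supply, growing usage, nothing else
print(years_until_depletion(reserves=1000, usage=20, usage_growth=0.03))
# Same model, but with a term for ongoing discoveries of new reserves
print(years_until_depletion(reserves=1000, usage=20, usage_growth=0.03, discovery=15))
```

The point isn’t the specific numbers – it’s that the forecast is only as good as the counter-effects the model bothers to include.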

THE STABILIZATION EFFECT

In fact, it seems statistically significant that apocalypses, while seemingly imminent, NEVER really occur. So much so that I decided to model it with a spreadsheet using random number generation (also demonstrating how weak my programming skills have gotten). The intent of the model is to encapsulate the state of humanity on a simple timeline using a parameter called “Mood” for lack of a better term. We start at a point in society that is neither euphoric (the Roaring Twenties) nor disastrous (the Great Depression). As time progresses, events occur that push the Mood in one direction or the other, with a 50/50 chance of either occurring. The assumption in this model is that no matter what the Mood is, it can still get better or worse with equal probability. Each of the following graphs depicts a randomly generated timeline.
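For anyone who would rather read code than a spreadsheet, here is a minimal Python sketch of the same idea – a reconstruction of the model described above, not the original spreadsheet:

```python
import random

def simulate_mood(steps=500, step_size=1.0, seed=None):
    """Random-walk 'Mood' timeline: each event is equally likely to push
    society's mood up or down (the 50/50 assumption described above)."""
    rng = random.Random(seed)
    mood, timeline = 0.0, []
    for _ in range(steps):
        mood += step_size if rng.random() < 0.5 else -step_size
        timeline.append(mood)
    return timeline

POSITIVE_LIMIT, NEGATIVE_LIMIT = 20, -20   # arbitrary "euphoria" and "apocalypse" thresholds

runs, apocalypses = 1000, 0
for i in range(runs):
    if min(simulate_mood(seed=i)) <= NEGATIVE_LIMIT:
        apocalypses += 1
print(f"{apocalypses} of {runs} unconstrained timelines crossed the apocalypse threshold")
```

Run unconstrained like this, a sizable fraction of the random timelines wander across the “apocalypse” threshold, which is exactly the behavior shown in the figures below.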

On the graph are two thresholds – one of a positive nature, where things seemingly can’t get much better, and one of a negative nature, where all it should take is a nudge to send us down the path to disaster. In any of the situations we’ve discussed in this part of the series, when we are on the brink of apocalypse, the statistical likelihood that the situation would improve at that point should be no better than 50/50. If true, running a few simulations shows that an apocalypse is actually fairly likely. Figures 1 and 3 pop over the positive limit and then turn back toward neutral. Figure 2 seems to take off in the positive direction even after passing the limit. Figure 4 hits and goes through the negative limit several times, implying that if our reality really worked this way, apocalyptic situations would be likely.

However, what always seems to happen is that when things get that bad, there is a stabilizing force of some sort. I made an adjustment to my reality model by inserting some negative feedback to model this stabilizing effect. For those unfamiliar with the term, complex systems can have positive or negative feedback loops; often both. Negative feedback tends to bring a system back to a stable state. Examples in the body include the maintenance of body temperature and blood sugar levels. If blood sugar gets too high, the pancreas secretes insulin which chemically reduces the level. When it gets too low, the pancreas secretes glucagon which increases the level. In nature, when the temperature gets high, cloud level increases, which provides the negative feedback needed to reduce the temperature. Positive feedback loops also exist in nature. The runaway greenhouse effect is a classic example.
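In code form, the adjustment is essentially a one-line change (again, a sketch of the idea rather than the actual spreadsheet): the further Mood drifts from neutral, the more the odds tilt back toward center.

```python
import random

def simulate_mood_with_feedback(steps=500, step_size=1.0, feedback=0.02, seed=None):
    """Same random walk, but with negative feedback: the probability of moving
    back toward neutral grows in proportion to how far Mood has already drifted."""
    rng = random.Random(seed)
    mood, timeline = 0.0, []
    for _ in range(steps):
        p_up = 0.5 - feedback * mood            # above neutral pulls down, below pulls up
        p_up = max(0.05, min(0.95, p_up))       # keep the probability sensible
        mood += step_size if rng.random() < p_up else -step_size
        timeline.append(mood)
    return timeline
```

With even a small feedback coefficient, the simulated timelines hug the neutral line and rarely approach either limit – the behavior shown in the graphs below.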

When I applied the negative feedback to the reality model, all curves tended to stay within the positive and negative limits, as shown below.

Doesn’t it feel like this is how our reality works at the most fundamental level? But how likely would it be that every aspect of our reality is subject to negative feedback? And where does that negative feedback come from?

REALITY IS ADAPTIVE

This is how I believe that reality works at its most fundamental level…

Why would that be? Two obvious ideas come to mind.

  1. Natural causes – this would be the viewpoint of reductionist materialist scientists. A heat increase causes ice sheets to melt, which creates more water vapor, generating more clouds and reducing the heating effect of the sun. But this does not at all explain why the human condition, and the civilization trends that we’ve discussed in this article, always tend toward neutral.
  2. God – this would be the viewpoint of people whose beliefs are firmly grounded in their religion. God is always intervening to prevent catastrophes. But apparently God doesn’t mind minor catastrophes and plenty of pain and suffering in general. More importantly though, this does not explain dynamic reality generation.

DYNAMIC REALITY GENERATION

Enter Quantum Mechanics.

The double-slit experiment was first performed by Thomas Young back in 1801 as an attempt to determine whether light was composed of particles or waves. A beam of light was projected at a screen with two vertical slits. If light was composed of particles, only two bands of light should appear on the phosphorescent screen behind the one with the slits. If wave-based, an interference pattern should result. The wave theory was initially confirmed experimentally, but that was later called into question by Einstein and others.

The experiment was later done with particles, like electrons, and it was naturally assumed that these would be shown to be hard, fixed particles, generating the expected pattern shown on the right.

However, what resulted was an interference pattern, implying that the electrons were actually waves. Thinking that perhaps the electrons were interfering with each other, the experimenters modified the experiment to shoot one electron at a time. And still the interference pattern slowly built up on the back screen.

To make sense of the interference pattern, experimenters wondered if they could determine which slit each electron went through, so they put a detector before the double slit. Et voilà, the interference pattern disappeared! It was as if the conscious act of observation converted the electrons from waves to particles. The common interpretation was that the electrons exist only as a probability function and the observation actually snaps them into existence.
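A quick numerical sketch of the difference (idealized plane waves, nothing about the real apparatus modeled): when no which-way information exists, the two amplitudes add before squaring and fringes appear; when which-way information is recorded, the probabilities add instead and the fringes vanish.

```python
import numpy as np

wavelength = 1.0           # arbitrary units
slit_separation = 10.0
screen_distance = 1000.0
x = np.linspace(-200, 200, 9)   # sample positions on the back screen

# Path lengths from each slit to a point x on the screen
d1 = np.hypot(screen_distance, x - slit_separation / 2)
d2 = np.hypot(screen_distance, x + slit_separation / 2)
phase = 2 * np.pi * (d2 - d1) / wavelength

# No which-way info: add amplitudes, then square -> interference term appears
interference = np.abs(1 + np.exp(1j * phase)) ** 2
# Which-way info recorded: add probabilities -> flat sum of two single-slit patterns
no_interference = np.abs(1) ** 2 + np.abs(np.exp(1j * phase)) ** 2

print("with fringes:   ", np.round(interference, 2))
print("without fringes:", np.round(no_interference, 2))
```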

It is very much like the old adage that a tree falling in the woods makes no sound unless someone is there to hear it. Of course, this idea of putting consciousness as a parameter in the equations of physics generated no end of consternation for the deterministic materialists. They have spent the last twenty years designing experiments to disprove this “Observer Effect,” to no avail. Even when the “which way” detector is placed after the double slit, the interference pattern disappears. The only tenable conclusion is that reality does not exist in an objective manner and its instantiation depends on something. But what?

The diagram below helps us visualize the possibilities. When does reality come into existence?

Clearly it is not at points 1, 2, or 3, because it isn’t until the “which way” detector is installed that we see the shift in reality. So is it due to the detector itself, or to the conscious observer reading the results of the detector? One could imagine experiments where the results of the “which way” detector are hidden from the conscious observer for an arbitrary period of time – printed out and sealed in an envelope without being looked at, say, where it sits on a shelf for a day while the interference pattern persists, until someone opens the envelope and suddenly the interference pattern disappears. I have always suspected that the answer will be that reality comes into existence at point 4. I believe that it is just logical that a reality-generating universe be efficient. Recent experiments bear this out.

I believe this says something incredibly fundamental about the nature of our reality. But what would efficiency have to do with the nature of reality? Let’s explore a little further – what kinds of efficiencies would this lead to?

POP QUIZ! – is reality analog or digital? There is actually no consensus on this question, and many papers have been written in support of each point of view. But if our reality is created on some sort of underlying construct, there is only one answer – it has to be digital. Here’s why…

How much information would it take to fully describe the cup of coffee on the right?

In an analog reality, it would take an infinite amount of information.

In a digital reality, fully modeled at the Planck resolution (what some people think is the deepest possible digital resolution), it would require 4×10^71 bits/second, give or take. It’s a huge number for sure, but infinitely less than the analog case.

But wait a minute. Why would we need that level of information to describe a simple cup of coffee? So let’s ask a different question… How much information is needed for a subjective human experience of that cup of coffee – the smell, the taste, the visual experience? You don’t really need to know the position and momentum vector of each subatomic particle in each molecule of coffee in that cup. All you need to know is what it takes to experience it. The answer is roughly 1×10^9 bits/second. In other words, there could be as much as a 4×10^62 factor of compression involved in generating a subjective experience. We don’t really need to know where each electron is in the coffee, just as you don’t need to know which slit each electron goes through in the double slit experiment. That is, UNTIL YOU MEASURE IT!
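The compression factor quoted above is simply the ratio of those two estimates:

```python
planck_level_bits = 4e71   # full Planck-resolution description of the cup (estimate from above)
experience_bits = 1e9      # data rate of the subjective experience (estimate from above)
print(f"compression factor = {planck_level_bits / experience_bits:.0e}")   # prints 4e+62
```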

So, the baffling results of the double slit experiments actually make complete sense if reality is:

  • Digital
  • Compressed
  • Dynamically generated to meet the needs of the inhabitants of that reality

Sounds computational, doesn’t it? In fact, if reality were a computational system, it would make sense for it to need efficiencies at this level.

There are such systems – one well-known example is the video game No Man’s Sky, which dynamically generates its universe as the user plays the game. Art inadvertently imitating life?
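The programming pattern at work is usually called lazy, or procedural, generation: nothing is computed until something asks for it, and a deterministic seed keeps repeat observations consistent. Here is a toy sketch of the general idea (not a claim about how No Man’s Sky is actually implemented):

```python
import hashlib

class LazyUniverse:
    """Generates a region's contents only when it is first observed.
    A deterministic hash of (seed, coordinates) keeps repeat visits consistent,
    so nothing needs to be stored for regions no one has ever looked at."""
    def __init__(self, seed: str):
        self.seed = seed
        self.observed = {}          # cache of regions that have already been rendered

    def observe(self, coords: tuple) -> str:
        if coords not in self.observed:
            digest = hashlib.sha256(f"{self.seed}:{coords}".encode()).hexdigest()
            self.observed[coords] = f"region {coords}: terrain code {digest[:8]}"
        return self.observed[coords]

universe = LazyUniverse(seed="all-that-there-is")
print(universe.observe((42, 7)))    # generated on first observation
print(universe.observe((42, 7)))    # identical on a repeat visit
```

The design win is that unvisited regions cost nothing – exactly the kind of efficiency argued for above.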

Earlier in this article I suggested that the concept of God could explain the stabilization effect of our reality. If we redefine “God” to mean “All That There Is” (of which, our apparent physical reality is only a part), reality becomes a “learning lab” that needs to be stable for our consciousnesses to interact virtually.

I wrote about this and proposed this model back in 2007 in my first book, “The Universe-Solved!” In 2021, an impressive set of physicists and technologists came up with the same theory, which they called “The Autodidactic Universe.” They collaborated to explore methods, structures, and topologies by which the universe might be learning and modifying its laws according to what is needed. Such ideas included neural nets and Restricted Boltzmann Machines. This provides an entirely different way of looking at any potential apocalypse. And it makes you wonder…

UFO INTERVENTION

In 2021, over one hundred military personnel, including Retired Air Force Captain Robert Salas, Retired First Lieutenant Robert Jacobs, and Retired Captain David Schindele met at the National Press Club in Washington, DC to present historical case evidence that UFOs have been involved with disarming nuclear missiles. A few examples…

  • Malmstrom Air Force Base, Montana, 1967 – “a large glowing, pulsating red oval-shaped object hovering over the front gate,” as alarms went off showing that nearly all 10 missiles displayed in the control room had been disabled.
  • Minot Air Force Base, North Dakota, 1966 – Eight airmen said that 10 missiles at silos in the vicinity all went down with guidance and control malfunctions when an 80- to 100-foot-wide flying object with bright flashing lights hovered over the site.
  • Vandenberg Air Force Base, California, 1964 – “It went around the top of the warhead, fired a beam of light down on the top of the warhead.” After circling, it “then flew out the frame the same way it had come in.”
  • Ukraine, 1982 – launch countdowns were activated for 15 seconds while a disc-shaped UFO hovered above the base, according to declassified KGB documents

As the History Channel reported, areas of high UFO activity are correlated with nuclear and military facilities worldwide.

Perhaps UFOs are an artifact of our physical reality learning lab, under the control of some conscious entity or possibly even an autonomous (AI) bot in the system – part of the “autodidactic” programming mechanisms that maintain stability in our programmed reality. Other mechanisms could involve things like adjusting the availability of certain resources or even nudging consciousnesses toward solutions to problems. If this model of reality is accurate, we may find that we have little to worry about regarding an AI apocalypse. Instead, it will just be another force that contributes toward our evolution.

To that end, there is also a sector of thinkers who recommend a different approach. Rather than fight the AI progression, or simply let the chips fall, we should welcome our AI overlords and merge with them. That scenario will be explored in Part 10 of this series.

NEXT: How to Survive an AI Apocalypse – Part 10: If You Can’t Beat ’em, Join ’em

Wigner’s Friend likes Digital Consciousness

Apparently your reality may be different than mine. Wait, what???

Several recent studies have demonstrated to an extremely high degree of certainty that objective reality does not exist. This year, adding to the mounting pile of evidence for a consciousness-centric reality, came the results of an experiment that, for the first time, tested the highly paradoxical Wigner’s Friend thought experiment. The conclusion was that your reality and my reality can actually be different. I don’t mean different in the sense that your rods and cones have a different sensitivity, or that your brain interprets things differently, but fundamentally, intrinsically different. Ultimately, things may happen in your reality that might not happen in my reality, and vice versa.

Almost sounds like a dream, doesn’t it? Or like you and I are playing some kind of virtual reality game and the information stream that is coming into your senses via your headset or whatever is different from the information stream coming into mine.

BINGO! That’s Digital Consciousness in a nutshell.

Eugene Paul Wigner received the Nobel Prize in Physics in 1963 for his work on quantum mechanics and the structure of the atom. More importantly perhaps, he, along with Max Planck, Niels Bohr, John Wheeler, Kurt Gödel, Erwin Schrödinger, and many other forward-thinking scientists and mathematicians, opposed the common materialistic worldview shared by most scientists of his day (not to mention, most scientists of today). As such, he was an inspiration for, and a forerunner of, consciousness-centric philosophies, such as my Digital Consciousness, Donald Hoffman’s MUI theory, and Tom Campbell’s My Big TOE.

As if Schrödinger’s Cat wasn’t enough to bend people’s minds, Wigner raised the stakes of quantum weirdness in 1961 when he proposed a thought experiment, referred to as “Wigner’s Friend.” The scenario involves two people, let’s say Wigner and his friend. One of them is in an enclosed space, hidden from the other, and observes something like Schrödinger’s cat, further hidden in a box. At the time Wigner opens the box, the wave function collapses, establishing whether or not the cat is dead. But the cat is still in superposition for Wigner’s friend, outside of the entire subsystem. Only when he opens the door and sees Wigner and the result of the cat experiment does his wave function collapse. Therefore, Wigner and his friend have differing interpretations of when reality becomes realized; hence, different realities.

Fast forward to 2019, when scientists (Massimiliano Proietti, Alexander Pickston, Francesco Graffitti, Peter Barrow, Dmytro Kundys, Cyril Branciard, Martin Ringbauer, and Alessandro Fedrizzi) at Heriot-Watt University in Edinburgh were finally able to test the paradox using double slits, lasers, and polarizers. The results confirmed Wigner’s hypothesis to a certainty of 5 standard deviations, which essentially means that objective reality doesn’t exist, and your and my realities can differ – to a certainty of 1 in 3.5 million!

Of course, I live for this stuff, because it simply adds one more piece of supporting evidence to my theory, Digital Consciousness. And it adds yet another nail in the coffin of that ancient scientific religion, materialism.

How does it work?

Digital Consciousness asserts that consciousness is primary; hence, all that we can truly know is what we each experience subjectively.  This experiment doesn’t necessarily prove that the fundamental construct of reality is information, but it is a lot more plausible that individual experiences based on virtual simulations are at the root of this paradox rather than, say, a complex violation of Hilbert space, allowing parallel realities based on traditional physical fields to intermingle.  As an analogy, imagine that you are playing an MMORPG (video game with many other simultaneous players) – it isn’t difficult to see how each individual could be having a slightly different experience, based perhaps on their skill level or something.  As information is the carrier of the experience, the information entering the consciousness of one player could easily be slightly different than the information entering the consciousness of another player. This is by far the simplest explanation, and by Occam’s Razor, supports my theory.

Too bad Wigner isn’t alive to see this experiment, or to ponder Digital Consciousness theory. But I’m sure his consciousness is having a good laugh.

 

Quantum Retrocausality Explained

A recent quantum mechanics experiment, conducted at the University of Queensland in Australia, seems to defy causal order, baffling scientists. In this post, however, I’ll explain why this isn’t anomalous at all; at least, not if you come to accept the Digital Consciousness Theory (DCT) of reality. It boils down to virtually the same explanation that I gave seven years ago for Daryl Bem’s seemingly anomalous precognition studies.

DCT says that subatomic particles are controlled by finite state machines (FSMs), which are tiny components of our Reality Learning Lab (RLL, aka “reality”). These finite state machines that control the behavior of the atoms or photons in the experiment don’t really come into existence until the measurement is made, which effectively means that the atom or photon doesn’t really exist until it needs to. In RLL, the portion of the system that needs to describe the operation of the laser, the prisms, and the mirrors, at least from the perspective of the observer, is defined and running, but only at a macroscopic level. It only needs to show the observer the things that are consistent with the expected performance of those components and the RLL laws of physics. So, for example, we can see the laser beam. But only when we need to determine something at a deeper level, like the path of a particular photon, is a finite state machine for that photon instantiated. And in these retrocausality experiments, like the delayed choice quantum eraser experiments, and this one done in Queensland, the FSMs only start when the observation is made, which is after the photon has gone through the apparatus; hence, it never really had a path. It didn’t need to. The path can be inferred later by measurement, but it is incorrect to think that that inference reflects objective reality. There was no path, and so there was no real deterministic order of operation.

There are only the attributes of the photon determined at measurement time, when its finite state machine comes into existence. Again, the photon is just data, described by the attributes of the finite state machine, so this makes complete sense. Programmatically, the FSM did not exist before the individuated consciousness required a measurement because it didn’t need to. Therefore, the inference of “which operation came first” is only that – an inference, not a true history.
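As a software analogy – purely illustrative, not a claim about how the System is implemented – lazy instantiation looks something like this: the object describing the photon’s attributes simply does not exist until a measurement asks for it.

```python
import random

class PhotonFSM:
    """Stands in for the finite state machine that (in this model) defines
    a photon's measurable attributes. It is only created at measurement time."""
    def __init__(self):
        self.path = random.choice(["slit A", "slit B"])
        self.polarization = random.choice(["H", "V"])

class Photon:
    def __init__(self):
        self._fsm = None                    # no path, no attributes -- just a placeholder

    def measure(self) -> PhotonFSM:
        if self._fsm is None:               # instantiated only when an observation demands it
            self._fsm = PhotonFSM()
        return self._fsm

p = Photon()
# The photon "travels" through the apparatus here, but no path data exists yet...
result = p.measure()                        # ...until this line, when the FSM is created
print(result.path, result.polarization)
```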

So what is really going on?  There are at least three options:

1. Evidence is rewritten after the fact. In other words, after the photons pass through the experimental apparatus, the System goes back and rewrites all records of the results, so as to create the non-causal anomaly. Those records consist of the experimenters’ memories, as well as any written or recorded artifacts. Since the System is in control of all of these items, the complete record of the past can be changed, and no one would ever know.

2. The System selects the operations to match the results, so as to generate the non-causal anomaly.

3. We live in an Observer-created reality and the entire sequence of events is either planned out or influenced by intent, and then just played out by the experimenter and students.

The point is that it requires a computational system to generate such anomalies – not the deterministic, materialistic, continuous system that mainstream science has taught us we live in.

Mystery solved, Digital Consciousness style.

New Hints to How our Reality is Created

There is something fascinating going on in the world, hidden deep beneath the noise of Trump, soccer matches, and Game of Thrones. It is an exploration into the nature of reality – what is making the world tick?

To cut to the chase, it appears that our reality is being dynamically generated based on an ultra-sophisticated algorithm that takes into account not just the usual cause/effect context (as materialists believe), and conscious observation and intent (as idealists believe), but also a complex array of reality configuration probabilities so as to be optimally efficient.

Wait, what?

This philosophical journey has its origins in the well-known double slit experiment, originally done by Thomas Young in 1801 to determine whether light had wavelike properties. In 1961, the experiment was performed with electrons, which also showed wavelike properties. The experimental setup involved shooting electrons through a screen containing two thin vertical slits. The wave nature of the particles was manifested in the form of an interference pattern on a screen that was placed on the other side of the double slit screen. It was a curious result but confirmed quantum theory. In 1974, the experiment was performed one electron at a time, with the same resulting interference pattern, which showed that it was not the electrons interfering with each other, but rather that the pattern on the screen followed a probabilistic spatial distribution function. Quantum theory predicted that if a detector was placed at each of the slits so as to determine which slit each electron went through, the interference pattern would disappear and just leave two vertical lines, due to the quantum complementarity principle. This was difficult to create in the lab, but experiments in the 1980s confirmed expectations – that the “which way did the particle go” measurement killed the interference pattern. The mystery was that the mere act of observation seemed to change the results of the experiment.

So, at this point, people who were interested in how the universe works effectively split into two camps, representing two fundamental philosophies that set the foundation for thinking, analysis, hypothesis, and theorizing:

  1. Objective Materialism
  2. Subjective Idealism

A zillion web pages can be found for each category.

The problem is that most scientists, and probably at least 99% of all outspoken science trolls, believe in Materialism. And “believe” is the operative word, because there is ZERO proof that Materialism is correct. Nor is there proof that Idealism is correct. So, “believe” is all that can be done. Although, as the massive amount of evidence leans in favor of Idealism, it is fair to say that those believers at least have the scientific method behind them, whereas materialists just have “well gosh, it sure seems like we live in a deterministic world.” What is interesting is that Materialism can be falsified, but I’m not sure that Idealism can be. The Materialist camp had plenty of theories to explain the paradox of the double slit experiments – alternative interpretations of quantum mechanics, local hidden variables, non-local hidden variables, a variety of loopholes, or simply the notion that the detector took energy from the particles and impacted the results of the experiment (as has been said, when you put a thermometer in a glass of water, you aren’t measuring the temperature of the water, you are measuring the temperature of the water with a thermometer in it).

Over the years, the double-slit experiment has been progressively refined to the point where most of the materialistic arguments have been eliminated. For example, there is now the delayed choice quantum eraser experiment, which puts the “which way” detectors after the interference screen, making it impossible for the detector to physically interfere with the outcome of the experiment. And, one by one, all of the hidden variable possibilities and loopholes have been disproven. In 2015, several experiments were performed independently that closed all loopholes simultaneously with both photons and electrons. Since all of these various experimental tests over the years have shown that objective realism is false and that nature is non-local given the experimenters’ choices, the only other explanation could be what John Bell called super-determinism, a universe completely devoid of free will, running like clockwork, playing out a fully predetermined script of events. If true, this would bring about the extremely odd result that the universe is set up to ensure that the outcomes of these experiments imply the opposite of how the universe really works. But I digress…

The net result is that Materialism-based theories on reality are being chipped away experiment by experiment. Those that believe in Materialist dogma are finding themselves being painted into an ever-shrinking philosophical corner. Idealism-based theories, on the other hand, are rich with possibilities, very few of which have been falsified experimentally.

Physicist and fellow digital philosopher Tom Campbell has boldly suggested a number of double slit experiments that could probe the nature of reality a little deeper. Tom, like me, believes that consciousness plays a key role in the nature and creation of our reality. So much so that he believes that the outcome of the double slit experiments is due strictly to the conscious observation of the which-way detector data. In other words, if no human (or “sufficiently conscious” entity) observes the data, the interference pattern should remain. Theoretically, one could save the data to a file, store the file on a disk, hide the disk in a box, and the interference pattern would remain on the screen. Open the box a day later and the interference pattern should automatically disappear, effectively rewriting history with the knowledge of the paths of the particles. His ideas have incurred the wrath of the physics trolls, who are quick to point out that regardless of whether humans ever read the data, the interference pattern is gone if the detectors record the data. The data can be destroyed, or not even written to a permanent medium, and the interference pattern would still be gone. If these claims are true, it does not prove Materialism at all. But it does imply something very interesting.

From this and many, many other categories of evidence, it seems highly likely that our reality is being dynamically generated. Quantum entanglement, the quantum Zeno effect, and the observer effect all look very much like artifacts of an efficient system that dynamically creates reality as needed. It is the “as needed” part of this assertion that is most interesting. I shall refer to that which creates reality as “the system.”

Entanglement happens because, when a two-particle-generating event occurs, it is efficient to create the two particles using the same instance of a finite state machine; therefore, when the properties of one need to be determined, the properties of the other are automatically known, as detailed in my blog post on entanglement. The quantum Zeno effect happens because it is more efficient to reset the probability function each time an observation is made, as detailed in my blog post on quantum Zeno. And so what about the double slit mystery? To illuminate, see the diagram below.

If the physicists are right, reality comes into existence at point 4 in the diagram. Why would that be? The paths of the particles are apparently not needed for the experience of the conscious observer, but rather to satisfy the consistency of the experiment. The fact that the detector registers the data is enough to create the reality. Perhaps the system “realizes” that it is less efficient to leave hanging experiments all over the place until a human “opens the envelope” than it is to instantiate real electron paths despite the unlikely possibility of data deletion. Makes logical sense to me. But it also indicates a sophisticated awareness of all of the probabilities of how the reality can play out vis-à-vis potential human interactions.

The system is really smart.
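To illustrate the entanglement point made above in the same spirit – a software analogy, not a physics simulation – two “particles” can share a single finite state machine instance, so determining a property of one automatically fixes the corresponding property of the other, with no signal passing between them:

```python
import random

class SharedSpinFSM:
    """One state machine instance backs both particles of an entangled pair.
    The spin outcome is decided only at the first measurement, then reused."""
    def __init__(self):
        self._spin_a = None

    def measure(self, which: str) -> str:
        if self._spin_a is None:                          # decided lazily, at first measurement
            self._spin_a = random.choice(["up", "down"])
        if which == "A":
            return self._spin_a
        return "down" if self._spin_a == "up" else "up"   # anti-correlated partner

pair = SharedSpinFSM()
print("Particle A:", pair.measure("A"))
print("Particle B:", pair.measure("B"))   # always opposite, with no communication needed
```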

Disproving the Claim that the LHC Disproves the Existence of Ghosts

Recent articles in dozens of online magazines shout things like: “The LHC Disproves the Existence of Ghosts and the Paranormal.”

To which I respond: LOLOLOLOLOL

There are so many things wrong with this backwards scientific thinking, I almost don’t know where to start.  But here are a few…

1. The word “disproves” doesn’t belong here. It is unscientific at best. Maybe use “evidence against one possible explanation for ghosts” – I could even begin to appreciate that. But if I can demonstrate even one potential mechanism for the paranormal that the LHC couldn’t detect, you cannot use the word “disprove.” And here is one potential mechanism – an unknown force that the LHC can’t explore because its experiments are designed to measure interactions only in the four forces physicists are aware of.

The smoking gun is Brian Cox’s statement “If we want some sort of pattern that carries information about our living cells to persist then we must specify precisely what medium carries that pattern and how it interacts with the matter particles out of which our bodies are made. We must, in other words, invent an extension to the Standard Model of Particle Physics that has escaped detection at the Large Hadron Collider. That’s almost inconceivable at the energy scales typical of the particle interactions in our bodies.” So, based on that statement, here are a few more problems…

2. “almost inconceivable” is logically inconsistent with the term “disproves.”

3. “If we want some sort of pattern that carries information about our living cells to persist…” is an invalid assumption. We do not need information about our cells to persist in a traditional physical medium for paranormal effects to have a way to propagate. They can propagate by a non-traditional (unknown) medium, such as an information storage mechanism operating outside of our classically observable means. Imagine telling a couple of scientists just 200 years ago about how people can communicate instantaneously via radio waves. Their response would be “no, that is impossible because our greatest measurement equipment has not revealed any mechanism that allows information to be transmitted in that manner.” Isn’t that the same thing Brian Cox is saying?

4. The underlying assumption is that we live in a materialist reality. Aside from the fact that Quantum Mechanics experiments have disproven this (and yes, I am comfortable using that word), a REAL scientist should allow for the possibility that consciousness is independent of grey matter and create experiments to support or invalidate such hypotheses. One clear possibility is the simulation argument. Out of band signaling is an obvious and easy mechanism for paranormal effects.  Unfortunately, the REAL scientists (such as Anton Zeilinger) are not the ones who get most of the press.

5. “That’s almost inconceivable at the energy scales typical of the particle interactions in our bodies” is also bad logic. It assumes that we fully understand the energy scales typical of the particle interactions in our bodies. If scientific history has shown us anything, it is that there is more that we don’t understand than there is that we do.


Collapsing the Objective Collapse Theory

When I was a kid, I liked to collect things – coins, baseball cards, leaves, 45s, what have you. What made the category of collectible particularly enjoyable was the size and variety of the sample space. In my adult years, I’ve learned that collections have a downside – where to put everything? – especially as I continue to downsize my living space in trade for more fun locales, greater views, and better access to beaches, mountains, and wine bars. However, I do still sometimes maintain a collection, such as my collection of other people’s theories that attempt to explain quantum mechanics anomalies without letting go of objective materialism. Yeah, I know, not the most mainstream of collections, and certainly nothing I can sell on eBay, but way more fun than stamps.

The latest in this collection is a set of theories called “objective collapse” theories. These theories try to distance themselves from the ickiness (to materialists) of conscious observer-centric theories like the Copenhagen interpretation of quantum mechanics. They also attempt to avoid the ridiculousness of the exponentially explosive reality creation theories in the Many Worlds Interpretations (MWI) category. Essentially, the Objective Collapsers argue that there is a wave function describing the probabilities of properties of objects, but, rather than collapsing due to a measurement or a conscious observation, it collapses on its own due to some as-yet-undetermined, yet deterministic, process according to the probabilities of the wave function.

Huh?

Yeah, I call BS on that, and point simply to the verification of the quantum Zeno effect. Particles don’t change state while they are under observation. When you stop observing them, then they change state – not at some random time prior, as the Objective Collapse theories would imply, but at the exact time that you stop observing them. In other words, the timing of the observation is correlated with wave function collapse, completely undermining the argument that it is probabilistic or deterministic according to some hidden variables. Other better-physics-educated individuals than I (aka physicists) have also called BS on Objective Collapse theories due to other things, such as conservation of energy violations. But, of course, there is no shortage of physicists calling BS on other physicists’ theories. That, by itself, would make an entertaining collection.
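For readers unfamiliar with the quantum Zeno effect, here is the standard textbook toy calculation (a sketch, with an arbitrary choice of evolution phase): the more often a two-level system is measured during an interval in which it would otherwise evolve away from its initial state, the more likely it is to be found still in that initial state.

```python
import math

def survival_probability(n_measurements: int, total_phase: float = math.pi / 2) -> float:
    """Probability the system is still found in its initial state after the
    evolution interval, when it is measured n times at equal spacing.
    Each segment contributes a factor of cos^2(total_phase / n)."""
    return math.cos(total_phase / n_measurements) ** (2 * n_measurements)

for n in (1, 2, 5, 20, 100):
    print(f"{n:4d} measurements -> survival probability {survival_probability(n):.3f}")
# With frequent enough observation, the transition is almost completely frozen.
```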

In any case, I would be remiss if I didn’t remind the readers that the Digital Consciousness Theory completely explains all of this stuff. By “stuff,” I mean not just the anomalies, like the quantum zeno effect, entanglement, macroscopic coherence, the observer effect, and quantum retrocausality, but also the debates about microscopic vs. macroscopic, and thought experiments like the time that Einstein asked Abraham Pais whether he really believed that the moon existed only when looked at, to wit:

  • All we can know for sure is what we experience, which is subjective for every individual.
  • We effectively live in a virtual reality, operating in the context of a huge and highly complex digital substrate system. The purpose of this reality is for our individual consciousnesses to learn and evolve and contribute to the greater all-encompassing consciousness.
  • The reason that it feels “physical” or solid and not virtual is due to the consensus of experience that is built into the system.
  • This virtual reality is influenced and/or created by the conscious entities that occupy it (or “live in it” or “play in it”; choose your metaphor).
  • The virtual reality may have started prior to any virtual life developing, or it may have been suddenly spawned and initiated with us avatars representing the various life forms at any point in the past.
  • Some things in the reality need to be there to start; the universe, earth, water, air, and, in the case of the more recent invocation of reality, lots of other stuff. These things may easily be represented in a macroscopic way, because that is all that is needed in the system for the experience. Therefore, there is no need for us to create them.
  • However, other things are not necessary for our high level experience. But they are necessary once we probe the nature of reality, or if we aim to influence our reality. These are the things that are subject to the observer effect. They don’t exist until needed. Subatomic particles and their properties are perfect examples. As are the deep cause and effect relationships between reality elements that are necessary to create the changes that our intent is invoked to bring about.

So there is no need for objective collapse. Things are either fixed (the moon) or potential (the radioactive decay of a particle). The latter are called into existence as needed…

…Maybe


Comments on the Possibilist Transactional Interpretation of Quantum Mechanics, aka Models vs. Reality

Reality is what it is. Everything else is just a model.

From Plato to Einstein to random humans like myself, we are all trying to figure out what makes this world tick. Sometimes I think I get it pretty well, but I know that I am still a product of my times, and therefore my view of reality is seen through the lens of today’s technology and state of scientific advancement. As such, I would be a fool to think that I have it all figured out. As should everyone else.

At one point in our recent past, human scientific endeavor wasn’t so humble. Just a couple hundred years ago, we thought that atoms were the ultimate building blocks of reality and everything could be ultimately described by equations of mechanics. How naïve that was, as 20th century physics made abundantly clear. But even then, the atom-centric view of physics was not reality. It was simply a model. So is every single theory and equation that we use today, regardless of whether it is called a theory or a law: relativistic motion, Schrödinger’s equation, String Theory, the 2nd Law of Thermodynamics – all models of some aspect of reality.

We seek to understand our world and derive experiments that push forward that knowledge. As a result of the experiments, we define models to best fit the data.

One of the latest comes from quantum physicist Ruth Kastner in the form of a model that better explains the anomalies of quantum mechanics. She calls the model the Possibilist Transactional Interpretation of Quantum Mechanics (PTI), an updated version of John Cramer’s Transactional Interpretation of Quantum Mechanics (TIQM, or TI for short) proposed in 1986. The transactional nature of the theory comes from the idea that the wavefunction collapse behaves like a transaction in that there is an “offer” from an “emitter” and a “confirmation” from an “absorber.” In the PTI enhancement, the offers and confirmations are considered to be outside of normal spacetime and therefore the wavefunction collapse creates spacetime rather than occurs within it. Apparently, this helps to explain some existing anomalies, like uncertainty and entanglement.

This is all cool and seems to serve to enhance our understanding of how QM works. However, it is STILL just a model, and a fairly high level one at that. And all models are approximations, approximating a description of reality that most closely matches experimental evidence.

Underneath all models exist deeper models (e.g. string theory), many as yet to be supported by real evidence. Underneath those models may exist even deeper models. Consider this layering…


Every layer contains models that may be considered to be progressively closer to reality. Each layer can explain the layer above it. But it isn’t until you get to the bottom layer that you can say you’ve hit reality. I’ve identified that layer as “digital consciousness”, the working title for my next book. It may also turn out to be a model, but it feels like it is distinctly different from the other layers in that, by itself, it is no longer an approximation of reality, but rather a complete and comprehensive yet elegantly simple framework that can be used to describe every single aspect of reality.

For example, in Digital Consciousness, everything is information. The “offer” is then “the need to collapse the wave function based on the logic that there is now an existing conscious observer who depends on it.” The “confirmation” is the collapse – the decision made from probability space that defines positions, spins, etc. This could also be seen as the next state of the state machine that defines such behavior. The emitter and absorber are both parts of the “system”, the global consciousness that is “all that there is.” So, if experimental evidence ultimately demonstrates that PTI is a more accurate interpretation of QM, it will nonetheless still be a model and an approximation. The bottom layer is where the truth is.

Elvidge’s Postulate of Countable Interpretations of QM…

The number of interpretations of Quantum Mechanics always exceeds the number of physicists.

Let’s count the various “interpretations” of quantum mechanics:

  • Bohm (aka Causal, or Pilot-wave)
  • Copenhagen
  • Cosmological
  • Ensemble
  • Ghirardi-Rimini-Weber
  • Hidden measurements
  • Many-minds
  • Many-worlds (aka Everett)
  • Penrose
  • Possibilist Transactional (PTI)
  • Relational (RQM)
  • Stochastic
  • Transactional (TIQM)
  • Von Neumann-Wigner
  • Digital Consciousness (DCI, aka Elvidge)

Unfortunately you won’t find the last one in Wikipedia. Give it about 30 years.


The Berenstein Bears – The Smoking Gun of The Matrix?

Hollywood has had a great deal of fun with the ideas of time loops, alternate universes, reality shifts, and parallel timelines – “glitch in the Matrix”, “Groundhog Day”, “Back to the Future”, to name a few that have entered our collective consciousness.

But that’s just entertainment.

In our reality, once in a while, something seems to be amiss in a similar manner. Years ago, there was some speculation about the “Mandela Effect”, the idea that many people seem to have remembered that Nelson Mandela died in prison, which, of course, he didn’t.

At least not in this universe.

It seems that this was sort of a “soft glitch”, because only some people remembered the event – one of those cases where you don’t quite remember where you heard the news, but it is in your memory. Perhaps it was just an urban legend that got passed around through word of mouth.

Then, yesterday, one of my friends posted this link on Facebook about the apparent glitch in reality where the Berenstein Bears became the Berenstain Bears:

I remember it being pronounced “Ber-en-steen” and spelled “Berenstein.” Do you? It turns out that not only do all of the friends and colleagues I asked remember it that way, but so do most of the people who have weighed in on various blogs and articles about this topic throughout the Internet and Twitterverse. The originators of the book series only recall their names as “Berenstain” and seem perplexed by everyone else’s recollection. Is it a case of mass confusion, an example of a parallel universe in action, or a rare and extreme piece of evidence that our reality is purely subjective?

MWI (Many Worlds Interpretation) quantum theorists would have one possible yet incomplete explanation. In this theory, reality bifurcates constantly every time a quantum mechanical decision needs to be made (which occurs at the subatomic particle level countless times per second, and may be influenced by the observer effect). The figure below demonstrates. At some point, one of the ancestors of Stan and Jan Berenstein, the creators of the Berenstein Bear book series, encountered a situation where his name could have been spelled one of two ways. Perhaps it was at Ellis Island, where such mistakes were common. For whatever reason, the universe bifurcated into one where the ancestor in question retained his original name, Berenstein, and another where the ancestor received a new spelling of his name, Berenstain (or vice versa; it doesn’t matter). We and/or all of our ancestors travelled down the Berenstein path. Our doppelgängers went down the Berenstain path.


According to MWI, all of these realities exist in something called Hilbert Space and there is no ability to travel from one to another. This is where MWI fails, because we are all in the Berenstain path now, but seem to remember the Berenstein path. So, for some reason (reality just messing with us?) we all jumped from one point in Hilbert Space to another. If Hilbert Space allowed for this, then this idea might have some validity. But it doesn’t. Furthermore, not everyone experienced the shift. Just ask the Berenstains. MWI can’t explain this.

The flaw is in the assumption that “we” are entirely in one of these realities. “We,” as has been discussed countless times in this blog and in my book, are experiencing a purely subjective experience. It is the high degree of consensus between each of us “conscious entities” that fools us into thinking that our reality is objective and deterministic. Physics experiments have proven beyond a reasonable doubt that it is not.

So what is going on?

My own theory, Digital Consciousness (fka “Programmed Reality”), has a much better, comprehensive, and perfectly consistent explanation (note: this has the same foundation as Tom Campbell’s theory, “My Big TOE”). See the figure below.


“We” are each a segment of organized information in “all that there is” (ATTI). Hence, we feel individual, but are connected to the whole. (No time to dive into how perfectly this syncs with virtually every spiritual experience throughout history, but you probably get it.) The “Reality Learning Lab” (RLL) (Campbell) is a different set of organized information within ATTI. The RLL is what we experience every day while conscious. (While meditating, or in deep sleep, we are connected elsewhere.) It is where all of the artifacts representing Berenstein or Berenstain exist. It is where various “simulation” timelines run. The information that represents our memories is in three places:

  1. The “brain” part of the simulation. Think of this as our cache.
  2. The temporary part of our soul’s record (or use the term “spirit”, “essence”, “consciousness”, “Being”, or whatever you prefer – words don’t matter), which we lose when we die. This is the stuff our “brain” has full access to, especially when our minds are quiet.
  3. The permanent part of our soul’s record; what we retain from life to life, what we are here to evolve and improve, what in turn contributes to the inexorable evolution of ATTI. Values and morality are here. Irrelevant details like the spelling of Berenstein don’t belong.

For some reason, ATTI decided that it made sense to replace Berenstein with Berenstain in all of the artifacts of our reality (books, search engine records, etc.). But the consciousness data stores did not get rewritten when that happened, and so we still have a long-term recollection of “Berenstein.”
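A toy sketch of the bookkeeping being described – purely illustrative, with store names of my own invention: the Reality Learning Lab artifacts get rewritten, while the consciousness-side data stores are skipped, which is exactly the mismatch people report.

```python
# Illustrative model of the described mechanism -- not a claim about how ATTI works.
rll_artifacts = {"book_cover": "Berenstein Bears", "search_index": "Berenstein Bears"}
brain_cache = "Berenstein Bears"          # the in-simulation "brain" memory
soul_record = "Berenstein Bears"          # temporary record of this life's experiences

def rewrite_artifacts(new_spelling: str) -> None:
    """ATTI-style edit: every artifact in the Reality Learning Lab is updated,
    but the consciousness data stores are deliberately (or accidentally) skipped."""
    for key in rll_artifacts:
        rll_artifacts[key] = new_spelling

rewrite_artifacts("Berenstain Bears")
print(rll_artifacts)                      # all physical evidence now says 'Berenstain'
print(brain_cache, "/", soul_record)      # but remembered experience still says 'Berenstein'
```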

Why? ATTI just messing with us? Random experiment? Glitch?

Maybe ATTI is giving us subtle hints that it exists, that “we” are permanent, so that we use the information to correct our path?

We can’t know. ATTI is way beyond our comprehension.

Which came first, the digital chicken, or the digital philosophy egg?

Many scientists, mathematicians, futurists, and philosophers are embracing the idea that our reality is digital these days. In fact, it would be perfectly understandable to wonder if digital philosophy itself is tainted due to the tendency of humans to view ideas through the lens of their times. We live in a digital age, surrounded by computers, the Internet, and smart phones, and so might we not be guilty of imagining that the world behaves just as a multi-player video game does? We probably wouldn’t have had such ideas 50 years ago, when, at a macroscopic level at least, everything with which we interacted appeared analog and continuous. Which came first, the digital chicken, or the digital philosophy egg?

Actually, the concepts of binary and digital are not at all new. The I Ching is an ancient Chinese text that dates to 1150 BCE. In it are 64 combinations of 8 trigrams (aka the Bagua), each of which clearly contains the first three bits of a binary code.

Many other cultures, including the Mangareva in Polynesia (1450) and India (5th to 2nd century BCE), have used binary encodings for communication for thousands of years. Over 12,000 years ago, African tribes developed a binary divination system called Odu Ifa.

German mathematician and philosopher Gottfried Leibniz is generally credited as developing the modern binary number system in 1679, based on zeros and ones. Naturally, all of these other cultures are ignored so that we can maintain the illusion that all great philosophical and mathematical thought originated in Europe. Regardless of Eurocentric biases, it is clear that binary encoding is not a new concept. But what about applying it to the fundamental construct of reality?

It turns out that while modern digital physics or digital philosophy references are replete with sources that only date to the mid-20th century, the ancient Greeks (namely Democritus and Plato) believed that reality was discrete. Atoms were considered to be discrete and fundamental components of reality.

A quick clarification of the terms “discrete”, “digital”, “binary”, “analog”, and “continuous” is probably in order:

Discrete – Having distinct points of measurement in the time domain

Digital – Having properties that can be encoded into bits

Binary – Encoding that is done with only two digits, zeros and ones

Analog – Having continuously variable properties

Continuous – The time domain is continuous

So, for example, if we encode the value of some property (e.g. length or voltage) digitally using 3 values (0, 1, 2), that would be digital, but not binary (rather, ternary). If we say that between any two points in time there is an infinitely divisible time element, but for each point the value of the measurement being performed on some property is represented by bits, then we would have a continuous yet digital system. Conversely, if time can be broken into chunks such that at a fine enough temporal granularity there is no concept of time between two adjacent points in time, but at each of these time points the value of the measurement being performed is continuously variable, then we would have a discrete yet analog system.
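To make the distinction concrete, here is a small Python sketch (my own illustration, with made-up helper names like discretize_time and digitize_value) showing that chunking the time axis and encoding the value axis in bits are two independent choices; the picture favored below, discrete and digital, applies both at once.

```python
import math

# Toy sketch (my own illustration): the time axis and the value axis can each
# independently be "chunked" or continuous.

def signal(t):
    """Some underlying property we might measure, as a function of time."""
    return math.sin(t)

def discretize_time(t_end, dt):
    """Discrete time: only multiples of dt exist; there is nothing 'between' ticks."""
    return [n * dt for n in range(int(t_end / dt) + 1)]

def digitize_value(x, levels=8, lo=-1.0, hi=1.0):
    """Digital value: snap a reading onto one of a finite set of levels.
    With levels=8 each reading fits in 3 bits; levels=3 would be ternary, not binary."""
    step = (hi - lo) / (levels - 1)
    return lo + round((x - lo) / step) * step

# Discrete AND digital: chunked time, bit-encodable values.
for t in discretize_time(t_end=1.0, dt=0.25):
    print(f"t={t:.2f}  raw={signal(t):+.4f}  digitized={digitize_value(signal(t)):+.4f}")

# A "continuous yet digital" system would keep digitize_value but allow any real t;
# a "discrete yet analog" system would keep discretize_time but report raw values exactly.
```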

In the realm of consciousness-driven digital philosophy, it is my contention that the evidence strongly supports reality being discrete and digital; that is, time moves on in “chunks” and at each discrete point in time, every property of everything can be perfectly represented digitally. There are no infinities.

I believe that this is a logical and fundamental conclusion, regardless of the fact that we live in a digital age. There are many reasons for this, but for the purposes of this particular blog post, I shall only concentrate on a couple. Let’s break down the possibilities of our reality, in terms of origin and behavior:

  1. Type 1 – Our reality was created by some conscious entity and has been following the original rules established by that entity. Of course, we could spend a lifetime defining “conscious” or “entity” but let’s try to keep it simple. This scenario could include traditional religious origin theories (e.g. God created the heavens and the earth). It could also include the common simulation scenarios, a la Nick Bostrom’s “Simulation Argument.”
  2. Type 2 – Our reality was originally created by some conscious entity and has been evolving according to some sort of fundamental evolutionary law ever since.
  3. Type 3 – Our reality was not created by some conscious entity, and its existence sprang out of nothing and has been following primordial rules of physics ever since. To explain the fact that our universe is incredibly finely-tuned for matter and life, materialist cosmologists dreamt up the idea that we must exist in an infinite set of parallel universes, and via the anthropic principle, the one we live in only appears finely-tuned because it has to in order for us to be in it. Occam would be turning over in his grave.
  4. Type 4 – Our reality was not created by some particular conscious entity, but rather has been evolving according to some sort of fundamental evolutionary law from the very beginning.

I would argue that in the first two cases, reality would have to be digital. For, if a conscious entity is going to create a world for us to live in and experience, that conscious entity is clearly highly evolved compared to us. And, being so evolved, it would certainly make use of the most efficient means to create a reality. A continuous reality is not only inefficient, it is theoretically impossible to create because it involves infinities in the temporal domain as well as any spatial domain or property.

I would also argue that in the fourth case, reality would have to be digital for similar reasons. Even without a conscious entity as a creator, the fundamental evolutionary law would certainly favor a perfectly functional reality that doesn’t require infinite resources.

Only in the third case above, would there be any possibility of a continuous analog reality. Even then, it is not required. As MIT cosmologist and mathematician Max Tegmark succinctly put it, “We’ve never measured anything in physics to more than about sixteen significant digits, and no experiment has been carried out whose outcome depends on the hypothesis that a true continuum exists, or hinges on nature computing something uncomputable.” Hence there is no reason to assume, a priori, that the world is continuous. In fact, the evidence points to the contrary:

  • Infinite resolution would imply that matter implodes into black holes at sub-Planck scales and we don’t observe that.
  • Infinite resolution implies that relativity and quantum mechanics can’t coexist, at least with the best physics that we have today. Our favorite contenders for reconciling relativity and quantum mechanics are string theory and loop quantum gravity, and both only work with minimal-length (aka discrete) scales.
  • We actually observe discrete behavior in quantum mechanics. For example, a particle’s spin value is always quantized; there are no intermediate states. This is anomalous in continuous space-time.

For many other reasons, as are probably clear from the evidence compiled on this site, I tend to favor reality Type 4. No other type of reality structure and origin can be shown to be anywhere near as consistent with all of the evidence (philosophical, cosmological, mathematical, metaphysical, and experimental). And it has nothing to do with MMORPGs or the smart phone in my pocket.

Macroscopic Coherence Explained

Coherence is a general property of a system whereby the components of that system all act in a similar manner. Coherent light is what makes lasers what they are – an alignment of the photons’ waveform phases (why cats chase them is a little harder to explain). Superconductivity, a property of zero resistance to electrical flow that was formerly only observed at temperatures near absolute zero, is closely related in that the electron pairs in the superconducting material move coherently. Quantum entanglement is an example of perfect coherence between two or more particles, in that they act as a single system no matter how far away from each other you take them. Einstein famously referred to this property as “spooky action at a distance.” The Bose-Einstein condensate is another state of matter that exists at extremely low temperatures and involves a system of particles that have all settled into the lowest quantum state, and hence, are coherent.

Over the years, clever experimental scientists have pushed the boundaries of coherence from extreme cryogenics and quantum scales to room temperatures and macroscopic scales. Author and fellow truth seeker Anthony Peake posted an article today about experiments that are being done at various research institutes which demonstrate how the contents of liquid containers connected by arbitrarily thin channels exhibit “action at a distance” macroscopically.

Once again, such anomalies have scientists scratching their heads for explanations; that is, scientists who cling to the never-proven pre-assumed dogma of objective materialism. Entanglement and macroscopic action at a distance find no home in this religion.

However, over here at “Consciousness-based Digital Reality” Central, we enjoy the simplicity of fitting such anomalies into our model of reality. 🙂

It all follows from three core ideas:

  1. That all matter is ultimately comprised of data (“it from bit” as John Wheeler would say) and that forces are simply the rules of how the complex data structures that form particles interact with each other.
  2. That consciousness, which is also organized data, interacts with the components of reality according to other rules of the overall system (this greater System being “reality”, “the universe”, God, “all that there is” or whatever you want to call it).
  3. The System evolves according to what Tom Campbell calls the “Fundamental Rule.” Similar to evolution, the system changes state and evolves in the direction of more profitable or useful states and away from less useful states.

Because of #3, our system has evolved to be efficient. As such, it would likely not be wasteful. So, when an observer observes (consciousness interacts with) a pair of particles in proximity to each other, the system sets their states (collapsing the wave function) and the rules of their behavior (a finite state machine) to be coherent simply out of efficiency. That is, each particle is set to the same finite state machine, and forever behaves that way no matter how far apart you take them (distance being a virtual concept in a virtual digital world).

So what prevents the same logic from applying to macroscopic collections of coherent particles? Nothing. In fact, it is inevitable. These clever scientists have learned methods to establish a coherent identical quantum state across huge quantities of particles (aka macroscopic). At the point at which the experimenter creates this state and observes it, the system establishes the state machines for all of the particles at once, since they are all to be in the same quantum state. And so we get room temperature superconductivity and macroscopic containers of liquid that demonstrate non-locality.
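As a purely illustrative sketch of this “shared finite state machine” picture (a toy metaphor for the model described above, not a quantum simulation; the class and function names are my own invention), consider two particles that simply hold references to one and the same state machine. Measuring either one returns the same, once-set state, and “distance” never enters the logic at all.

```python
import random

# Toy metaphor for the shared-state-machine idea described above.
# Not physics: just two objects referencing one underlying state record.

class SharedStateMachine:
    """Created when an observer first interacts with a coherent pair or ensemble.
    Every particle bound to it reads the same state, regardless of 'distance'."""
    def __init__(self):
        self.spin = None  # undefined until the first observation "collapses" it

    def observe(self):
        if self.spin is None:
            self.spin = random.choice(["up", "down"])  # state is set once, on first look
        return self.spin

class Particle:
    def __init__(self, shared):
        self.shared = shared  # a reference, not a copy: nothing needs to travel between particles

    def measure(self):
        return self.shared.observe()

# Observation creates one rule set for both particles at once...
machine = SharedStateMachine()
a, b = Particle(machine), Particle(machine)

# ...so measuring either one, at any "distance", gives the correlated result.
print(a.measure(), b.measure())  # always identical, e.g. "up up"
```

Extending the sketch to a macroscopic case is just a matter of binding many Particle objects to the same SharedStateMachine, which is the intuition behind the coherent-ensemble argument above.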
