How to Survive an AI Apocalypse – Part 9: The Stabilization Effect

PREVIOUS: How to Survive an AI Apocalypse – Part 8: Fighting Back

Here’s where it gets fun.

Or goes off the rails, depending on your point of view.

AI meets Digital Philosophy meets Quantum Mechanics meets UFOs.

This entire blog series has been about surviving an AI-based Apocalypse, a very doomsday kind of event. For some experts, this is all but inevitable. You readers may be coming to a similar conclusion.

But haven’t we heard this before? Doomsday prophecies have been around as long as… Keith Richards. The Norse Ragnarök, the Hindu prophecy of the end times during the current age of Kaliyuga, the Zoroastrian Renovation, and of course, the Christian Armageddon. An ancient Assyrian tablet dated 2800-2500 BCE tells of corruption and unruly teenagers and prophesies that “earth is in its final days; the world is slowly deteriorating into a corrupt society that will only end with its destruction.” Fast forward to the modern era, where the Industrial Revolution was going to lead to the world’s destruction. We have since had the energy crisis, the population crisis, and the doomsday clock ticking down to nuclear armageddon. None of it ever comes to pass.

Is the AI apocalypse more of the same, or is it frighteningly different in some way? This Part 9 of the series will examine such questions and present a startling conclusion that all may be well.

THE NUCLEAR APOCALYPSE

To get a handle on the likelihood of catastrophic end times, let’s take a deep dive into the specter of a nuclear holocaust.

It’s hard for many of us to appreciate what a frightening time it was in the 1950s, as people built fallout shelters and children regularly executed duck and cover drills in the classrooms.

Often considered to be the most dangerous point of the Cold War, the 1962 Cuban Missile Crisis was a standoff between the Soviet Union and the United States involving the deployment of Soviet missiles in Cuba. At one point the US Navy began dropping depth charges to force a nuclear-armed Soviet submarine to surface. The crew on the sub, having had no radio communication with the outside world, didn’t know whether war was breaking out or not. The captain, Valentin Savitsky, wanted to launch a nuclear weapon, but a unanimous decision among the three top officers was required for launch. Vasily Arkhipov, the second in command, was the sole dissenting vote and even got into an argument with the other two officers. His courage effectively prevented the nuclear war that was likely to result. Thomas S. Blanton, later the director of the US National Security Archive, called Arkhipov “the man who saved the world.”

But that wasn’t the only time we were a hair’s breadth away from the nuclear apocalypse.

On May 23, 1967, US military commanders issued a high alert due to what appeared to be jammed missile detection radars in Alaska, Greenland, and the UK. Because radar jamming was considered an act of war, they authorized preparations for war, including the deployment of aircraft armed with nuclear weapons. Fortunately, a NORAD solar forecaster identified the reason for the jammed radar – a massive solar storm.

Then, on the other side of the Iron Curtain, on 26 September 1983, with international tensions still high after the recent Soviet shootdown of Korean Air Lines Flight 007, a nuclear early-warning system in Moscow reported that 5 ICBMs (intercontinental ballistic missiles) had been launched from the US. Lieutenant Colonel Stanislav Petrov was the duty officer at the command center and suspected a false alarm, so he awaited confirmation before reporting, thereby disobeying Soviet protocol. He later said that had he not been on shift at that time, his colleagues would have reported the missile launch, likely triggering a nuclear war.

In fact, over the years there have been at least 21 nuclear war close calls, any of which could easily have led to a nuclear conflagration and the destruction of humanity. The following timeline, courtesy of the Future of Life Institute, shows how many occurred in just the 30-year period from 1958 to 1988.

It kind of makes you wonder what else could go wrong…

END OF SOCIETY PREDICTED

Another modern age apocalyptic fear was driven by the recognition that exponential growth and limited resources are ultimately incompatible. At the time, the world population was growing exponentially and important resources like oil and arable land were being depleted. The Rockefeller Foundation partnered with the OECD (Organization for Economic Cooperation and Development) to form The Club of Rome, a group of current and former heads of state, scientists, economists, and business leaders to discuss the problem and potential solutions. In 1972, with the support of computational modeling from MIT, they issued their first report, The Limits to Growth, which painted a bleak picture of the world’s future. Some of the predictions (and their ultimate outcomes) follow:

Another source for this scare was the book The Population Bomb by Stanford biologist Paul Ehrlich. He and people like Harvard biologist George Wald also made some dire predictions…

There is actually no end to failed environmental apocalyptic predictions – too many to list. But a brief smattering includes:

  • “Unless we are extremely lucky, everyone will disappear in a cloud of blue steam in 20 years.” (New York Times, 1969)
  • “UN official says rising seas to ‘obliterate nations’ by 2000.” (Associated Press, 1989)
  • “Britain will Be Siberian in less than 20 years” (The Guardian, 2004)
  • “Scientist Predicts a New Ice Age by 21st Century” (Boston Globe, 1970)
  • “NASA scientist says we’re toast. In 5-10 years, the arctic will be ice free.” (Associated Press, 2008)

Y2K

And who could forget this apocalyptic gem…

My intent is not to cherry-pick the poor predictions and make fun of them. It is simply that when we are swimming in a sea of impending doom, it is really hard to see the way out. And yet, there does always seem to be a way out.

Sometimes it is mathematical. For example, there was a mathematical determination of when we would run out of oil, based on known supply and rate of usage, perhaps factoring in the trend of increasing usage. But what were not factored into the equation were the counter-effects of the rate at which new reserves were being discovered and the improvements in engine efficiencies. One could argue that in the latter case the scare achieved its purpose, just as the fear of global warming has resulted in a number of new environmental policies and laws, such as California’s upcoming ban on gasoline-powered vehicles in 2035. However, that isn’t always the case. Many natural resources, for instance, seem to actually be increasing in supply. I am not necessarily arguing for something like the abiotic oil theory.

However, at the macro level, doesn’t it sometimes feel like a game of Civilization, where we are given a set of resources, cause-and-effect interrelationships, and the ability to acquire certain skills? In the video game, when we fail on an apocalyptic level, we simply hit the reset button and start over. In real life we can’t do that. Yet doesn’t it seem like the “game makers” always hand us a way out, such as unheard-of new technologies that are suddenly enabled? And it isn’t always human ingenuity that saves us. Sometimes the right person is on duty at the perfect time, against all odds. Sometimes oil fields magically replenish on their own. Sometimes asteroids strike the most remote place on the planet.

THE STABILIZATION EFFECT

In fact, it seems statistically significant that apocalypses, while seemingly imminent, NEVER really occur. So much so that I decided to model it with a spreadsheet using random number generation (also demonstrating how weak my programming skills have gotten). The intent of the model is to encapsulate the state of humanity on a simple timeline using a parameter called “Mood” for lack of a better term. We start at a point in society that is neither euphoric (the Roaring Twenties) nor disastrous (the Great Depression). As time progresses, events occur that push the Mood in one direction or the other, with a 50/50 chance of either occurring. The assumption in this model is that no matter what the Mood is, it can still get better or worse with equal probability. Each of the following graphs depicts a randomly generated timeline.

On the graph are two thresholds – one of a positive nature, where things seemingly can’t get much better, and one of a negative nature, where all it should take is a nudge to send us down the path to disaster. In any of the situations we’ve discussed in this part of the series, when we are on the brink of apocalypse, the statistical likelihood that the situation would improve at that point should be no better than 50/50. If true, running a few simulations shows that an apocalypse is actually fairly likely. Figures 1 and 3 pop over the positive limit and then turn back toward neutral. Figure 2 seems to take off in the positive direction even after passing the limit. Figure 4 hits and crosses the negative limit several times, implying that if our reality worked this way, apocalyptic situations would be common.
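The spreadsheet model described above can be sketched in a few lines of Python (a reconstruction of the idea, not the original spreadsheet; the function name and defaults are mine):

```python
import random

def mood_timeline(steps=1000, seed=None):
    """Random-walk model of societal 'Mood': each event nudges the
    mood up or down with equal (50/50) probability."""
    rng = random.Random(seed)
    mood, path = 0, [0]
    for _ in range(steps):
        mood += rng.choice([-1, 1])  # 50/50 better or worse
        path.append(mood)
    return path

# With no stabilizing force, the walk drifts freely, so it regularly
# wanders past any fixed positive or negative threshold.
path = mood_timeline(steps=1000, seed=42)
```

Plotting a few runs of `path` reproduces the kind of timelines shown in the figures: an unconstrained 50/50 walk routinely crosses both limits.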

However, what always seems to happen is that when things get that bad, there is a stabilizing force of some sort. I made an adjustment to my reality model by inserting some negative feedback to model this stabilizing effect. For those unfamiliar with the term, complex systems can have positive or negative feedback loops; often both. Negative feedback tends to bring a system back to a stable state. Examples in the body include the maintenance of body temperature and blood sugar levels. If blood sugar gets too high, the pancreas secretes insulin which chemically reduces the level. When it gets too low, the pancreas secretes glucagon which increases the level. In nature, when the temperature gets high, cloud level increases, which provides the negative feedback needed to reduce the temperature. Positive feedback loops also exist in nature. The runaway greenhouse effect is a classic example.

When I applied the negative feedback to the reality model, all curves tended to stay within the positive and negative limits, as shown below.
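Adding the stabilizing force to the sketch is a small change: a minimal version follows, where my linear feedback term is illustrative rather than the exact formula from the original spreadsheet.

```python
import random

def stabilized_timeline(steps=5000, gain=0.02, seed=None):
    """Random walk with negative feedback: the further the mood
    strays from neutral, the more the odds tilt back toward zero."""
    rng = random.Random(seed)
    mood, path = 0, [0]
    for _ in range(steps):
        # Feedback term: upward-step probability shrinks as mood rises
        # and grows as mood falls, pulling the system back to neutral.
        p_up = min(1.0, max(0.0, 0.5 - gain * mood))
        mood += 1 if rng.random() < p_up else -1
        path.append(mood)
    return path

# The feedback keeps the walk hovering around neutral instead of
# drifting through the positive or negative limits.
path = stabilized_timeline(seed=1)
```

With `gain=0.02` the upward probability reaches zero at a mood of +25 (and one at -25), so the walk is mathematically confined near neutral, mirroring the bounded curves in the figure.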

Doesn’t it feel like this is how our reality works at the most fundamental level? But how likely would it be that every aspect of our reality is subject to negative feedback? And where does that negative feedback come from?

REALITY IS ADAPTIVE

This is how I believe that reality works at its most fundamental level…

Why would that be? Two obvious ideas come to mind.

  1. Natural causes – this would be the viewpoint of reductionist materialist scientists. Heat increase causes ice sheets to melt, which creates more water vapor, generating more clouds and reducing the heating effect of the sun. But this does not at all explain why the human condition, and the civilization trends that we’ve discussed in this article, always tend toward neutral.
  2. God – this would be the viewpoint of people whose beliefs are firmly grounded in their religion. God is always intervening to prevent catastrophes. But apparently God doesn’t mind minor catastrophes and plenty of pain and suffering in general. More importantly though, this does not explain dynamic reality generation.

DYNAMIC REALITY GENERATION

Enter Quantum Mechanics.

The double-slit experiment was first performed by Thomas Young back in 1801 as an attempt to determine whether light was composed of particles or waves. A beam of light was projected at a screen with two vertical slits. If light were composed of particles, only two bands of light should appear on the phosphorescent screen behind the one with the slits. If it were wave-based, an interference pattern should result. The wave theory was initially confirmed experimentally, but that was later called into question by Einstein and others.

The experiment was later done with particles, like electrons, and it was widely assumed that these would be shown to be hard, fixed particles, generating the expected two-band pattern shown on the right.

However, what resulted was an interference pattern, implying that the electrons were actually waves. Thinking that perhaps the electrons were interfering with each other, the experimenters modified the setup to shoot one electron at a time. And still the interference pattern slowly built up on the back screen.

To make sense of the interference pattern, experimenters wondered if they could determine which slit each electron went through, so they put a detector before the double slit. Et voilà, the interference pattern disappeared! It was as if the conscious act of observation converted the electrons from waves to particles. The common interpretation was that the electrons actually exist only as a probability function, and the observation snaps them into existence.

It is very much like the old adage that a tree falling in the woods makes no sound unless someone is there to hear it. Of course, this idea of putting consciousness as a parameter in the equations of physics generated no end of consternation for the deterministic materialists. They have spent the last twenty years designing experiments to disprove this “Observer Effect,” to no avail. Even when the “which way” detector is placed after the double slit, the interference pattern disappears. The only tenable conclusion is that reality does not exist in an objective manner and that its instantiation depends on something. But what?

The diagram below helps us visualize the possibilities. When does reality come into existence?

Clearly it is not at points 1, 2, or 3, because it isn’t until the “which way” detector is installed that we see the shift in reality. So is it due to the detector itself, or to the conscious observer reading the results of the detector? One could imagine experiments where the results of the “which way” detector are hidden from the conscious observer for an arbitrary period of time – maybe printed out and sealed in an envelope without looking, where it sits on a shelf for a day while the interference pattern persists. Then someone opens the envelope and suddenly the interference pattern disappears. I have always suspected that the answer will be that reality comes into existence at point 4. I believe it is simply logical that a reality-generating universe be efficient. Recent experiments bear this out.

I believe this says something incredibly fundamental about the nature of our reality. But what would efficiency have to do with the nature of reality? Let’s explore a little further – what kinds of efficiencies would this lead to?

POP QUIZ! – is reality analog or digital? There is actually no conclusion to this question and many papers have been written in support of either point of view. But if our reality is created on some sort of underlying construct, there is only one answer – it has to be digital. Here’s why…

How much information would it take to fully describe the cup of coffee on the right?

In an analog reality, it would take an infinite amount of information.

In a digital reality, fully modeled at the Planck resolution (what some people think is the deepest possible digital resolution), it would require 4×10⁷¹ bits/second, give or take. It’s a huge number for sure, but infinitely less than the analog case.

But wait a minute. Why would we need that level of information to describe a simple cup of coffee? So let’s ask a different question: how much information is needed for a subjective human experience of that cup of coffee – the smell, the taste, the visual experience? You don’t really need to know the position and momentum vector of every subatomic particle in every molecule of coffee in that cup. All you need to know is what it takes to experience it. The answer is roughly 1×10⁹ bits/second. In other words, there could be as much as a 4×10⁶² factor of compression involved in generating a subjective experience. We don’t really need to know where each electron is in the coffee, just as you don’t need to know which slit each electron goes through in the double slit experiment. That is, UNTIL YOU MEASURE IT!
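The compression factor above is just the ratio of the two estimates; a quick arithmetic check (using the article’s exponents, which are not derived here):

```python
# Rough arithmetic check on the article's estimates.
full_state = 4e71   # bits/second to model the cup at Planck resolution
experience = 1e9    # bits/second for the subjective experience

compression = full_state / experience  # factor of compression
print(f"compression factor: {compression:.0e}")
```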

So, the baffling results of the double slit experiments actually make complete sense if reality is:

  • Digital
  • Compressed
  • Dynamically generated to meet the needs of the inhabitants of that reality

Sounds computational, doesn’t it? In fact, if reality were a computational system, it would make sense for it to require efficiencies at this level.

There are such systems – one well-known example is the video game No Man’s Sky, which dynamically generates its universe as the user plays the game. Art inadvertently imitating life?
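The trick behind that style of procedural generation can be sketched in a few lines (a toy illustration, not No Man’s Sky’s actual algorithm; the function name is mine): each region is derived deterministically from a seed plus its coordinates, so nothing is stored or computed until someone looks.

```python
import hashlib

def region_content(world_seed: int, x: int, y: int) -> int:
    """Derive a region's content deterministically from the world
    seed and its coordinates -- nothing exists until observed, yet
    revisiting always yields the same region."""
    key = f"{world_seed}:{x}:{y}".encode()
    # Hash the seed+coordinates and use 4 bytes as the region's "content"
    return int.from_bytes(hashlib.sha256(key).digest()[:4], "big")

# Only regions a player actually visits are ever generated, but the
# world still appears persistent and consistent.
assert region_content(42, 10, -3) == region_content(42, 10, -3)
```

Nothing about the unvisited regions is ever stored; persistence is an illusion created by determinism, which is exactly the kind of efficiency the article attributes to a generated reality.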

Earlier in this article I suggested that the concept of God could explain the stabilization effect of our reality. If we redefine “God” to mean “All That There Is” (of which, our apparent physical reality is only a part), reality becomes a “learning lab” that needs to be stable for our consciousnesses to interact virtually.

I wrote about this and proposed this model back in 2007 in my first book, “The Universe-Solved!” In 2021, an impressive set of physicists and technologists came up with the same theory, which they called “The Autodidactic Universe.” They collaborated to explore methods, structures, and topologies by which the universe might be learning and modifying its laws according to what is needed. Such ideas included neural nets and Restricted Boltzmann Machines. This provides an entirely different way of looking at any potential apocalypse. And it makes you wonder…

UFO INTERVENTION

In 2021, over one hundred military personnel, including Retired Air Force Captain Robert Salas, Retired First Lieutenant Robert Jacobs, and Retired Captain David Schindele met at the National Press Club in Washington, DC to present historical case evidence that UFOs have been involved with disarming nuclear missiles. A few examples…

  • Malmstrom Air Force Base, Montana, 1967 – “a large glowing, pulsating red oval-shaped object hovering over the front gate,” as alarms went off indicating that nearly all 10 missiles shown in the control room had been disabled.
  • Minot Air Force Base, North Dakota, 1966 – Eight airmen said that 10 missiles at silos in the vicinity all went down with guidance and control malfunctions when an 80- to 100-foot wide flying object with bright flashing lights had hovered over the site.
  • Vandenberg Air Force Base, California, 1964 – “It went around the top of the warhead, fired a beam of light down on the top of the warhead.” After circling, it “then flew out of the frame the same way it had come in.”
  • Ukraine, 1982 – launch countdowns were activated for 15 seconds while a disc-shaped UFO hovered above the base, according to declassified KGB documents

As the History Channel reported, areas of high UFO activity are correlated with nuclear and military facilities worldwide.

Perhaps UFOs are an artifact of our physical reality learning lab, under the control of some conscious entity or possibly even an autonomous (AI) bot in the system, as part of the “autodidactic” programming mechanisms that maintain stability in our programmed reality. Other mechanisms could involve adjusting the availability of certain resources or even nudging consciousnesses toward solutions to problems. If this model of reality is accurate, we may find that we have little to worry about regarding an AI apocalypse. Instead, it will just be another force that contributes to our evolution.

To that end, there is also a sector of thinkers who recommend a different approach. Rather than fight the AI progression, or simply let the chips fall, we should welcome our AI overlords and merge with them. That scenario will be explored in Part 10 of this series.

NEXT: How to Survive an AI Apocalypse – Part 10: If You Can’t Beat ’em, Join ’em

Wigner’s Friend likes Digital Consciousness

Apparently your reality may be different from mine. Wait, what???

Several recent studies have demonstrated to an extremely high degree of certainty that objective reality does not exist. This year, adding to the mounting pile of evidence for a consciousness-centric reality, came the results of an experiment that, for the first time, tested the highly paradoxical Wigner’s Friend thought experiment. The conclusion was that your reality and my reality can actually be different. I don’t mean different in the sense that your rods and cones have a different sensitivity, or that your brain interprets things differently, but fundamentally, intrinsically different. Ultimately, things may happen in your reality that might not happen in my reality, and vice versa.

Almost sounds like a dream, doesn’t it? Or like you and I are playing some kind of virtual reality game, and the information stream that is coming into your senses via your headset or whatever is different from the information stream coming into mine.

BINGO! That’s Digital Consciousness in a nutshell.

Eugene Paul Wigner received the Nobel Prize for physics in 1963 for his work on quantum mechanics and the structure of the atom. More importantly perhaps, he, along with Max Planck, Niels Bohr, John Wheeler, Kurt Gödel, Erwin Schrödinger, and many other forward-thinking scientists and mathematicians, opposed the common materialistic worldview shared by most scientists of his day (not to mention most scientists of today). As such, he was an inspiration for, and a forerunner of, consciousness-centric philosophies such as my Digital Consciousness, Donald Hoffman’s MUI theory, and Tom Campbell’s My Big TOE.

As if Schrödinger’s Cat wasn’t enough to bend people’s minds, Wigner raised the stakes of quantum weirdness in 1961 when he proposed a thought experiment, referred to as “Wigner’s Friend.” In the scenario are two people, let’s say Wigner and his friend. One of them is in an enclosed space, hidden from the other, and observes something like Schrödinger’s cat, further hidden in a box. At the moment Wigner opens the box, the wave function collapses, establishing whether or not the cat is dead. But the cat is still in superposition to Wigner’s friend, outside of the entire subsystem. Only when he opens the door to see Wigner and the result of the cat experiment does his wave function collapse. Therefore, Wigner and his friend have differing interpretations of when reality becomes realized; hence, different realities.

Fast forward to 2019, and scientists (Massimiliano Proietti, Alexander Pickston, Francesco Graffitti, Peter Barrow, Dmytro Kundys, Cyril Branciard, Martin Ringbauer, and Alessandro Fedrizzi) at Heriot-Watt University in Edinburgh were finally able to test the paradox using double slits, lasers, and polarizers. The results confirmed Wigner’s hypothesis to a certainty of 5 standard deviations, which essentially means that objective reality doesn’t exist and your and my realities can differ – to a certainty of 1 in 3.5 million!
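The “1 in 3.5 million” figure is just the one-sided tail probability of a 5-sigma Gaussian result, which can be checked with the standard library:

```python
import math

# One-sided tail probability of a 5-sigma result for a
# normally distributed statistic.
sigma = 5
p = 0.5 * math.erfc(sigma / math.sqrt(2))
print(f"p = {p:.2e}  (about 1 in 3.5 million)")
```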

Of course, I live for this stuff, because it simply adds one more piece of supporting evidence to my theory, Digital Consciousness. And it adds yet another nail in the coffin of that ancient scientific religion, materialism.

How does it work?

Digital Consciousness asserts that consciousness is primary; hence, all that we can truly know is what we each experience subjectively.  This experiment doesn’t necessarily prove that the fundamental construct of reality is information, but it is a lot more plausible that individual experiences based on virtual simulations are at the root of this paradox rather than, say, a complex violation of Hilbert space, allowing parallel realities based on traditional physical fields to intermingle.  As an analogy, imagine that you are playing an MMORPG (video game with many other simultaneous players) – it isn’t difficult to see how each individual could be having a slightly different experience, based perhaps on their skill level or something.  As information is the carrier of the experience, the information entering the consciousness of one player could easily be slightly different than the information entering the consciousness of another player. This is by far the simplest explanation, and by Occam’s Razor, supports my theory.

Too bad Wigner isn’t alive to see this experiment, or to ponder Digital Consciousness theory. But I’m sure his consciousness is having a good laugh.

 

Will Evolving Minds Delay The AI Apocalypse? – Part II

The idea of an AI-driven Apocalypse is based on AI outpacing humanity in intelligence. The point at which that might happen depends on how fast AI evolves and how fast (or slow) humanity evolves.

In Part I of this article, I demonstrated how, given current trends in the advancement of Artificial Intelligence, any AI Apocalypse, Singularity, or what have you, is probably much further out than the transhumanists would have you believe.

In this part, we will examine the other half of the argument by considering the nature of the human mind and how it evolves. To do so, it is very instructive to consider the nature of the mind as a complex system and also the systemic nature of the environments that minds and AIs engage with, and are therefore measured by in terms of general intelligence or AGI.

David Snowden has developed a framework of categorizing systems called Cynefin. The four types of systems are:

  1. Simple – e.g. a bicycle. A Simple system is a simple deterministic system characterized by the fact that most anyone can make decisions and solve problems regarding such systems – it takes something called inferential intuition, which we all have. If the bicycle seat is loose, everyone knows that to fix it, you must look under the seat and find the hardware that needs tightening.
  2. Complicated – e.g. a car. Complicated systems are also deterministic systems, but unlike Simple systems, solutions to problems in this domain are not obvious and typically require analysis and/or experts to figure out what is wrong. That’s why you take your car to the mechanic and why we need software engineers to fix defects.
  3. Complex – Complex systems, while perhaps deterministic from a philosophical point of view, are not deterministic in any practical sense. No matter how much analysis you apply and no matter how experienced the expert is, they will not be able to completely analyze and solve a problem in a complex system. That is because such systems are subject to an incredibly complex set of interactions, inputs, dependencies, and feedback paths that all change continuously. So even if you could apply sufficient resources toward analyzing the entire system, by the time you got your result, your problem state would be obsolete. Examples of complex systems include ecosystems, traffic patterns, the stock market, and basically every single human interaction. Complex systems are best addressed through holistic intuition, which is something that humans possess when they are very experienced in the applicable domain. Problems in complex systems are best addressed by a method called Probe-Sense-Respond, which consists of probing (doing an experiment designed intuitively), sensing (observing the results of that experiment), and responding (acting on those results by moving the system in a positive direction).
  4. Chaotic – Chaotic systems are rarely occurring situations that are unpredictable because they are novel and therefore don’t follow any known patterns. An example would be the situation in New York City after 9/11. Responding to chaotic systems requires yet another method, different from those used for other system types. Typically, just taking some definitive form of action may be enough to move the system from Chaotic to Complex. The choice of action is a deeply intuitive decision that may be based on an incredibly deep, rich, and nuanced set of knowledge and experiences.
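The Probe-Sense-Respond loop described for the Complex domain can be sketched generically (a toy illustration; the function and parameter names are mine, not Snowden’s):

```python
import random

def probe_sense_respond(system, propose, metric, steps=100, seed=None):
    """Minimal sketch of the Probe-Sense-Respond loop: probe with an
    intuitive experiment, sense whether the metric improved, and
    respond by keeping only changes that move the system forward."""
    rng = random.Random(seed)
    for _ in range(steps):
        candidate = propose(system, rng)        # probe: try an experiment
        if metric(candidate) > metric(system):  # sense: observe the result
            system = candidate                  # respond: keep improvement
    return system

# Toy usage: nudge a value toward a target of 10 by trial and error.
result = probe_sense_respond(
    system=0,
    propose=lambda s, rng: s + rng.choice([-1, 1]),
    metric=lambda s: -abs(s - 10),
    seed=7,
)
```

Because only improvements are kept, the system can never end up worse than it started, which is the essential property of the method: small safe-to-fail experiments rather than up-front analysis.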

Complicated systems are ideal for early AI. Problems like the ones analyzed in Stanford’s AI Index, such as object detection, natural language parsing, language translation, speech recognition, theorem proving, and SAT solving, are all Complicated systems. AI technology at the moment is focused mostly on such problems, not on things in the Complex domain, which are instead best addressed by the human brain. However, as processing speeds increase and learning algorithms evolve, AI will start addressing issues in the Complex domain. Initially, a human mind will be needed to program or guide the AI systems toward a good probe-sense-respond model. Eventually perhaps, armed with vague instructions like “try intuitive experiments from a large set of creative ideas that may address the issue,” “figure out how to identify the metrics that indicate a positive result from the experiment,” “measure those metrics,” and “choose a course of action that furthers the positive direction of the quality of the system,” an AI may succeed at addressing problems in the Complex domain.

The human mind of course already has a huge head start. We are incredibly adept at seeing vague patterns, sensing the non-obvious, seeing the big picture, and drawing from collective experiences to select experiments to address complex problems.

Back to our original question, as we lead AI toward developing the skills and intuition to replicate such capabilities, will we be unable to evolve our thinking as well?

In the materialist paradigm, the brain is the limit for an evolving mind. This is why we think AI can out-evolve us: because brain capacity is fixed. However, in “Digital Consciousness” I have presented a tremendous set of evidence that this is incorrect. In actuality, consciousness, and therefore the mind, is not emergent from the brain. Instead, it exists at a deeper level of reality, as shown in the figure below.

It interacts with a separate piece of ATTI that I call the Reality Learning Lab (RLL), commonly known as “the reality we live in,” but more accurately described as our “apparent physical reality” – “apparent” because it is actually Virtual.

As discussed in my blog on creating souls, All That There Is (ATTI) has subdivided itself into components of individuated consciousness, each of which has a purpose to evolve. How it is constructed, and how the boundaries are formed that make it individuated is beyond our knowledge (at the moment).

So what then is our mind?

Simply put, it is organized information. As Tom Campbell eloquently expressed it, “The digital world, which subsumes the virtual physical world, consists only of organization – nothing else. Reality is organized bits.”

As such, what prevents it from evolving in the deeper reality of ATTI just as fast as we can evolve an AI here in the virtual reality of RLL?

Answer – NOTHING!

Don’t get hung up on the fixed complexity of the brain. All our brain is needed for is to emulate the processing mechanism that appears to handle sensory input and mental activity. By analogy, we might consider playing a virtual reality game. In this game we have an avatar and we need to interact with other players. Imagine that a key aspect of the game is the ability to throw a spear at a monster or to shoot an enemy. In our (apparent) physical reality, we would need an arm and a hand to be able to carry out that activity. But in the game, it is technically not required. Our avatar could be arm-less and when we have the need to throw something, we simply press a key sequence on the keyboard. A spear magically appears and gets hurled in the direction of the monster. Just as we don’t need a brain to be aware in our waking reality (because our consciousness is separate from RLL), we don’t need an arm to project a spear toward an enemy in the VR game.

On the other hand, having the arm on the avatar adds a great deal to the experience. For one thing, it adds complexity and meaning to the game. Pressing a key sequence does not have a lot of variability and it certainly doesn’t provide the player with much control. The ability to hit the target could be very precise, such as in the case where you simply point at the target and hit the key sequence. This is boring, requires little skill and ultimately provides no opportunity to develop a skill. On the other hand, the precision of your attack could be dependent on a random number generator, which adds complexity and variability to the game, but still doesn’t provide any opportunity to improve. Or, the precision of the attack could depend on some other nuance of the game, like secondary key sequences, or timing of key sequences, which, although providing the opportunity to develop a skill, have nothing to do with a consistent approach to throwing something. So, it is much better to have your avatar have an arm. In addition, this simply models the reality that you know, and people are comfortable with things that are familiar.

So it is with our brains. In our virtual world, the digital template that is our brain is incapable of doing anything in the “simulation” that it isn’t designed to do. The digital simulation that is the RLL must follow the rules of RLL physics much the way a “physics engine” provides the rules of RLL physics for a computer game. And these rules extend to brain function. Imagine if, in the 21st century, we had no scientific explanation for how we process sensory input or make mental decisions because there was no brain in our bodies. Would that be a “reality” that we could believe in? So, in our level of reality that we call waking reality, we need a brain.

But that brain “template” doesn’t limit our mind’s ability to evolve any more than the lack of a brain or central nervous system prevents a slime mold, a collection of single-celled organisms, from actually learning.

In fact, there is some good evidence for the idea that our minds are evolving as rapidly as technology. Spiral Dynamics is a model of the evolution of values and culture that can be applied to individuals, institutions, and all of humanity. The figure below depicts a very high-level overview of the stages, or memes, described by the model.

Spiral Dynamics

Each of these stages represents a shift in values, culture, and thinking, as compared to the previous one. Given that it is the human mind that drives these changes, it is fair to say that the progression models the evolution of the human mind. As can be seen from the timeframes associated with the first appearance of each stage, this is an exponential progression. In fact, it is the same kind of progression that Transhumanists use to argue for the exponential advance of technology and AI. This exponential progression of mind would seem to defy the logic that our minds, if based on fixed neurological wiring, are incapable of exponential development.

And so, higher level conscious thought and logic can easily evolve in the human mind in the truer reality, which may very well keep us ahead of the AI that we are creating in our little virtual reality. The trick is in letting go of our limiting assumptions that it cannot be done, and developing protocols for mental evolution.

So, maybe hold off on buying those front row tickets to the Singularity.

Quantum Retrocausality Explained

A recent quantum mechanics experiment, conducted at the University of Queensland in Australia, seems to defy causal order, baffling scientists. In this post, however, I’ll explain why this isn’t anomalous at all; at least, if you come to accept the Digital Consciousness Theory (DCT) of reality. It boils down to virtually the same explanation I gave seven years ago for Daryl Bem’s seemingly anomalous precognition studies.

DCT says that subatomic particles are controlled by finite state machines (FSMs), which are tiny components of our Reality Learning Lab (RLL, aka “reality”). These finite state machines that control the behavior of the atoms or photons in the experiment don’t really come into existence until the measurement is made, which effectively means that the atom or photon doesn’t really exist until it needs to. In RLL, the portion of the system that describes the operation of the laser, the prisms, and the mirrors, at least from the perspective of the observer, is defined and running, but only at a macroscopic level. It only needs to show the observer the things that are consistent with the expected performance of those components and the RLL laws of physics. So, for example, we can see the laser beam. But only when we need to determine something at a deeper level, like the path of a particular photon, is a finite state machine for that photon instantiated. And in these retrocausality experiments, like the delayed choice quantum eraser experiments and this one done in Queensland, the FSMs only start when the observation is made, which is after the photon has gone through the apparatus; hence, it never really had a path. It didn’t need to. The path can be inferred later by measurement, but it is incorrect to think that that inference was objective reality. There was no path, and so there was no real deterministic order of operation.

There are only the attributes of the photon determined at measurement time, when its finite state machine comes into existence. Again, the photon is just data, described by the attributes of the finite state machine, so this makes complete sense. Programmatically, the FSM did not exist before the individuated consciousness required a measurement because it didn’t need to. Therefore, the inference of “which operation came first” is only that – an inference, not a true history.
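In software terms, what the theory describes is ordinary lazy instantiation: an object is not created until something actually asks for it. A minimal Python sketch of that idea (all names here are hypothetical illustrations, not part of any real physics library):

```python
class PhotonFSM:
    """Hypothetical FSM holding a photon's attributes; in this sketch,
    those attributes are only determined at measurement time."""
    def __init__(self, polarization):
        self.polarization = polarization


class RealityLearningLab:
    """Sketch of a render-on-demand reality: the macroscopic scene runs,
    but particle-level FSMs are instantiated only when measured."""
    def __init__(self):
        self.photon_fsms = {}  # empty: no photon FSMs exist yet

    def measure(self, photon_id):
        # The FSM comes into existence at observation time; before this
        # call there is no object, hence no objective "path" history.
        if photon_id not in self.photon_fsms:
            self.photon_fsms[photon_id] = PhotonFSM(polarization="H")
        return self.photon_fsms[photon_id]


lab = RealityLearningLab()
assert not lab.photon_fsms            # nothing exists before observation
photon = lab.measure("photon-1")      # FSM instantiated on measurement
assert len(lab.photon_fsms) == 1
```

The only point of the sketch is that nothing particle-level exists in memory until `measure` is called, mirroring the claim that the photon “never really had a path.”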

So what is really going on?  There are at least three options:

1. Evidence is rewritten after the fact. In other words, after the photons pass through the experimental apparatus, the System goes back and rewrites all records of the results, so as to create the non-causal anomaly. Those records consist of the experimenters’ memories, as well as any written or recorded artifacts. Since the System is in control of all of these items, the complete record of the past can be changed, and no one would ever know.

2. The System selects the operations to match the results, so as to generate the non-causal anomaly.

3. We live in an Observer-created reality and the entire sequence of events is either planned out or influenced by intent, and then just played out by the experimenter and students.

The point is that it requires a computational system to generate such anomalies; not the deterministic, materialistic, continuous system that mainstream science has taught us we live in.

Mystery solved, Digital Consciousness style.

Why the Universe Only Needs One Electron

According to renowned physicist Richard Feynman (recounted during his 1965 Nobel lecture)…

“I received a telephone call one day at the graduate college at Princeton from Professor Wheeler, in which he said, ‘Feynman, I know why all electrons have the same charge and the same mass.’ ‘Why?’ ‘Because, they are all the same electron!’”

John Wheeler’s idea was that this single electron moves through spacetime in a continuous world line like a big knot, while our observation of many identical but separate electrons is just an illusion because we only see a “slice” through that knot. Feynman was quick to point out a flaw in the idea; namely that if this was the case we should see as many positrons (electrons moving backward in time) as electrons, which we don’t.

But Wheeler, also known for now-accepted concepts like wormholes, quantum foam, and “it from bit,” may have been right on the money with this seemingly outlandish idea.

As I have amassed a tremendous set of evidence that our reality is digital and programmatic (some of which you can find here as well as many other blog posts), I will assume that to be the case and proceed from that assumption.

Next, we need to invoke the concept of a Finite State Machine (FSM): a computational model defined by a finite set of states, in which the rules that determine the next state are a function of the current state and one or more input events. The FSM may also generate a number of “outputs,” which are likewise logical functions of the current state.

The following is an abstract example of a finite state machine:
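As a concrete stand-in for the abstract diagram, here is the classic turnstile FSM (a standard textbook example, not taken from the figure) sketched in Python: the next state is purely a function of the current state and the input event.

```python
# Transition table: (current state, input event) -> next state.
# A locked turnstile unlocks when you insert a coin; pushing through
# an unlocked turnstile locks it again.
TRANSITIONS = {
    ("locked", "coin"): "unlocked",
    ("locked", "push"): "locked",
    ("unlocked", "coin"): "unlocked",
    ("unlocked", "push"): "locked",
}

def step(state, event):
    """Compute the next state from the current state and an input event."""
    return TRANSITIONS[(state, event)]

state = "locked"
for event in ["push", "coin", "push"]:
    state = step(state, event)
print(state)  # "locked"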

A computational system, like that laptop on your desk that the cat sits on, is itself a finite state machine. Each clock cycle gives the system a chance to compute a new state, which is defined by a logical combination of the current state and all of the input changes. A video game, a flight simulator, and a trading system all work the same way. The state changes in a typical laptop about 4 billion times per second, and it may take many of these 250-picosecond clock cycles to produce an observable difference in the output of the program, such as the movement of your avatar on the screen. Within the big, complex laptop FSM run many others, such as each of the dozens or hundreds of processes that you see when you click on your “activity monitor.” And within each of those FSMs are many others still, such as the method (or “subprogram”) that is invoked when a new object needs to be drawn on the screen.

There is also a concept in computer science called an “instance.” It is similar to the idea of a template. As an analogy, consider the automobile. Every Honda that rolls off the assembly line is different, even if it is the same model with the same color and same set of options. The reason it is different from another with the exact same specifications is that there are microscopic differences in every part that goes into each car. In fact, there are differences in the way that every part is connected between two cars of equal specifications. However, imagine if every car were exactly the same, down to the molecule, atom, particle, string, or what have you. Then we could say that each car is an instance of its template.

This would also be the case in a computer-based virtual reality. Every similar car generated in the computer program is an instance of the computer model of that car, which, by the way, is a finite state machine. Each instance can be given different attributes, however, such as color, loudness, or power. In some cases, such as a virtual racing game where the idea of a car is central to the game, each car may be rather unique in the way that it behaves, or responds to the inputs from the controller, so there may be many different FSMs for these different types of cars. However, for any program, there will be FSMs that are so fundamental that there only needs to be one of that type of object; for example, a leaf.
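In code, the template/instance distinction is simply a class and its objects. A hypothetical sketch of one car FSM from such a racing game:

```python
class CarFSM:
    """One FSM 'template' for a type of car; every car of this type is an
    instance with its own attributes but identical rules of behavior."""
    def __init__(self, color, power):
        self.color = color   # per-instance attribute
        self.power = power   # per-instance attribute
        self.speed = 0       # per-instance state

    def update(self, throttle):
        # The update rule is shared by all instances; only their
        # attributes and state differ.
        self.speed += throttle * self.power


red = CarFSM("red", power=2)
blue = CarFSM("blue", power=3)
red.update(throttle=1)
blue.update(throttle=1)
print(red.speed, blue.speed)  # 2 3
```

Both cars run exactly the same `update` rule; they diverge only because their attributes differ.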

In our programmatic reality (what I like to call the Reality Learning Lab, or RLL), there are also FSMs that are so fundamental that there only needs to be one FSM for that type of object. And every object of that type is merely an instance of that FSM. Such as an electron.

An electron is fundamental. It is a perfect example of an object that should be modeled by a finite state machine. There is no reason for any two electrons to have different rules of behavior. They may have different starting conditions and different influences throughout their lifetime, but they would react to those conditions and influences with exactly the same rules. Digital Consciousness Theory provides the perfect explanation for this. Electrons are simply instances of the electron finite state machine. There is only one FSM for the electron, just as Wheeler suspected. But there are many instances of it. Each RLL clock cycle will result in the update of the state of each electron instance in our apparent physical reality.
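Continuing the programming analogy (this is a sketch of the analogy, not a physical model): one class definition plays the role of Wheeler’s single electron, and every electron we observe is an instance of it, updated on each clock cycle.

```python
class ElectronFSM:
    """The single electron 'rule set': every electron is an instance of
    this one class, differing only in its state (here, a toy 1-D position
    and velocity)."""
    def __init__(self, position, velocity):
        self.position = position
        self.velocity = velocity

    def tick(self, dt):
        # An identical update rule for every instance.
        self.position += self.velocity * dt


# Many instances, one FSM definition -- Wheeler's "one electron".
electrons = [ElectronFSM(0.0, 1.0), ElectronFSM(5.0, -2.0)]
for _ in range(3):          # three "clock cycles"
    for e in electrons:
        e.tick(dt=1.0)
print(electrons[0].position, electrons[1].position)  # 3.0 -1.0
```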

So, in a very real sense, Wheeler was right. There is no need for anything other than the single electron FSM. All of the electrons that we experience are just instances and follow exactly the same rules. Anything else would be inefficient, and ATTI is the ultimate in efficiency.


Nick Bostrom Elon Musk Nick Bostrom Elon Musk

OMG can anyone write an article on the simulation hypothesis without focusing on Nick Bostrom and Elon Musk? It’s like writing an article about climate change and only mentioning Al Gore.

Dear journalists who are trying to be edgy and write about cool fringe theories, please pay attention. The idea that we might be living in an illusory world is not novel. The Chinese philosopher Zhuangzi wrote about it with his butterfly dream around 369 BC. Plato presented his cave allegory around 380 BC. The other aspect of simulation theory, the idea that the world is discrete or digital, is equally ancient: Plato and Democritus considered atoms, and therefore the fundamental constructs of reality, to be discrete.

I’m not taking anything away from Nick Bostrom, who is a very intelligent modern philosopher. His 2001 Simulation Argument is certainly thought provoking and deserves its place in the annals of digital philosophy. But it was predated by “The Matrix,” which was predated by Philip K. Dick’s 1977 pronouncement that we might be living in a computer-programmed reality, which in turn was predated by Konrad Zuse’s 1969 work on discrete reality, “Calculating Space.”

And as interesting as Bostrom’s Simulation Argument is, it was a 12-page paper on a single idea. Since then, he has not really evolved his thinking on digital philosophy, preferring instead to concentrate on existential risk and the future of humanity.

Nor am I taking anything away from Elon Musk, a brilliant entrepreneur who latched onto Bostrom’s idea for a few minutes, generated a couple sound bites, and then it was back to solar panels and hyperloops.

But Bostrom, Musk, and the tired old posthuman-generated simulation hypothesis are all that the rank and file of journalists seem to know to write about. It is really sad, considering that Tom Campbell wrote an 800-page treatise on the computational nature of reality, and I have written two books on the subject. Our material is largely consistent and has evolved the thinking far beyond the idea that we live in a posthuman-generated simulation. In fact, I provide a great deal of evidence that the Bostrom-esque possibility is actually not very likely. And Brian Whitworth has a 10-year legacy of provocative scientific papers on evidence for a programmed reality that are far beyond the speculations of Musk and Bostrom.

The world needs to know about these things, and Campbell, Whitworth, and I can’t force people to read our books, blogs, and papers. So journalists, with all due respect, please up your simulation game.

Dolly, Jaws, and Braces – The Latest Mandela Effect

Well, the universe is at it again, messing with our minds. Last year, I wrote a blog about the Berenstein Bears, which at that time was the most recent example of a Mandela Effect. The Mandela Effect seems to be the de facto name for the idea that something that many people remember from the past is somehow changed, or rewritten. It was named for former president of South Africa, Nelson Mandela, whom many people recall having died in a South African prison, which, history now tells us, is untrue. He died, according to all of the historical artifacts in our reality, of natural causes at the ripe old age of 95. I personally have a vague recollection of hearing some news about his demise in prison, but I can’t really place it.

That’s the thing about memories; they are completely fallible. When you remember something, according to research, you are not remembering the original event, but rather the last time that you recalled that particular memory. As such, memories are subject to the “whisper down the lane” syndrome of changing slightly with every recollection. So, my vague Mandela recollection could easily have morphed from a confluence of news reports and “Mandela Effect” claims that I have heard over the years.

However, that does not at all explain why large numbers of people would have the same memory of something entirely fallacious. Which brings me back to the latest of this genre of anomalies: Did Dolly Have Braces?

The 1979 James Bond film Moonraker featured a character named Jaws, a huge henchman with metal teeth played by the late Richard Kiel. In one scene, Jaws’ Brazilian cable car crashes and he is helped out of the rubble by Dolly, a bespectacled young blonde woman played by the French actress Blanche Ravalec. There is one of those movie moments that any Bond aficionado will recall, when Jaws first looks at Dolly and grins, baring his mouthful of metal. She looks at him and grins, showing her mouthful of metal braces, and therefore, as the music swells, they fall instantly in love and walk off hand in hand. At least that’s the way we all remember it, myself included. The only problem is that if you watch the scene today, Dolly has no braces!

[Images: Jaws; Dolly without braces]

Those 70s-era Bond movies were full of campy moments like this one. It was done to make the audience chuckle – in this case: “ahhh, despite their drastically different looks, they fall in love with each other, because of the braces connection” – and everyone laughs. That was the entire point. But now, the scene simply doesn’t make sense any more. This is actually a key difference from the Berenstein Bears (I refuse to spell it any other way) Mandela effect. In that one, there was no real corroborating evidence that it ever was “Berenstein,” with the exception of all of our fallible memories. In contrast, the Dolly, Jaws, and Braces scenario does have separate corroborating evidence that it was once as we remember it – the very point of the scene itself. In addition, I dug out a 2014 BBC obituary of Richard Kiel that references the movie, describing Dolly as “a small, pig-tailed blonde with braces.” I’m sure the BBC checks their facts fairly carefully and wouldn’t typically be subject to mass delusion. Also, someone on Reddit managed to find an image in which Dolly still appears to have braces, but you have to look closely:

[Image: Dolly, apparently with braces]

So, here, it seems, the universe (ATTI, all that there is) is really messing with us, and didn’t even bother to clean up all of the artifacts.

First, a quick comment on the word “universe” – the underlying “real” universe is what I call ATTI (all that there is) to distinguish it from the physical universe that we know and love, but which is actually virtual. This virtual world is all a subjective experience of our true consciousness, which sits somewhere as part of ATTI. Hence ATTI can modify our virtual world, as could another conscious entity within ATTI (who perhaps has an evolved level of access). I’m not sure which of these is messing with the historical artifacts, but either is very possible. It would be analogous to being a programmer of a multi-player virtual reality fantasy game, and deciding to go back into the game and replace all of the pine trees with palm trees. The players would certainly notice, but they would think that there was a patch applied to the game for some reason and wouldn’t really give it a second thought, because they realize the game is virtual. The only reason the Mandela effect freaks us out when we discover one, like Dolly’s braces, is because we don’t realize our reality is virtual.

As I post this, it feels like I am documenting something significant.  However, I realize that tomorrow, this post may be gone.  Or perhaps the references that I listed to Dolly with braces will have disappeared, and along with them, the original sources.  And closed-minded science snobs like Bill Nye and Neil deGrasse Tyson will say it always was that way.

Note: I sometimes make a few changes to these blog posts when I realize that I can be more clear about something.  So if you notice something different the second time you read it, it probably isn’t because of the Mandela effect (but it could be 🙂 ).  Also, for those who haven’t read my original blog on this effect, I will repeat the explanation for Dolly, courtesy of digital consciousness theory:

The flaw is in the assumption that “we” are all in the same reality. “We,” as has been discussed countless times in this blog and in my book, are experiencing a purely subjective experience. It is the high degree of consensus between each of us “conscious entities” that fools us into thinking that our reality is objective and deterministic. Physics experiments have proven beyond a reasonable doubt that it is not.

So what is going on?

My own theory, Digital Consciousness (fka “Programmed Reality”), has a much better, comprehensive, and perfectly consistent explanation (note: this has the same foundation as Tom Campbell’s theory, “My Big TOE”). See the figure below.

ATTI

“We” are each a segment of organized information in “all that there is” (ATTI). Hence, we feel individual, but are connected to the whole. (No time to dive into how perfectly this syncs with virtually every spiritual experience throughout history, but you probably get it.) The “Reality Learning Lab” (RLL) (Campbell) is a different set of organized information within ATTI. The RLL is what we experience every day while conscious. (While meditating or in deep sleep, we are connected elsewhere.) It is where all of the artifacts representing Jaws and Dolly exist. It is where various “simulation” timelines run. The information that represents our memories is in three places:

  1. The “brain” part of the simulation. Think of this as our cache.
  2. The temporary part of our soul’s record (or use the term “spirit”, “essence”, “consciousness”, “Being”, or whatever you prefer – words don’t matter), which we lose when we die. This is the stuff our “brain” has full access to, especially when our minds are quiet.
  3. The permanent part of our soul’s record; what we retain from life to life, what we are here to evolve and improve, what in turn contributes to the inexorable evolution of ATTI. Values and morality are here. Irrelevant details like whether or not Dolly had braces don’t belong.

For some reason, ATTI decided that it made sense to remove Dolly’s braces in all of the artifacts of our reality (DVDs, YouTube clips, etc.) But, for some reason, the consciousness data stores did not get rewritten when that happened, and so we still have long-term recollection of Dolly with braces.

Why? ATTI just messing with us? Random experiment? Glitch?

Maybe ATTI is giving us subtle hints that it exists, that “we” are permanent, so that we use the information to correct our path?

We can’t know. ATTI is way beyond our comprehension.