Will Evolving Minds Delay The AI Apocalypse? – Part II

The idea of an AI-driven Apocalypse is based on AI outpacing humanity in intelligence. The point at which that might happen depends on how fast AI evolves and how fast (or slow) humanity evolves.

In Part I of this article, I demonstrated how, given current trends in the advancement of Artificial Intelligence, any AI Apocalypse, Singularity, or what have you, is probably much further out than the transhumanists would have you believe.

In this part, we will examine the other half of the argument by considering the nature of the human mind and how it evolves. To do so, it is instructive to consider the mind as a complex system, and also the systemic nature of the environments that minds and AIs engage with, and against which their general intelligence is therefore measured.

David Snowden has developed a framework for categorizing systems called Cynefin. The four types of systems are:

  1. Simple – e.g. a bicycle. A Simple system is a deterministic system characterized by the fact that most anyone can make decisions and solve problems regarding it – doing so takes only inferential intuition, which we all have. If the bicycle seat is loose, everyone knows that to fix it, you must look under the seat and find the hardware that needs tightening.
  2. Complicated – e.g. a car. Complicated systems are also deterministic systems, but unlike Simple systems, solutions to problems in this domain are not obvious and typically require analysis and/or experts to figure out what is wrong. That’s why you take your car to the mechanic and why we need software engineers to fix defects.
  3. Complex – Complex systems, while perhaps deterministic from a philosophical point of view, are not deterministic in any practical sense. No matter how much analysis you apply and no matter how experienced the expert is, they will not be able to completely analyze and solve a problem in a complex system. That is because such systems are subject to an incredibly complex set of interactions, inputs, dependencies, and feedback paths that all change continuously. So even if you could apply sufficient resources toward analyzing the entire system, by the time you got your result, your problem state would be obsolete. Examples of complex systems include ecosystems, traffic patterns, the stock market, and basically every single human interaction. Complex systems are best addressed through holistic intuition, which is something that humans possess when they are very experienced in the applicable domain. Problems in complex systems are best addressed by a method called Probe-Sense-Respond, which consists of probing (doing an experiment designed intuitively), sensing (observing the results of that experiment), and responding (acting on those results by moving the system in a positive direction).
  4. Chaotic – Chaotic systems are rarely occurring situations that are unpredictable because they are novel and therefore don’t follow any known patterns. An example would be the situation in New York City after 9/11. Responding to chaotic systems requires yet another method. Typically, just taking some definitive form of action may be enough to move the system from Chaotic to Complex. The choice of action is a deeply intuitive decision that may be based on an incredibly deep, rich, and nuanced set of knowledge and experiences.

Complicated systems are ideal for early AI. Problems like the ones analyzed in Stanford’s AI Index, such as object detection, natural language parsing, language translation, speech recognition, theorem proving, and SAT solving, are all Complicated systems. AI technology at the moment is focused mostly on such problems, not on things in the Complex domain, which are instead best addressed by the human brain. However, as processing speed and learning algorithms evolve, AI will start addressing issues in the Complex domain. Initially, a human mind will be needed to program or guide the AI systems toward a good probe-sense-respond model. Eventually perhaps, armed with vague instructions like “try intuitive experiments from a large set of creative ideas that may address the issue,” “figure out how to identify the metrics that indicate a positive result from the experiment,” “measure those metrics,” and “choose a course of action that furthers the positive direction of the quality of the system,” an AI may succeed at addressing problems in the Complex domain.
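To make the idea concrete, here is a minimal sketch of what such a probe-sense-respond loop might look like in code. Everything in it – the state, the experiment generator, and the quality metric – is a hypothetical placeholder; a genuinely Complex system would never expose such clean hooks.

```python
import random

# A toy sketch of a Probe-Sense-Respond loop. The state, the candidate
# experiments, and the quality metric are hypothetical placeholders.

def probe_sense_respond(state, propose_experiments, quality, rounds=10):
    """Iteratively nudge a complex system in a positive direction."""
    for _ in range(rounds):
        # Probe: try one experiment drawn from a set of creative candidates.
        experiment = random.choice(propose_experiments(state))
        candidate = experiment(state)  # produces a new candidate state
        # Sense: measure the metric that indicates a positive result.
        # Respond: keep the change only if it moved the system forward.
        if quality(candidate) > quality(state):
            state = candidate
    return state

# Toy usage: the "system" is a single number, experiments are random
# nudges, and quality is closeness to a target value of 42.
nudges = lambda s: [lambda st, d=random.uniform(-1, 1): st + d for _ in range(5)]
closeness = lambda s: -abs(s - 42.0)
print(probe_sense_respond(0.0, nudges, closeness, rounds=500))
```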

The human mind of course already has a huge head start. We are incredibly adept at seeing vague patterns, sensing the non-obvious, seeing the big picture, and drawing from collective experiences to select experiments to address complex problems.

Back to our original question: as we lead AI toward developing the skills and intuition to replicate such capabilities, are we really unable to evolve our thinking as well?

In the materialist paradigm, the brain is the limit for an evolving mind. This is why we think AI can out-evolve us: because brain capacity is fixed. However, in “Digital Consciousness” I have presented a tremendous set of evidence that this is incorrect. In actuality, consciousness, and therefore the mind, is not emergent from the brain. Instead it exists at a deeper level of reality, as shown in the figure below.

It interacts with a separate piece of All That There Is (ATTI) that I call the Reality Learning Lab (RLL), commonly known as “the reality we live in,” but more accurately described as our “apparent physical reality” – “apparent” because it is actually virtual.

As discussed in my blog on creating souls, ATTI has subdivided itself into components of individuated consciousness, each of which has a purpose to evolve. How each is constructed, and how the boundaries that make it individuated are formed, is beyond our knowledge (at the moment).

So what then is our mind?

Simply put, it is organized information. As Tom Campbell eloquently expressed it, “The digital world, which subsumes the virtual physical world, consists only of organization – nothing else. Reality is organized bits.”

As such, what prevents it from evolving in the deeper reality of ATTI just as fast as we can evolve an AI here in the virtual reality of RLL?

Answer – NOTHING!

Don’t get hung up on the fixed complexity of the brain. All the brain is needed for is to emulate the processing mechanism that appears to handle sensory input and mental activity. By analogy, consider playing a virtual reality game. In this game we have an avatar, and we need to interact with other players. Imagine that a key aspect of the game is the ability to throw a spear at a monster or to shoot an enemy. In our (apparent) physical reality, we would need an arm and a hand to carry out that activity. But in the game, it is technically not required. Our avatar could be armless, and when we have the need to throw something, we could simply press a key sequence on the keyboard. A spear magically appears and gets hurled in the direction of the monster. Just as we don’t need a brain to be aware in our waking reality (because our consciousness is separate from RLL), we don’t need an arm to project a spear toward an enemy in the VR game.

On the other hand, having the arm on the avatar adds a great deal to the experience. For one thing, it adds complexity and meaning to the game. Pressing a key sequence does not have a lot of variability, and it certainly doesn’t give the player much control. Aiming could be made trivially precise, as in the case where you simply point at the target and hit the key sequence – but that is boring, requires little skill, and ultimately provides no opportunity to develop one. Alternatively, the precision of your attack could depend on a random number generator, which adds complexity and variability to the game but still provides no opportunity to improve. Or it could depend on some other nuance of the game, like secondary key sequences or the timing of key sequences, which, although providing the opportunity to develop a skill, has nothing to do with a consistent approach to throwing something. So it is much better for your avatar to have an arm. In addition, an arm simply models the reality that you know, and people are comfortable with things that are familiar.

So it is with our brains. In our virtual world, the digital template that is our brain is incapable of doing anything in the “simulation” that it isn’t designed to do. The digital simulation that is the RLL must follow the rules of RLL physics much the way a “physics engine” provides the rules of RLL physics for a computer game. And these rules extend to brain function. Imagine if, in the 21st century, we had no scientific explanation for how we process sensory input or make mental decisions because there was no brain in our bodies. Would that be a “reality” that we could believe in? So, in our level of reality that we call waking reality, we need a brain.

But that brain “template” doesn’t limit the ability of our mind to evolve, any more than the lack of a brain or central nervous system prevents a slime mold – a collection of single-celled organisms – from actually learning.

In fact, there is some good evidence for the idea that our minds are evolving as rapidly as technology. Spiral Dynamics is a model of the evolution of values and culture that can be applied to individuals, institutions, and all of humanity. The figure below gives a very high-level overview of the stages, or memes, described by the model.

Spiral Dynamics

Each of these stages represents a shift in values, culture, and thinking, as compared to the previous one. Given that it is the human mind that drives these changes, it is fair to say that the progression models the evolution of the human mind. As can be seen from the timeframes associated with the first appearance of each stage, this is an exponential progression. In fact, it is the same kind of progression that transhumanists use to argue for the exponential advancement of technology and AI. This exponential progression of mind would seem to defy the logic that our minds, if based on fixed neurological wiring, are incapable of exponential development.

And so, higher-level conscious thought and logic can easily evolve in the human mind in the truer reality, which may very well keep us ahead of the AI that we are creating in our little virtual reality. The trick is in letting go of our limiting assumption that it cannot be done, and in developing protocols for mental evolution.

So, maybe hold off on buying those front row tickets to the Singularity.

Collapsing the Objective Collapse Theory

When I was a kid, I liked to collect things – coins, baseball cards, leaves, 45s, what have you. What made the category of collectible particularly enjoyable was the size and variety of the sample space. In my adult years, I’ve learned that collections have a downside – where to put everything? – especially as I continue to downsize my living space in trade for more fun locales, greater views, and better access to beaches, mountains, and wine bars. However, I do still sometimes maintain a collection, such as my collection of other people’s theories that attempt to explain quantum mechanics anomalies without letting go of objective materialism. Yeah, I know, not the most mainstream of collections, and certainly nothing I can sell on eBay, but way more fun than stamps.

The latest in this collection is a set of theories called “objective collapse” theories. These theories try to distance themselves from the ickiness (to materialists) of conscious-observer-centric theories like the Copenhagen interpretation of quantum mechanics. They also attempt to avoid the ridiculousness of the exponentially explosive reality-creation theories in the Many Worlds Interpretation (MWI) category. Essentially, the Objective Collapsers argue that there is a wave function describing the probabilities of properties of objects but that, rather than collapsing due to a measurement or a conscious observation, it collapses on its own, via some as-yet-undetermined but deterministic process, according to the probabilities of the wave function.

Huh?

Yeah, I call BS on that, and point simply to the verification of the Quantum Zeno effect. Particles don’t change state while they are under observation. When you stop observing them, they change state – not at some random time prior, as the Objective Collapse theories would imply, but at the exact time that you stop observing them. In other words, the timing of the observation is correlated with wave function collapse, completely undermining the argument that collapse is probabilistic or deterministic according to some hidden variables. Individuals better educated in physics than I am (aka physicists) have also called BS on Objective Collapse theories for other reasons, such as conservation of energy violations. But of course there is no shortage of physicists calling BS on other physicists’ theories. That, by itself, would make an entertaining collection.

In any case, I would be remiss if I didn’t remind readers that the Digital Consciousness Theory completely explains all of this stuff. By “stuff,” I mean not just the anomalies – the quantum Zeno effect, entanglement, macroscopic coherence, the observer effect, and quantum retrocausality – but also the debates about microscopic vs. macroscopic, and thought experiments like the time Einstein asked Abraham Pais whether he really believed that the moon existed only when looked at, to wit:

  • All we can know for sure is what we experience, which is subjective for every individual.
  • We effectively live in a virtual reality, operating in the context of a huge and highly complex digital substrate system. The purpose of this reality is for our individual consciousnesses to learn and evolve and contribute to the greater all-encompassing consciousness.
  • The reason that it feels “physical” or solid and not virtual is due to the consensus of experience that is built into the system.
  • This virtual reality is influenced and/or created by the conscious entities that occupy it (or “live in it” or “play in it”; choose your metaphor).
  • The virtual reality may have started prior to any virtual life developing, or it may have been suddenly spawned and initiated with us avatars representing the various life forms at any point in the past.
  • Some things in the reality need to be there from the start: the universe, earth, water, air, and, in the case of the more recent invocation of reality, lots of other stuff. These things may easily be represented in a macroscopic way, because that is all that is needed in the system for the experience. Therefore, there is no need for us to create them.
  • However, other things are not necessary for our high-level experience, but become necessary once we probe the nature of reality, or if we aim to influence our reality. These are the things that are subject to the observer effect; they don’t exist until needed. Subatomic particles and their properties are perfect examples, as are the deep cause-and-effect relationships between reality elements that are necessary to create the changes our intent is invoked to bring about.

So there is no need for objective collapse. Things are either fixed (the moon) or potential (the radioactive decay of a particle). The latter are called into existence as needed…

…Maybe


Which came first, the digital chicken, or the digital philosophy egg?

Many scientists, mathematicians, futurists, and philosophers are embracing the idea that our reality is digital these days. In fact, it would be perfectly understandable to wonder if digital philosophy itself is tainted due to the tendency of humans to view ideas through the lens of their times. We live in a digital age, surrounded by computers, the Internet, and smart phones, and so might we not be guilty of imagining that the world behaves just as a multi-player video game does? We probably wouldn’t have had such ideas 50 years ago, when, at a macroscopic level at least, everything with which we interacted appeared analog and continuous. Which came first, the digital chicken, or the digital philosophy egg?

Actually, the concepts of binary and digital are not at all new. The I Ching is an ancient Chinese text that dates to 1150 BCE. In it are 64 hexagrams – combinations of 8 trigrams (aka the Bagua) – each trigram amounting to three bits of a binary code.
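In fact, using the common convention that a solid (yang) line is a 1 and a broken (yin) line is a 0, the eight trigrams simply enumerate every three-bit value – a claim small enough to check in a few lines of Python (the rendering below is mine, not anything from the I Ching itself):

```python
# The eight Bagua trigrams enumerate every 3-bit value, using the common
# convention of solid (yang) line = 1 and broken (yin) line = 0.
# Pairing two trigrams gives the 64 six-bit hexagrams.
YANG, YIN = "———", "— —"

for value in range(8):  # 2**3 = 8 trigrams
    bits = format(value, "03b")
    lines = [YANG if b == "1" else YIN for b in bits]
    print(bits, "->", "  ".join(lines))
```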

Many other cultures, including the Mangareva in Polynesia (1450 CE) and India (5th to 2nd century BCE), have used binary encodings for communication for thousands of years. Over 12,000 years ago, African tribes developed a binary divination system called Odu Ifa.

German mathematician and philosopher Gottfried Leibniz is generally credited with developing the modern binary number system, based on zeros and ones, in 1679. Naturally, all of these other cultures are ignored so that we can maintain the illusion that all great philosophical and mathematical thought originated in Europe. Regardless of Eurocentric biases, it is clear that binary encoding is not a new concept. But what about applying it to the fundamental construct of reality?

It turns out that while modern digital physics and digital philosophy references are replete with sources that date only to the mid-20th century, the ancient Greeks (namely Democritus and the atomists) believed that reality was discrete: atoms were considered to be discrete and fundamental components of reality.

A quick clarification of the terms “discrete”, “digital”, “binary”, “analog”, and “continuous” is probably in order:

Discrete – Having distinct points of measurement in the time domain

Digital – Having properties that can be encoded into bits

Binary – Encoding that is done with only two digits, zeros and ones

Analog – Having continuously variable properties

Continuous – The time domain is continuous

So, for example, if we encode the value of some property (e.g. length or voltage) using 3 digit values (0, 1, 2), that would be digital, but not binary (rather, ternary). If we say that between any two points in time there is an infinitely divisible time element, but at each point the value of the measurement being performed on some property is represented by bits, then we have a continuous yet digital system. Conversely, if time can be broken into chunks such that, at a fine enough temporal granularity, there is no concept of time between two adjacent points in time, but at each of these time points the value of the measurement being performed is continuously variable, then we have a discrete yet analog system.
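A toy analog-to-digital conversion makes the taxonomy concrete: sampling makes the time domain discrete, and quantizing each sample into a fixed number of bits makes the values digital (binary here, since we encode with two digits). The signal and parameters below are arbitrary examples of mine, nothing more.

```python
import math

def sample_and_quantize(signal, t_end=1.0, n_samples=8, n_bits=3):
    """Discretize time by sampling; digitize values by quantizing to bits."""
    levels = 2 ** n_bits
    for i in range(n_samples):                 # discrete time points
        t = i * t_end / n_samples
        x = signal(t)                          # continuous-valued sample in [0, 1)
        q = min(int(x * levels), levels - 1)   # quantized to an n_bits code
        print(f"t={t:.3f}  x={x:.3f}  bits={q:0{n_bits}b}")

# An arbitrary example signal scaled into [0, 1).
sample_and_quantize(lambda t: 0.5 + 0.499 * math.sin(2 * math.pi * t))
```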

In the realm of consciousness-driven digital philosophy, it is my contention that the evidence strongly supports reality being discrete and digital; that is, time moves on in “chunks” and at each discrete point in time, every property of everything can be perfectly represented digitally. There are no infinities.

I believe that this is a logical and fundamental conclusion, regardless of the fact that we live in a digital age. There are many reasons for this, but for the purposes of this particular blog post, I shall only concentrate on a couple. Let’s break down the possibilities of our reality, in terms of origin and behavior:

  1. Type 1 – Our reality was created by some conscious entity and has been following the original rules established by that entity. Of course, we could spend a lifetime defining “conscious” or “entity” but let’s try to keep it simple. This scenario could include traditional religious origin theories (e.g. God created the heavens and the earth). It could also include the common simulation scenarios, a la Nick Bostrom’s “Simulation Argument.”
  2. Type 2 – Our reality was originally created by some conscious entity and has been evolving according to some sort of fundamental evolutionary law ever since.
  3. Type 3 – Our reality was not created by some conscious entity; its existence sprang out of nothing and has been following primordial rules of physics ever since. To explain the fact that our universe is incredibly finely tuned for matter and life, materialist cosmologists dreamt up the idea that we must exist in an infinite set of parallel universes, and, via the anthropic principle, the one we live in only appears finely tuned because it has to be in order for us to be in it. Occam would be turning over in his grave.
  4. Type 4 – Our reality was not created by some particular conscious entity, but rather has been evolving according to some sort of fundamental evolutionary law from the very beginning.

I would argue that in the first two cases, reality would have to be digital. For, if a conscious entity is going to create a world for us to live in and experience, that conscious entity is clearly highly evolved compared to us. And, being so evolved, it would certainly make use of the most efficient means to create a reality. A continuous reality is not only inefficient, it is theoretically impossible to create, because it involves infinities in the temporal domain as well as in any spatial domain or property.

I would also argue that in the fourth case, reality would have to be digital, for similar reasons. Even without a conscious entity as a creator, the fundamental evolutionary law would certainly favor a perfectly functional reality that doesn’t require infinite resources.

Only in the third case would there be any possibility of a continuous, analog reality. Even then, it is not required. As MIT cosmologist and mathematician Max Tegmark succinctly put it, “We’ve never measured anything in physics to more than about sixteen significant digits, and no experiment has been carried out whose outcome depends on the hypothesis that a true continuum exists, or hinges on nature computing something uncomputable.” Hence there is no reason to assume, a priori, that the world is continuous. In fact, the evidence points to the contrary:

  • Infinite resolution would imply that matter implodes into black holes at sub-Planck scales and we don’t observe that.
  • Infinite resolution implies that relativity and quantum mechanics can’t coexist, at least with the best physics that we have today. Our favorite contenders for reconciling relativity and quantum mechanics are string theory and loop quantum gravity, and they only work with minimal length (aka discrete) scales.
  • We actually observe discrete behavior in quantum mechanics. For example, a particle’s spin value is always quantized; there are no intermediate states. This is anomalous in continuous space-time.

For many other reasons, as are probably clear from the evidence compiled on this site, I tend to favor reality Type 4. No other type of reality structure and origin can be shown to be anywhere near as consistent with all of the evidence (philosophical, cosmological, mathematical, metaphysical, and experimental). And it has nothing to do with MMORPGs or the smart phone in my pocket.

Quantum Zeno Effect Solved

Lurking amidst the mass chaos of information that exists in our reality is a little gem of a concept called the Quantum Zeno Effect.  It is partially named after ancient Greek philosopher Zeno of Elea, who dreamed up a number of paradoxes about the fluidity of motion and change.  For example, the “Arrow Paradox” explores the idea that if you break down time into “instants” of zero duration, motion cannot be observed.  Thus, since time is composed of a set of instants, motion doesn’t truly exist.  We might consider Zeno to have been far ahead of his time as he appeared to be thinking about discrete systems and challenging the continuity of space and time a couple thousand years before Alan Turing resurrected the idea in relation to quantum mechanics: “It is easy to show using standard theory that if a system starts in an eigenstate of some observable, and measurements are made of that observable N times a second, then, even if the state is not a stationary one, the probability that the system will be in the same state after, say, one second, tends to one as N tends to infinity; that is, that continual observations will prevent motion …”.  The term “Quantum Zeno Effect” was first used by physicists George Sudarshan and Baidyanath Misra in 1977 to describe just such a system – one that does not change state because it is continuously observed.

The challenge with this theory has been in devising experiments that can verify or falsify it.  However, technology has caught up to philosophy and, over the last 25 years, a number of experiments have been performed which seem to validate the effect.  In 2001, for example, physicist Mark Raizen and a team at the University of Texas showed that the effect is indeed real and the transition of states in a system can be either slowed down or sped up simply by taking measurements of the system.

I have enjoyed making a hobby of fully explaining quantum mechanics anomalies with the programmed reality theory.   Admittedly, I don’t always fully grasp some of the deep complexities and nuances of the issues that I am tackling, due partly to the fact that I have a full time job that has naught to do with this stuff, and partly to the fact that my math skills are a bit rusty, but thus far, it doesn’t seem to make a difference.  The more I dig in to each issue, the more I find things that simply support the idea that we live in a digital (and programmed) reality.

The quantum Zeno effect might not be observed in every case. It only works for non-memoryless processes. Exponential decay, for instance, is a memoryless process: frequent observation of a particle undergoing radioactive decay would not affect the result. [As an aside, I find it very interesting that a “memoryless system” invokes the idea of a programmatic construct. Perhaps with good reason…]
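For the curious, memorylessness has a crisp definition: P(T > s + t | T > s) = P(T > t). A quick simulation with exponentially distributed decay times shows why checking on such a particle tells you nothing new (the decay rate and sample count below are arbitrary choices of mine):

```python
import random

# Memorylessness: P(T > s+t | T > s) = P(T > t). For exponential decay,
# the chance of surviving another second is the same whether or not the
# particle has already survived one, so an observation "resets" nothing
# that could ever be measured.
random.seed(1)
lifetimes = [random.expovariate(1.0) for _ in range(100_000)]

p1 = sum(t > 1.0 for t in lifetimes) / len(lifetimes)
survivors = [t for t in lifetimes if t > 1.0]
p2_given_1 = sum(t > 2.0 for t in survivors) / len(survivors)

print(f"P(T > 1)         = {p1:.4f}")          # ~ exp(-1) = 0.3679
print(f"P(T > 2 | T > 1) = {p2_given_1:.4f}")  # ~ the same value
```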

A system with memory, or “state”, however, is, in theory, subject to the quantum Zeno effect.  It will manifest itself by appearing to reset the experiment clock every time an observation is made of the state of the system.  The system under test will have a characteristic set of changes that vary over time.  In the case of the University of Texas experiment, trapped ions tended to remain in their initial state for a brief interval before beginning to change state via quantum tunneling, according to some probability function.  For the sake of developing a clear illustration, let’s imagine a process whereby a particle remains in its initial quantum state (let’s call it State A) for 2 seconds before probabilistically decaying to its final state (B) according to a linear function over the next second.  Figure A shows the probability of finding the particle in State A as a function of time.  For the first 2 seconds, of course, it has a 0% probability of changing state, and between 2 and 3 seconds it has an equal probability of moving to State B at any point in time.  A system with this behavior, left on its own and measured at any point after 3 seconds, will be in State B.

[Figure A: probability of finding the particle in State A as a function of time]

What happens, however, when you make a measurement of that system, to check and see if it changed state, at t=1 second?  Per the quantum Zeno effect, the experiment clock will effectively be reset and now the system will stay in State A from t=1 to t=3 and then move to state B at some point between t=3 and t=4.  If you make another measurement of the system at t=1, the clock will again reset, delaying the behavior by another second.  In fact, if you continue to measure the state of the system every second, it will never change state.  Note that this has absolutely nothing to do with the physical impact of the measurement itself; a 100% non-intrusive observation will have exactly the same result.
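Here is a toy simulation of that illustrative process – 2 seconds dormant in State A, then a decay at a uniformly random moment within the next second – where every observation that finds the system still in State A resets the experiment clock. It models only the behavior described above, not any real quantum dynamics:

```python
import random

def final_state(observation_times, dt=0.001, horizon=10.0):
    """State A for 2 s, then decay to B at a random moment in the next 1 s.
    Each observation that finds State A resets the experiment clock."""
    clock = 0.0                         # time since the last reset
    decay_at = 2.0 + random.random()    # decay moment on the current clock
    pending = sorted(observation_times)
    t = 0.0
    while t < horizon:
        t += dt
        clock += dt
        if clock >= decay_at:
            return "B", round(t, 3)     # decayed before the next observation
        if pending and t >= pending[0]:
            pending.pop(0)
            clock = 0.0                 # observed in State A: clock resets
            decay_at = 2.0 + random.random()
    return "A", round(t, 3)

random.seed(0)
print(final_state([]))                        # unobserved: B between t=2 and 3
print(final_state([1.0]))                     # one peek at t=1: B between t=3 and 4
print(final_state([float(i) for i in range(1, 10)]))  # peek every second: still A
```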

Also note that it isn’t that the clock doesn’t reset for a memoryless system, but rather that it doesn’t matter, because you cannot observe any difference.  One may argue that if you make observations at the Planck frequency (one per jiffy), even a memoryless system might never change state.  This actually approaches the true nature of Zeno’s arguments, but that is a topic for another essay, one that is much more philosophical than falsifiable.  In fact, “Quantum Zeno Effect” is a misnomer.  The non-memoryless system described above really has little to do with the ad infinitum inspection of Zeno’s paradoxes, but we are stuck with the name.  And I digress.

So why would this happen?

It appears to be related in some way to the observer effect and to entanglement:

  • Observer Effect – Once observed, the state of a system changes.
  • Entanglement – Once observed, the states of multiple particles (or, rather, the state of a system of multiple particles) are forever connected.
  • Quantum Zeno – Once observed, the state of a system is reset.

What is common to all three of these apparent quantum anomalies is the coupling of the act of observation with the concept of a state.  For the purposes of this discussion, it will be useful to invoke the computational concept of a finite state machine, which is a system that changes state according to a set of logic rules and some input criteria.
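For readers who haven’t met the term, a finite state machine is only a few lines of code: a current state, plus a table mapping (state, input) pairs to next states. The states and inputs below are arbitrary examples of mine, not a model of any actual quantum process.

```python
# A minimal finite state machine: a current state and a transition table
# mapping (state, input) pairs to next states. Unknown inputs leave the
# state unchanged. The states and inputs are arbitrary examples.

class FiniteStateMachine:
    def __init__(self, initial, transitions):
        self.state = initial
        self.transitions = transitions  # {(state, input): next_state}

    def feed(self, symbol):
        self.state = self.transitions.get((self.state, symbol), self.state)
        return self.state

fsm = FiniteStateMachine("potential", {("potential", "observe"): "determined"})
print(fsm.feed("wait"))     # potential
print(fsm.feed("observe"))  # determined
```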

I have explained the Observer effect and Entanglement as logical necessities of an efficient programmed reality system.  What about Quantum Zeno?  Why would it not be just as efficient to start the clock on a process and let it run, independent of observation?

A clue to the answer is that the act of observation appears to create something.

In the Observer effect, it creates the collapse of the probability wave functions and the establishment of definitive properties of certain aspects of the system under observation (e.g. position).  This is not so much a matter of efficiency as it is of necessity: without probability, free will doesn’t exist; without free will, we can’t learn; and if the purpose of our system is to grow and evolve, then by necessity, observation must collapse probability.

In Entanglement, the act of observation may create the initiation of a state machine, which subsequently determines the behavior of the particles under test.  Those particles are just data, as I have shown, and the data elements are part of the same variable space of the state machine.  They both get updated simultaneously, regardless of the “virtual” distance between them.

So, in Quantum Zeno, the system under test is in probability space.  The act of observation “collapses” this initial probability function and kicks off the mathematical process by which future states are determined, based on the programmed probability function.  But that is now a second level of probability function; call it probability function 2.  Observing the system a second time must collapse probability function 2.  But to do so means that the system would now have to calculate a modified probability function 3 going forward – one that takes into account the fact that some aspect of the state machine has already been determined (e.g. the system has or hasn’t started its decay).  For non-memoryless systems, this could be an arbitrarily complex function, since it may take a different shape for every time at which the observation occurs.  A third measurement complicates the function even further, because even more states are ruled out.

On the other hand, it is far simpler to just reset the probability function each time an observation is made – exactly what an efficiency-driven reality system would do.

The only drawback to this algorithm is that smart scientists are starting to notice these little anomalies – although that assumes the reality system “cares,” and it may not.  Or perhaps that is why most natural processes are exponential, or memoryless: it is a further efficiency of the system.  Man-made experiments, however, don’t follow the natural process and may be designed to be arbitrarily complex, which ironically gives us this tiny little glimpse into the true nature of reality.

What we are doing here is inferring deep truths about our reality that are in fundamental conflict with the standard materialist view.  This will be happening more and more as time goes forward and physicists and philosophers will soon have no choice but to consider programmed reality as their ToE.


Flexi Matter

Earlier this year, a team of scientists at the Max Planck Institute of Quantum Optics, led by Randolf Pohl, made a highly accurate measurement of the charge radius of the proton and, at 0.841 fm, it turned out to be 4% less than previously determined (0.877 fm).  Trouble is, the previous measurements were also highly accurate.  The significant difference between the two types of measurement was the choice of interaction particle: in the traditional case, electrons; in Pohl’s case, muons.

Figures have been checked and rechecked and both types of measurements are solid.  All sorts of crazy explanations have been offered up for the discrepancy, but one thing seems certain: we don’t really understand matter.

Ancient Greeks thought that atoms were indivisible (hence, the name), at least until Rutherford showed otherwise in the early 1900s.  Ancient 20th-century scientists thought that protons were indivisible, at least until Gell-Mann showed otherwise in the 1960s.

So why would it be such a surprise that the measured size of a proton varies with the type of lepton cloud that surrounds and passes through it?  Maybe the proton is flexible, like a sponge, and a muon, at 200 times the mass of an electron, exerts a much stronger contractive force on it – gravity, strong nuclear, Jedi, or what have you.  Just make the measurements and modify your theory, guys.  You’ll be .000001% closer to the truth, enough to warrant an even bigger publicly funded particle accelerator.

If particle sizes and masses aren’t invariant, who is to say that they don’t change over time?  Cosmologist Christof Wetterich of the University of Heidelberg thinks this might be possible.  In fact, says Wetterich, if particles are slowly increasing in mass, the universe may not be expanding after all.  His recent paper suggests that spectral redshift – Hubble’s famous discovery at Mount Wilson that led to the most widely accepted theory of the universe, the big bang – may actually be due to changing particle properties over time.  So far, no one has been able to shoot a hole in his theory.

Oops.  “Remember what we said about the big bang being a FACT?  Never mind.”

Flexi-particles.  Now there is evidence, and there are major philosophical repercussions.

And still, The Universe – Solved! predicts there is no stuff.

The ultimate in flexibility is pure data.


Ever Expanding Horizons

Tribal Era

Imagine the human world tens of thousands of years ago.  A tribal community lived together, farming, hunting, trading, and taking care of each other.  There was plenty of land to support the community and as long as there were no strong forces driving them to move, they stayed where they were, content.  As far as they knew, “all that there is” was just that community and the land that was required to sustain it.  We might call this the Tribal Era.

Continental Era

But, at some point, for whatever reason – drought, restlessness, desire for a change of scenery – another tribe moved into the first tribe’s territory.  For the first time, that tribe realized that the world was bigger than their little community.  In fact, upon a little further exploration, they realized that the boundaries of “all that there is” had just expanded to the continent on which they lived, and there was a plethora of tribes in this new greater community.  The horizon of their reality reached a new boundary, and their community was now a thousandfold larger than before.

Planetary Era

According to researchers, the first evidence of cross-oceanic exploration dates to about 9000 years ago.  Now, suddenly, a human community might be subject to an invasion by an entirely different race of people, with different languages, coming from a place previously thought not to exist.  Again, the horizon expands, and “all that there is” reaches a new level, one that consists of the entire planet.

Solar Era

The Ancient Greek philosophers and astronomers recognized the existence of other planets.  Gods were thought to have come from the sun or elsewhere in the heavens, which consisted of a celestial sphere not too far away from the surface of our planet.

Imaginations ran wild as horizons expanded once again.

Galactic Era

In 1610, Galileo looked through his telescope, and suddenly humanity’s horizon expanded by another level.  Not only did the other planets resemble ours, but it was clear that the sun was the center of the known universe, stars were extremely far away, strange distant nebulae were more than nearby clouds of debris, and the Milky Way consisted of distant stars.  In other words, “all that there is” became our galaxy.

Universal Era

A few centuries later, in 1922, it was time to expand our reality horizon once again, as the 100-inch telescope at Mount Wilson revealed that some of those fuzzy nebulae were actually other galaxies.  The concept of deep space and the “Universe” was born, and new measurement techniques courtesy of Edwin Hubble showed that “all that there is” was actually billions of times more than previously thought.

Multiversal Era

These expansions of “all that there is” are happening so rapidly now that we are still debating the details of one worldview while exploring the next and being introduced to yet another.  Throughout the latter half of the 20th century, a variety of ideas were put forth that expanded our reality horizon to the concept of many (some said infinite) parallel universes.  The standard inflationary big bang theory allowed for multiple Hubble volumes of universes that are theoretically within our same physical space but unobservable due to the limitations of the speed of light.  Bubble universes, MWI, and many other theories exist but lack any evidence.  In 2003, Max Tegmark framed all of these nicely in his concept of four levels of Multiverse.

I sense one of those feelings of acceleration with respect to the entire concept of expanding horizons, as if our understanding of “all that there is” is growing exponentially.  I was curious to see how exponential it actually was, so I took the liberty of plotting each discrete step in our evolving awareness of “all that there is” on a logarithmic plot, and guess what?

Almost perfectly exponential! (see below)

[Figure: each expansion of “all that there is” plotted on a logarithmic timescale]
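For anyone who wants to reproduce the plot, here is a rough sketch using round “years before present” figures drawn from the eras described above. The exact values are illustrative guesses of mine, not the precise data behind the original chart:

```python
import matplotlib.pyplot as plt

# Approximate "years before present" for each horizon expansion, using
# round, illustrative numbers based on the eras described above.
eras = ["Tribal", "Continental", "Planetary", "Solar",
        "Galactic", "Universal", "Multiversal"]
years_ago = [50_000, 20_000, 9_000, 2_500, 400, 90, 10]

plt.semilogy(range(len(eras)), years_ago, "o-")
plt.xticks(range(len(eras)), eras, rotation=45)
plt.ylabel("Years before present (log scale)")
plt.title('Expansions of "all that there is"')
plt.tight_layout()
plt.show()
```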

Dramatically, the trend points to a new expansion of our horizons within the past 10 years or so.  Could there really be something beyond a multiverse of infinitely parallel universes?  And has such a concept recently been put forth?

Indeed there is, and it has.  And, strangely, it isn’t even something new.  For millennia, the spiritual side of humanity has explored non-physical realities: Shamanism, Heaven, Nirvana, mystical experiences, astral travel.  Our Western scientific mentality that “nothing can exist that cannot be consistently and reliably reproduced in a lab” has prevented many of us from accepting these notions.  However, there is a new school of thought that is based on logic, scientific studies, and real data (if your mind is open), as well as personal knowledge and experience.  Call it digital physics (Fredkin), digital philosophy, simulation theory (Bostrom), programmed reality (yours truly), or My Big TOE (Campbell).  Tom Campbell and others have taken the step of incorporating into this philosophy the idea of non-material realms – which is, in fact, a new expansion of “all that there is.”  While I don’t particularly like the term “dimensional,” I’m not sure that we have a better descriptor.

Interdimensional Era

Or maybe we should just call it “All That There Is.”

At least until a few years from now.

Einstein Would Have Loved Programmed Reality

Aren’t we all Albert Einstein fans, in one way or another?  If it isn’t because of his 20th-century revolution in physics (relativity), or his Nobel Prize that led to that other 20th-century revolution (quantum mechanics), or his endless Twain-esque witticisms, it’s his underachiever-turned-genius story, or maybe even that crazy head of hair.  For me, it’s his regular-guy sense of humor:

“The hardest thing in the world to understand is the income tax.”

and…

“Put your hand on a hot stove for a minute, and it seems like an hour. Sit with a pretty girl for an hour, and it seems like a minute. THAT’S relativity.”

[Photo: Albert Einstein on a bicycle in Niels Bohr’s garden]

But, the more I read about Albert and learn about his views on the nature of reality, the more affinity I have with his way of thinking.  He died in 1955, hardly deep enough into the digital age to have had a chance to consider the implications of computing, AI, consciousness, and virtual reality.  Were he alive today, I suspect that he would be a fan of digital physics, digital philosophy, simulism, programmed reality – whatever you want to call it.  Consider these quotes and see if you agree:

“Reality is merely an illusion, albeit a very persistent one.”

“I wished to show that space-time isn’t necessarily something to which one can ascribe a separate existence, independently of the actual objects of physical reality. Physical objects are not in space, but these objects are spatially extended. In this way the concept of ’empty space’ loses its meaning.”

“As far as the laws of mathematics refer to reality, they are uncertain; and as far as they are certain, they do not refer to reality.”

“A human being is part of a whole, called by us the ‘Universe’ —a part limited in time and space. He experiences himself, his thoughts, and feelings, as something separated from the rest—a kind of optical delusion of his consciousness. This delusion is a kind of prison for us, restricting us to our personal desires and to affection for a few persons nearest us. Our task must be to free ourselves from this prison by widening our circles of compassion to embrace all living creatures and the whole of nature in its beauty.”

“Space does not have an independent existence.”

“Hence it is clear that the space of physics is not, in the last analysis, anything given in nature or independent of human thought.  It is a function of our conceptual scheme [mind].”

“Every one who is seriously involved in the pursuit of science becomes convinced that a spirit is manifest in the laws of the Universe – a spirit vastly superior to that of man, and one in the face of which we with our modest powers must feel humble.”

I can only imagine the insights that Albert would have had into the mysteries of the universe, had he lived well into the computer age.  It would have given him an entirely different perspective on that conundrum that puzzled him throughout his later life – the relationship of consciousness to reality.  And he might have even tossed out the Unified Field Theory that he was forever chasing and settled in on something that looked a little more digital.

 

The Digital Reality Bandwagon

I tend to think that reality is just data.  That the fundamental building blocks of matter and space will ultimately be shown to be bits, nothing more.  Those who have read my book, follow this blog, or my Twitter feed, realize that this has been a cornerstone of my writing since 2006.

Not that I was the first to think of any of this.  Near as I can tell, Philip K. Dick may deserve that credit, having said “We are living in a computer programmed reality” in 1977, although I am sure that someone can find some Shakespearean reference to digital physics (“O proud software, that simulates in wanton swirl”).

Still, a mere six years ago, it was a lonely space to be in.  The few digital reality luminaries at that time included:

But since then…

– MIT Engineering Professor Seth Lloyd published “Programming the Universe” in 2006, asserting that the universe is a massive quantum computer running a cosmic program.

– Nuclear physicist Thomas Campbell published his excellent unifying theory “My Big TOE” in 2007.

– Brian Whitworth, PhD, authored a paper containing evidence that our reality is programmed: “The emergence of the physical world from information processing,” Quantum Biosystems 2010, 2 (1) 221-249. http://arxiv.org/abs/0801.0337

– University of Maryland physicist, Jim Gates, discovered error-correction codes in the laws of physics. See “Symbols of Power”, Physics World, Vol. 23, No 6, June 2010.

– Fermilab astrophysicist Craig Hogan speculated that space is quantized, based on results from GEO600 measurements in 2010.  See: http://www.wired.com/wiredscience/2010/10/holometer-universe-resolution/.  A holometer experiment is being constructed to test the idea: http://holometer.fnal.gov/

– Rich Terrile, director of the Center for Evolutionary Computation and Automated Design at NASA’s Jet Propulsion Laboratory, hypothesized that we are living in a simulated reality. http://www.vice.com/read/whoa-dude-are-we-inside-a-computer-right-now-0000329-v19n9

– Physicists Leonard Susskind and Gerard ’t Hooft developed the holographic principle of black hole physics (the idea that our universe is digitally encoded on a two-dimensional surface, like the horizon of a black hole).

Even mainstream media outlets are dipping a toe into the water to see what kinds of reactions they get, such as this recent article in New Scientist Magazine: http://www.newscientist.com/article/mg21528840.800-reality-is-everything-made-of-numbers.html

So, today, I feel like I am in really great company, and it is fun to watch all of the futurists, philosophers, and scientists jump on the new digital reality bandwagon.  The plus side will include the infusion of new ideas and the resulting synthesis of theory, as well as the pushing of the boundaries of experimental validation.  The downside will be all of the so-called experts jockeying for position.  In any case, it promises to be a wild ride, one that should last the twenty or so years it will take to create the first full-immersion reality simulation.  Can’t wait.