How to Survive an AI Apocalypse – Part 9: The Stabilization Effect

PREVIOUS: How to Survive an AI Apocalypse – Part 8: Fighting Back

Here’s where it gets fun.

Or goes off the rails, depending on your point of view.

AI meets Digital Philosophy meets Quantum Mechanics meets UFOs.

This entire blog series has been about surviving an AI-based Apocalypse, a very doomsday kind of event. For some experts, this is all but inevitable. You readers may be coming to a similar conclusion.

But haven’t we heard this before? Doomsday prophecies have been around as long as… Keith Richards. The Norse Ragnarök, the Hindu prophecy of the end of times during the current age of Kaliyuga, the Zoroastrian Renovation, and of course, the Christian Armageddon. An ancient Assyrian tablet dated 2800-2500 BCE tells of corruption and unruly teenagers and prophesies that “earth is in its final days; the world is slowly deteriorating into a corrupt society that will only end with its destruction.” Fast forward to the modern era, where the Industrial Revolution was going to lead to the world’s destruction. We have since had the energy crisis, the population crisis, and the doomsday clock ticking down to nuclear armageddon. None of it ever comes to pass.

Is the AI apocalypse more of the same, or is it frighteningly different in some way? This Part 9 of the series will examine such questions and present a startling conclusion that all may be well.

THE NUCLEAR APOCALYPSE

To get a handle on the likelihood of catastrophic end times, let’s take a deep dive into the specter of a nuclear holocaust.

It’s hard for many of us to appreciate what a frightening time it was in the 1950s, as people built fallout shelters and children regularly executed duck and cover drills in the classrooms.

Often considered to be the most dangerous point of the Cold War, the 1962 Cuban Missile Crisis was a standoff between the Soviet Union and the United States involving the deployment of Soviet missiles in Cuba. At one point the US Navy began dropping depth charges to force a nuclear-armed Soviet submarine to surface. The crew on the sub, having had no radio communication with the outside world, didn’t know whether war was breaking out or not. The captain, Valentin Savitsky, wanted to launch a nuclear weapon, but a unanimous decision among the three top officers was required for launch. Vasily Arkhipov, the second in command, was the sole dissenting vote and even got into an argument with the other two officers. His courage effectively prevented the nuclear war that would likely have resulted. Thomas S. Blanton, later the director of the US National Security Archive, called Arkhipov “the man who saved the world.”

But that wasn’t the only time we were a hair’s breadth away from the nuclear apocalypse.

On May 23, 1967, US military commanders issued a high alert due to what appeared to be jammed missile-detection radars in Alaska, Greenland, and the UK. Since jamming was considered an act of war, they authorized preparations for war, including the deployment of aircraft armed with nuclear weapons. Fortunately, a NORAD solar forecaster identified the reason for the jammed radars – a massive solar storm.

Then, on the other side of the red curtain, on 26 September 1983, with international tensions still high after the recent Soviet military shootdown of Korean Air Lines Flight 007, a nuclear early-warning system in Moscow reported that 5 ICBMs (intercontinental ballistic missiles) had been launched from the US. Lieutenant Colonel Stanislav Petrov was the duty officer at the command center and suspected a false alarm, so he awaited confirmation before reporting, thereby disobeying Soviet protocol. He later said that had he not been on the shift at that time, his colleagues would have reported the missile launch, likely triggering a nuclear war.

In fact, over the years there have been at least 21 nuclear war close calls, any of which could easily have led to a nuclear conflagration and the destruction of humanity. The following timeline, courtesy of the Future of Life Institute, shows how many occurred in just the 30-year period from 1958 to 1988.

It kind of makes you wonder what else could go wrong…

END OF SOCIETY PREDICTED

Another modern age apocalyptic fear was driven by the recognition that exponential growth and limited resources are ultimately incompatible. At the time, the world population was growing exponentially and important resources like oil and arable land were being depleted. The Rockefeller Foundation partnered with the OECD (Organization for Economic Cooperation and Development) to form The Club of Rome, a group of current and former heads of state, scientists, economists, and business leaders to discuss the problem and potential solutions. In 1972, with the support of computational modeling from MIT, they issued their first report, The Limits to Growth, which painted a bleak picture of the world’s future. Some of the predictions (and their ultimate outcomes) follow:

Another source for this scare was the book The Population Bomb by Stanford biologist Paul Ehrlich. He and people like Harvard biologist George Wald also made some dire predictions…

There is actually no end to failed environmental apocalyptic predictions – too many to list. But a brief smattering includes:

  • “Unless we are extremely lucky, everyone will disappear in a cloud of blue steam in 20 years.” (New York Times, 1969)
  • “UN official says rising seas to ‘obliterate nations’ by 2000.” (Associated Press, 1989)
  • “Britain will Be Siberian in less than 20 years” (The Guardian, 2004)
  • “Scientist Predicts a New Ice Age by 21st Century” (Boston Globe, 1970)
  • “NASA scientist says we’re toast. In 5-10 years, the arctic will be ice free.” (Associated Press, 2008)

Y2K

And who could forget this apocalyptic gem…

My intent is not to cherry-pick the poor predictions and make fun of them. It is simply that when we are swimming in a sea of impending doom, it is really hard to see the way out. And yet, there does always seem to be a way out.

Sometimes it is mathematical. For example, there was a mathematical determination of when we would run out of oil based on known supply and rate of usage, perhaps factoring in the trend of increasing usage. But what was not factored into the equation were the counteracting effects of newly discovered reserves and improvements in engine efficiency. One could argue that in the latter case, the scare achieved its purpose, just as the fear of global warming has resulted in a number of new environmental policies and laws, such as California’s upcoming ban on gasoline-powered vehicles in 2035. However, that isn’t always the case. Many natural resources, for instance, seem to actually be increasing in supply. I am not necessarily arguing for something like the abiotic oil theory. However, at the macro level, doesn’t it sometimes feel like a game of Civilization, where we are given a set of resources, cause-and-effect interrelationships, and the ability to acquire certain skills? In the video game, when we fail on an apocalyptic level, we simply hit the reset button and start over. But in real life we can’t do that. Yet, doesn’t it seem like the “game makers” always hand us a way out, such as unheard-of new technologies that suddenly become available? And it isn’t always human ingenuity that saves us. Sometimes the right person is on duty at the perfect time against all odds. Sometimes oil fields magically replenish on their own. Sometimes asteroids strike the most remote place on the planet.

THE STABILIZATION EFFECT

In fact, it seems statistically significant that apocalypses, while seemingly imminent, NEVER really occur. So much so that I decided to model it with a spreadsheet using random number generation (also demonstrating how weak my programming skills have gotten). The intent of the model is to encapsulate the state of humanity on a simple timeline using a parameter called “Mood” for lack of a better term. We start at a point in society that is neither euphoric (the Roaring Twenties) nor disastrous (the Great Depression). As time progresses, events occur that push the Mood in one direction or the other, with a 50/50 chance of either occurring. The assumption in this model is that no matter what the Mood is, it can still get better or worse with equal probability. Each of the following graphs depicts a randomly generated timeline.
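To make the model concrete, here is a minimal sketch of this kind of simulation in Python rather than a spreadsheet. The step size and thresholds are arbitrary illustrative choices, not the exact values from my spreadsheet; the point is simply the 50/50 random walk.

```python
import random

def random_mood_timeline(steps=500, step_size=1.0, seed=None):
    """Unbiased 'Mood' random walk: at every step there is a 50/50 chance
    of the mood moving up or down by a fixed amount."""
    rng = random.Random(seed)
    mood = 0.0                      # start neutral: neither euphoria nor disaster
    timeline = [mood]
    for _ in range(steps):
        mood += step_size if rng.random() < 0.5 else -step_size
        timeline.append(mood)
    return timeline

# Count how many random timelines breach a hypothetical "apocalypse" threshold
POSITIVE_LIMIT, NEGATIVE_LIMIT = 20, -20
runs = [random_mood_timeline(seed=i) for i in range(100)]
breaches = sum(1 for t in runs if min(t) <= NEGATIVE_LIMIT)
print(f"{breaches}/100 unbiased runs hit the negative limit")
```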

On the graph are two thresholds – one of a positive nature, where things seemingly can’t get much better, and one of a negative nature, where all it should take is a nudge to send us down the path to disaster. In any of the situations we’ve discussed in this part of the series, when we are on the brink of apocalypse, the statistical likelihood that the situation will improve at that point should be no better than 50/50. If true, running a few simulations shows that an apocalypse is actually fairly likely. Figures 1 and 3 pop over the positive limit and then turn back toward neutral. Figure 2 seems to take off in the positive direction even after passing the limit. Figure 4 hits and breaks through the negative limit several times, implying that if our reality actually worked this way, apocalyptic situations would be quite likely.

However, what always seems to happen is that when things get that bad, there is a stabilizing force of some sort. I made an adjustment to my reality model by inserting some negative feedback to model this stabilizing effect. For those unfamiliar with the term, complex systems can have positive or negative feedback loops; often both. Negative feedback tends to bring a system back to a stable state. Examples in the body include the maintenance of body temperature and blood sugar levels. If blood sugar gets too high, the pancreas secretes insulin which chemically reduces the level. When it gets too low, the pancreas secretes glucagon which increases the level. In nature, when the temperature gets high, cloud level increases, which provides the negative feedback needed to reduce the temperature. Positive feedback loops also exist in nature. The runaway greenhouse effect is a classic example.
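One way to add such a stabilizing force to the model is to let the probability of an upward step depend on how far the Mood has drifted from neutral. A minimal sketch of that adjustment (the feedback constant here is an arbitrary illustrative choice):

```python
import random

def stabilized_mood_timeline(steps=500, step_size=1.0, feedback=0.02, seed=None):
    """'Mood' random walk with negative feedback: the probability of an upward
    step shrinks as the mood rises and grows as it falls, pulling the timeline
    back toward neutral (much like the insulin/glucagon example above)."""
    rng = random.Random(seed)
    mood = 0.0
    timeline = [mood]
    for _ in range(steps):
        p_up = min(1.0, max(0.0, 0.5 - feedback * mood))   # restoring bias
        mood += step_size if rng.random() < p_up else -step_size
        timeline.append(mood)
    return timeline
```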

When I applied the negative feedback to the reality model, all curves tended to stay within the positive and negative limits, as shown below.

Doesn’t it feel like this is how our reality works at the most fundamental level? But how likely would it be that every aspect of our reality is subject to negative feedback? And where does that negative feedback come from?

REALITY IS ADAPTIVE

This is how I believe that reality works at its most fundamental level…

Why would that be? Two obvious ideas come to mind.

  1. Natural causes – this would be the viewpoint of reductionist materialist scientists. Heat increase causes ice sheets to melt which creates more water vapor, generating more clouds, reducing the heating effect of the sun. But this does not at all explain why the human condition, and the civilization trends that we’ve discussed in this article, always tend toward neutral.
  2. God – this would be the viewpoint of people whose beliefs are firmly grounded in their religion. God is always intervening to prevent catastrophes. But apparently God doesn’t mind minor catastrophes and plenty of pain and suffering in general. More importantly though, this does not explain dynamic reality generation.

DYNAMIC REALITY GENERATION

Enter Quantum Mechanics.

The double-slit experiment was first done by Thomas Young back in 1801 as an attempt to determine whether light was composed of particles or waves. A beam of light was projected at a screen with two vertical slits. If light was composed of particles, only two bands of light should appear on the phosphorescent screen behind the one with the slits. If wave-based, an interference pattern should result. The wave theory was initially confirmed experimentally, but that was later called into question by Einstein and others.

The experiment was later done with particles, like electrons, and it was naturally assumed that these would be shown to be hard, fixed particles, generating the expected two-band pattern shown on the right.

However, what resulted was an interference pattern, implying that the electrons were actually waves. Thinking that perhaps the electrons were interfering with each other, the experimenters modified the setup to shoot one electron at a time. And still the interference pattern slowly built up on the back screen.
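For reference, the fringe pattern that builds up when both slits are open is described by the standard textbook two-slit intensity formula (quoted here for context, with the single-slit diffraction envelope omitted):

```latex
% slit separation d, wavelength \lambda, screen distance L, screen position x
I(x) \;\approx\; 4 I_0 \cos^2\!\left(\frac{\pi d x}{\lambda L}\right)
% with which-way information present, the interference (cross) term vanishes
% and the pattern flattens to roughly I(x) \approx 2 I_0 (two overlapping bands)
```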

To make sense of the interference pattern, experimenters wondered if they could determine which slit each electron went through, so they put a detector before the double slit. Et voilà, the interference pattern disappeared! It was as if the conscious act of observation converted the electrons from waves to particles. The common interpretation was that the electrons actually exist only as a probability function, and the observation snaps them into existence.

It is very much like the old adage that a tree falling in the woods makes no sound unless someone is there to hear it. Of course, this idea of putting consciousness as a parameter in the equations of physics generated no end of consternation for the deterministic materialists. They have spent the last twenty years designing experiments to disprove this “Observer Effect,” to no avail. Even when the “which way” detector is placed after the double slit, the interference pattern disappears. The only tenable conclusion is that reality does not exist in an objective manner, and its instantiation depends on something. But what?

The diagram below helps us visualize the possibilities. When does reality come into existence?

Clearly it is not at points 1, 2, or 3, because it isn’t until the “which way” detector is installed that we see the shift in reality. So is it due to the detector itself, or to the conscious observer reading the results of the detector? One could imagine experiments where the results of the “which way” detector are hidden from the conscious observer for an arbitrary period of time – maybe printed out and sealed in an envelope without being looked at, left on a shelf for a day while the interference pattern persists, until someone opens the envelope and suddenly the interference pattern disappears. I have always suspected that the answer will be that reality comes into existence at point 4. I believe it is only logical that a reality-generating universe be efficient. Recent experiments bear this out.

I believe this says something incredibly fundamental about the nature of our reality. But what would efficiency have to do with the nature of reality? Let’s explore a little further – what kinds of efficiencies would this lead to?

POP QUIZ! – is reality analog or digital? There is actually no settled answer to this question, and many papers have been written in support of either point of view. But if our reality is created on some sort of underlying construct, there is only one answer – it has to be digital. Here’s why…

How much information would it take to fully describe the cup of coffee on the right?

In an analog reality, it would take an infinite amount of information.

In a digital reality, fully modeled at the Planck resolution (what some people think is the deepest possible digital resolution), it would require 4×10^71 bits/second, give or take. It’s a huge number for sure, but infinitely less than the analog case.

But wait a minute. Why would we need that level of information to describe a simple cup of coffee? So let’s ask a different question… How much information is needed for a subjective human experience of that cup of coffee – the smell, the taste, the visual experience? You don’t really need to know the position and momentum vector of each subatomic particle in each molecule of coffee in that cup. All you need to know is what it takes to experience it. The answer is roughly 1×10^9 bits/second. In other words, there could be as much as a 4×10^62 factor of compression involved in generating a subjective experience. We don’t really need to know where each electron is in the coffee, just as you don’t need to know which slit each electron goes through in the double-slit experiment. That is, UNTIL YOU MEASURE IT!
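Just to check the arithmetic on that compression claim, using the figures above:

```latex
\frac{4 \times 10^{71}\ \text{bits/s (full Planck-resolution model)}}
     {1 \times 10^{9}\ \text{bits/s (subjective experience)}}
\;=\; 4 \times 10^{62}
```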

So, the baffling results of the double slit experiments actually make complete sense if reality is:

  • Digital
  • Compressed
  • Dynamically generated to meet the needs of the inhabitants of that reality

Sounds computational, doesn’t it? In fact, if reality were a computational system, it would make sense for it to have efficiencies at this level.

There are such systems – one well-known example is a video game called No Man’s Sky, which dynamically generates its universe as the user plays the game. Art inadvertently imitating life?
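For readers who want a feel for the idea, here is a toy Python sketch of on-demand, seeded procedural generation. It is not No Man’s Sky’s actual engine – just an illustration of how a region of a universe can remain uncomputed until an observer asks for it, while still being consistent for every observer who looks:

```python
import hashlib
import random

def region_contents(universe_seed: int, coords: tuple) -> dict:
    """Deterministically 'generate' a region of a toy universe on demand.
    Nothing is stored up front, yet every observer who asks for the same
    coordinates gets the same answer."""
    key = f"{universe_seed}:{coords}".encode()
    rng = random.Random(int.from_bytes(hashlib.sha256(key).digest()[:8], "big"))
    return {
        "stars": rng.randint(0, 5),
        "planets": rng.randint(0, 12),
        "has_anomaly": rng.random() < 0.01,
    }

# The region at (42, -7, 3) is only computed when someone "looks" at it:
print(region_contents(universe_seed=2007, coords=(42, -7, 3)))
```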

Earlier in this article I suggested that the concept of God could explain the stabilization effect of our reality. If we redefine “God” to mean “All That There Is” (of which, our apparent physical reality is only a part), reality becomes a “learning lab” that needs to be stable for our consciousnesses to interact virtually.

I wrote about this and proposed this model back in 2007 in my first book, “The Universe-Solved!” In 2021, an impressive set of physicists and technologists came up with the same theory, which they called “The Autodidactic Universe.” They collaborated to explore methods, structures, and topologies by which the universe might be learning and modifying its laws according to what is needed. Such ideas included neural nets and Restricted Boltzmann Machines. This provides an entirely different way of looking at any potential apocalypse. And it makes you wonder…

UFO INTERVENTION

In 2021, over one hundred military personnel, including Retired Air Force Captain Robert Salas, Retired First Lieutenant Robert Jacobs, and Retired Captain David Schindele met at the National Press Club in Washington, DC to present historical case evidence that UFOs have been involved with disarming nuclear missiles. A few examples…

  • Malmstrom Air Force Base, Montana, 1967 – “a large glowing, pulsating red oval-shaped object hovering over the front gate,” as alarms went off showing that nearly all of the 10 missiles displayed in the control room had been disabled.
  • Minot Air Force Base, North Dakota, 1966 – Eight airmen said that 10 missiles at silos in the vicinity all went down with guidance and control malfunctions when an 80- to 100-foot-wide flying object with bright flashing lights hovered over the site.
  • Vandenberg Air Force Base, California, 1964 – “It went around the top of the warhead, fired a beam of light down on the top of the warhead.” After circling, it “then flew out the frame the same way it had come in.”
  • Ukraine, 1982 – Launch countdowns were activated for 15 seconds while a disc-shaped UFO hovered above the base, according to declassified KGB documents.

As the History Channel reported, areas of high UFO activity are correlated with nuclear and military facilities worldwide.

Perhaps UFOs are an artifact of our physical reality learning lab, under the control of some conscious entity or possibly even an autonomous (AI) bot in the system – part of the “autodidactic” programming mechanisms that maintain stability in our programmed reality. Other mechanisms could involve things like adjusting the availability of certain resources or even nudging consciousnesses toward solutions to problems. If this model of reality is accurate, we may find that we have little to worry about regarding an AI apocalypse. Instead, it will just be another force that contributes toward our evolution.

To that end, there is also a sector of thinkers who recommend a different approach. Rather than fight the AI progression, or simply let the chips fall, we should welcome our AI overlords and merge with them. That scenario will be explored in Part 10 of this series.

NEXT: How to Survive an AI Apocalypse – Part 10: If You Can’t Beat ’em, Join ’em

Wigner’s Friend likes Digital Consciousness

Apparently your reality may be different than mine. Wait, what???

Several recent studies have demonstrated to an extremely high degree of certainty that objective reality does not exist. This year, adding to the mounting pile of evidence for a consciousness-centric reality, came the results of an experiment that, for the first time, tested the highly paradoxical Wigner’s Friend thought experiment. The conclusion was that your reality and my reality can actually be different. I don’t mean different in the sense that your rods and cones have a different sensitivity, or that your brain interprets things differently, but fundamentally, intrinsically different. Ultimately, things may happen in your reality that might not happen in my reality, and vice versa.

Almost sounds like a dream, doesn’t it? Or like you and I are playing some kind of virtual reality game, and the information stream that is coming into your senses via your headset or whatever is different from the information stream coming into mine.

BINGO! That’s Digital Consciousness in a nutshell.

Eugene Paul Wigner received the Nobel Prize in Physics in 1963 for his work on quantum mechanics and the structure of the atom. More importantly perhaps, he, along with Max Planck, Niels Bohr, John Wheeler, Kurt Gödel, Erwin Schrödinger, and many other forward-thinking scientists and mathematicians, opposed the common materialistic worldview shared by most scientists of his day (not to mention most scientists of today). As such, he was an inspiration for, and a forerunner of, consciousness-centric philosophies such as my Digital Consciousness, Donald Hoffman’s MUI theory, and Tom Campbell’s My Big TOE.

As if Schrödinger’s Cat wasn’t enough to bend people’s minds, Wigner raised the stakes of quantum weirdness in 1961 when he proposed a thought experiment referred to as “Wigner’s Friend.” The scenario involves two people – let’s say Wigner and his friend. One of them is in an enclosed space, hidden from the other, and observes something like Schrödinger’s cat, itself hidden in a box. When Wigner opens the box, the wave function collapses, establishing whether or not the cat is dead. But the cat is still in superposition for Wigner’s friend, who remains outside of the entire subsystem. Only when he opens the door and sees Wigner and the result of the cat experiment does his wave function collapse. Therefore, Wigner and his friend have differing interpretations of when reality becomes realized; hence, different realities.

Fast forward to 2019, when scientists at Heriot-Watt University in Edinburgh (Massimiliano Proietti, Alexander Pickston, Francesco Graffitti, Peter Barrow, Dmytro Kundys, Cyril Branciard, Martin Ringbauer, and Alessandro Fedrizzi) were finally able to test the paradox using double slits, lasers, and polarizers. The results confirmed Wigner’s hypothesis to a certainty of 5 standard deviations, which essentially means that objective reality doesn’t exist and your and my realities can differ – to a certainty of about 1 in 3.5 million!
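As a quick sanity check on that figure (my own back-of-the-envelope conversion, assuming the usual one-sided Gaussian convention for a 5-sigma result):

```python
from scipy.stats import norm

p = norm.sf(5)   # one-sided tail probability of a 5-standard-deviation result
print(f"p ≈ {p:.2e}, i.e. about 1 in {1/p:,.0f}")   # ≈ 1 in 3,500,000
```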

Of course, I live for this stuff, because it simply adds one more piece of supporting evidence to my theory, Digital Consciousness. And it adds yet another nail in the coffin of that ancient scientific religion, materialism.

How does it work?

Digital Consciousness asserts that consciousness is primary; hence, all that we can truly know is what we each experience subjectively.  This experiment doesn’t necessarily prove that the fundamental construct of reality is information, but it is a lot more plausible that individual experiences based on virtual simulations are at the root of this paradox rather than, say, a complex violation of Hilbert space, allowing parallel realities based on traditional physical fields to intermingle.  As an analogy, imagine that you are playing an MMORPG (video game with many other simultaneous players) – it isn’t difficult to see how each individual could be having a slightly different experience, based perhaps on their skill level or something.  As information is the carrier of the experience, the information entering the consciousness of one player could easily be slightly different than the information entering the consciousness of another player. This is by far the simplest explanation, and by Occam’s Razor, supports my theory.

Too bad Wigner isn’t alive to see this experiment, or to ponder Digital Consciousness theory. But I’m sure his consciousness is having a good laugh.

 

Why the Universe Only Needs One Electron

According to renowned physicist Richard Feynman (recounted during his 1965 Nobel lecture)…

“I received a telephone call one day at the graduate college at Princeton from Professor Wheeler, in which he said, ‘Feynman, I know why all electrons have the same charge and the same mass.’ ‘Why?’ ‘Because, they are all the same electron!’”

John Wheeler’s idea was that this single electron moves through spacetime in a continuous world line like a big knot, while our observation of many identical but separate electrons is just an illusion because we only see a “slice” through that knot. Feynman was quick to point out a flaw in the idea; namely that if this was the case we should see as many positrons (electrons moving backward in time) as electrons, which we don’t.

But Wheeler, also known for now-accepted concepts like wormholes, quantum foam, and “it from bit,” may have been right on the money with this seemingly outlandish idea.

As I have amassed a tremendous set of evidence that our reality is digital and programmatic (some of which you can find here as well as many other blog posts), I will assume that to be the case and proceed from that assumption.

Next, we need to invoke the concept of a Finite State Machine (FSM), which is simply a computational system defined by a finite set of states, where the rules that determine the next state are a function of the current state and one or more input events. The FSM may also generate a number of “outputs,” which are likewise logical functions of the current state.

The following is an abstract example of a finite state machine:
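For the programmers in the audience, the same idea can be expressed as a minimal code sketch. The states and events below are a toy example (a three-state traffic light), nothing fundamental:

```python
# A minimal finite state machine: the next state is a pure function of the
# current state and the input event.
TRANSITIONS = {
    ("red",    "timer"): "green",
    ("green",  "timer"): "yellow",
    ("yellow", "timer"): "red",
}

def step(state: str, event: str) -> str:
    """Return the next state given the current state and an input event."""
    return TRANSITIONS.get((state, event), state)   # unknown events: stay put

state = "red"
for _ in range(4):                 # each iteration is one "clock cycle"
    state = step(state, "timer")
    print(state)                   # green, yellow, red, green
```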

A computational system, like that laptop on your desk that the cat sits on, is by itself a finite state machine. Each clock cycle gives the system a chance to compute a new state, which is defined by a logical combination of the current state and all of the input changes. A video game, a flight simulator, and a trading system all work the same way. The state changes in a typical laptop about 4 billion times per second. It may actually take many of these 250-picosecond clock cycles to result in an observable difference in the output of the program, such as the movement of your avatar on the screen. Within the big, complex laptop finite state machine are many others running, such as each of the dozens or hundreds of processes that you see when you click on your “activity monitor.” And within each of those FSMs are many others, such as the method (or “subprogram”) that is invoked when it is necessary to generate the appearance of a new object on the screen.

There is also a concept in computer science called an “instance.” It is similar to the idea of a template. As an analogy, consider the automobile. Every Honda that rolls off the assembly line is different, even if it is the same model with the same color and same set of options. The reason it is different from another with the exact same specifications is that there are microscopic differences in every part that goes into each car. In fact, there are differences in the way that every part is connected between two cars of equal specifications. However, imagine if every car were exactly the same, down to the molecule, atom, particle, string, or what have you. Then we could say that each car is an instance of its template.

This would also be the case in a computer-based virtual reality. Every similar car generated in the computer program is an instance of the computer model of that car, which, by the way, is a finite state machine. Each instance can be given different attributes, however, such as color, loudness, or power. In some cases, such as a virtual racing game where the idea of a car is central to the game, each car may be rather unique in the way that it behaves, or responds to the inputs from the controller, so there may be many different FSMs for these different types of cars. However, for any program, there will be FSMs that are so fundamental that there only needs to be one of that type of object; for example, a leaf.
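In code, the template/instance distinction looks something like the following toy sketch (not taken from any real game engine; the attributes and update rule are arbitrary illustrations):

```python
class CarFSM:
    """One 'template' (class) defining the rules every car follows.
    Each object created from it is an instance with its own attributes."""
    def __init__(self, color: str, power_hp: int):
        self.color = color
        self.power_hp = power_hp
        self.speed = 0.0

    def update(self, throttle: float, dt: float = 1.0):
        # The same rule for every instance; only the state differs per instance.
        self.speed += throttle * self.power_hp * 0.01 * dt

red_car  = CarFSM("red", 300)    # two instances of the same template,
blue_car = CarFSM("blue", 120)   # differing only in their attributes
red_car.update(throttle=1.0)
blue_car.update(throttle=1.0)
print(red_car.speed, blue_car.speed)   # 3.0 1.2
```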

In our programmatic reality (what I like to call the Reality Learning Lab, or RLL), there are also FSMs that are so fundamental that there only needs to be one FSM for that type of object. And every object of that type is merely an instance of that FSM. Such as an electron.

An electron is fundamental. It is a perfect example of an object that should be modeled by a finite state machine. There is no reason for any two electrons to have different rules of behavior. They may have different starting conditions and different influences throughout their lifetime, but they would react to those conditions and influences with exactly the same rules. Digital Consciousness Theory provides the perfect explanation for this. Electrons are simply instances of the electron finite state machine. There is only one FSM for the electron, just as Wheeler suspected. But there are many instances of it. Each RLL clock cycle will result in the update of the state of each electron instance in our apparent physical reality.
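To make the picture concrete, here is a cartoon of the “one FSM, many instances” idea. The transition rule below is a trivial made-up drift, not real electron physics:

```python
def electron_rule(state: dict, field: float) -> dict:
    """The ONE transition function that every electron instance obeys."""
    return {
        "position": state["position"] + state["velocity"],
        "velocity": state["velocity"] - field,   # same law for every instance
    }

# Many instances: the same rules, different starting conditions.
electrons = [{"position": float(i), "velocity": 0.0} for i in range(5)]

for tick in range(3):                            # each RLL "clock cycle"
    electrons = [electron_rule(e, field=0.1) for e in electrons]

print(electrons[0], electrons[4])
```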

So, in a very real sense, Wheeler was right. There is no need for anything other than the single electron FSM. All of the electrons that we experience are just instances and follow exactly the same rules. Anything else would be inefficient, and ATTI (All That There Is) is the ultimate in efficiency.

 

Disproving the Claim that the LHC Disproves the Existence of Ghosts

Recent articles in dozens of online magazines shout things like: “The LHC Disproves the Existence of Ghosts and the Paranormal.”

To which I respond: LOLOLOLOLOL

There are so many things wrong with this backwards scientific thinking, I almost don’t know where to start.  But here are a few…

1. The word “disproves” doesn’t belong here. It is unscientific at best. Maybe use “evidence against one possible explanation for ghosts” – I can even begin to appreciate that. But if I can demonstrate even one potential mechanism for the paranormal that the LHC couldn’t detect, you cannot use the word “disprove.” And here is one potential mechanism – an unknown force that the LHC can’t explore because its experiments are designed to only measure interactions in the 4 forces physicists are aware of.

The smoking gun is Brian Cox’s statement “If we want some sort of pattern that carries information about our living cells to persist then we must specify precisely what medium carries that pattern and how it interacts with the matter particles out of which our bodies are made. We must, in other words, invent an extension to the Standard Model of Particle Physics that has escaped detection at the Large Hadron Collider. That’s almost inconceivable at the energy scales typical of the particle interactions in our bodies.” So, based on that statement, here are a few more problems…

2. “almost inconceivable” is logically inconsistent with the term “disproves.”

3. “If we want some sort of pattern that carries information about our living cells to persist…” is an invalid assumption. We do not need information about our cells to persist in a traditional physical medium for paranormal effects to have a way to propagate. They can propagate by a non-traditional (unknown) medium, such as an information storage mechanism operating outside of our classically observable means. Imagine telling a couple of scientists just 200 years ago about how people can communicate instantaneously via radio waves. Their response would be “no, that is impossible because our greatest measurement equipment has not revealed any mechanism that allows information to be transmitted in that manner.” Isn’t that the same thing Brian Cox is saying?

4. The underlying assumption is that we live in a materialist reality. Aside from the fact that Quantum Mechanics experiments have disproven this (and yes, I am comfortable using that word), a REAL scientist should allow for the possibility that consciousness is independent of grey matter and create experiments to support or invalidate such hypotheses. One clear possibility is the simulation argument. Out of band signaling is an obvious and easy mechanism for paranormal effects.  Unfortunately, the REAL scientists (such as Anton Zeilinger) are not the ones who get most of the press.

5. “That’s almost inconceivable at the energy scales typical of the particle interactions in our bodies” is also bad logic. It assumes that we fully understand the energy scales typical of the particle interactions in our bodies. If scientific history has shown us anything, it is that there is more that we don’t understand than there is that we do.


Flexi Matter

Earlier this year, a team of scientists at the Max Planck Institute of Quantum Optics, led by Randolf Pohl, made a highly accurate measurement of the radius of a proton and, at 0.841 fm, it turned out to be 4% less than previously determined (0.877 fm).  Trouble is, the previous measurements were also highly accurate.  The significant difference between the two types of measurement was the choice of interaction particle: in the traditional case, electrons, and in Pohl’s case, muons.

Figures have been checked and rechecked, and both types of measurements are solid.  All sorts of crazy explanations have been offered up for the discrepancy, but one thing seems certain: we don’t really understand matter.

Ancient Greeks thought that atoms were indivisible (hence, the name), at least until Rutherford showed otherwise in the early 1900s.  Ancient 20th-century scientists thought that protons were indivisible, at least until Gell-Mann showed otherwise in the 1960s.

So why would it be such a surprise that the size of a proton varies with the type of lepton cloud that surrounds and passes through it?  Maybe the proton is flexible, like a sponge, and a muon, at 200 times the weight of an electron, exerts a much higher contractive force on it – gravity, strong nuclear, Jedi, or what have you.  Just make the measurements and modify your theory, guys.  You’ll be .000001% closer to the truth, enough to warrant an even bigger publicly funded particle accelerator.

If particle sizes and masses aren’t invariant, who is to say that they don’t change over time?  Cosmologist Christof Wetterich of the University of Heidelberg thinks this might be possible.  In fact, says Wetterich, if particles are slowly increasing in size, the universe may not be expanding after all.  His recent paper suggests that spectral redshift – Hubble’s famous discovery at Mount Wilson that led to the most widely accepted theory of the universe, the Big Bang – may actually be due to changing particle sizes over time.  So far, no one has been able to shoot a hole in his theory.

Oops.  “Remember what we said about the big bang being a FACT?  Never mind.”

Flexi-particles.  Now there are both evidence and major philosophical repercussions.

And still, The Universe – Solved! predicts there is no stuff.

The ultimate in flexibility is pure data.


Bizarro Physics

All sorts of oddities emerge from equations that we have developed to describe reality.  What is surprising is that rather than being simply mathematical artifacts, they actually show up in our physical world.

Perhaps the first such bizarro (see DC Comics) entity was antimatter; matter with an opposite charge and spin.  A mathematical solution to Paul Dirac’s relativistic version of Schrödinger’s equation (it makes my head hurt just looking at it), antimatter was discovered 4 years after Dirac predicted it.

One of last year’s surprises was the negative frequencies that are solutions to Maxwell’s equations and have been shown to reveal themselves in components of light.

And, earlier this month, German physicists announced the ability to create a temperature below absolute zero.

So when we were told in physics class to throw out those “negative” solutions to equations because they were in the imaginary domain, and therefore had no basis in reality…uh, not so fast.

What I find interesting about these discoveries is the implications for the bigger picture.  If our reality were what most of us think it is – 3 dimensions of space, with matter and energy following the rules set forth by the “real” solutions to the equations of physics – one might say that reality trumps the math; that solutions to equations only make sense in the context of describing reality.

However, it appears to be the other way around – math trumps reality.  Solutions to equations previously thought to be in the “imaginary domain” are now being shown to manifest in our reality.

This is one more category of evidence that underlying our apparent reality are data and rules.  The data and rules don’t manifest from the reality; they create the reality.


Just when you thought Physics couldn’t get any Stranger

Tachyons, entanglement, cold fusion, dark matter, galactic filaments.  Just when you thought physics couldn’t get any stranger…

– THE VERY COLD: Fractional Quantum Hall Effect: When electrons are magnetically confined and cooled to a third of a degree above absolute zero (See more here), they seem to break down into sub-particles that act in synchronization, but with fractional charges, like 1/3, or 3/7.

– THE VERY HIGH PRESSURE: Strange Matter: The standard model of physics includes 6 types of quarks, including the 2 (“up” and “down”) that make up ordinary matter.  Matter that consists of “strange” quarks, aka Strange Matter, would be 10 times as heavy as ordinary matter.  Does it exist?  Theoretically, at very high densities, such as the core of neutron stars, such matter may exist.  A 1998 space shuttle experiment seems to have detected some, but repeat experiments have not yielded the same results.

– THE VERY LARGE DIMENSIONAL: Multidimensional Space: String theories say that we live in a 10-dimensional space, mostly because it is the only way to make quantum mechanics and general relativity play nicely together.  That is, until physicist Garrett Lisi came along and showed how it could be done with eight-dimensional space and objects called octonions.  String theorists were miffed, mostly because Lisi is not university affiliated and spends most of his time surfing in Hawaii.

– THE VERY HOT: Quark-Gluon Plasma: Heat up matter to 2 trillion degrees and neutrons and protons fall apart into a plasma of quarks called quark-gluon plasma.  In April of 2005, QGP appeared to have been created at the Brookhaven National Laboratory’s Relativistic Heavy Ion Collider (RHIC).

My view on all this is that it is scientific business as usual.  100 years ago, we lived in a smaller world; a world described solely by Newtonian Mechanics, our ordinary everyday view of how the world works.  Then, along came relativity and quantum mechanics.  Technological advances in laboratory equipment and optics allowed us to push the limits of speed and validate Relativity, which ultimately showed that Newtonian Mechanics was just an approximation of the larger, more encompassing theory of Relativity at slow speeds.  Similarly, we pushed the limits of probing the very small and validated Quantum Mechanics, which showed that Newtonian Mechanics was just an approximation of the larger, more encompassing theory of Quantum Mechanics at large scales.  In the 1960s, we pushed the limits of heat and energy and found that our Quantum Mechanical / Relativistic theory of the world was really just a low-temperature approximation of a larger theory that had to encompass Quantum Chromodynamics.  Now, we are pushing the limits of temperature, or the slowing down of particles, and discovering that there must be an even larger theory that describes the world, one that explains the appearance of fractional charges at extremely low temperatures.  Why does this keep happening, and where does it end?

Programmed Reality provides an explanation.  In fact, it actually provides two.

In one case, the programmers of our reality created a complex set of physical laws that we are slowly discovering.  Imagine a set of concentric spheres, with each successive level outward representing a higher level scientific theory of the world that encompasses faster speeds, higher temperatures, larger scales, colder temperatures, higher energies, etc.  How deep inside the sphere of knowledge are we now?  Don’t know, but this is a model that puts it in perspective.  It is a technological solution to the philosophy of Deism.

The second possibility is that as we humans push the limits of each successive sphere of physical laws that were created for us, the programmers put in place a patch that opens up the next shell of discovery, not unlike a game.  I prefer this model, for a number of reasons.  First of all, wouldn’t it be a lot more fun and interesting to interact with your creations, rather than start them on their evolutionary path and then pay no further attention?  Furthermore, this theory offers the perfect explanation for all of those scientific experiments that have generated anomalous results that have never been reproducible.  The programmers simply applied the patch before anyone else could reproduce the experiment.

Interestingly, throughout the years, scientists have fooled themselves into thinking that the discovery of everything was right around the corner.  In the mid-20th century, the ultimate goal was the Unified Field Theory.  Now, it is called a TOE, or Theory of Everything.

Let’s stop thinking we’re about to reach the end of scientific inquiry and call each successive theory a TOM, or Theory of More.

Because the only true TOE is Programmed Reality.  QED.

Gravity is Strange – Unless you understand Programmed Reality

Physicists tell us that gravity is one of the four fundamental forces of nature.  And yet it behaves quite differently than the other three.  A New Scientist article breaks down the oddities, a few of which are reproduced here:

– Gravity only pulls.  It doesn’t appear to have an opposing effect, like other forces do.  Notwithstanding the possibility that dark energy is an example of “opposite polarity” gravity, possibly due to unseen dimensions, there appears to be no solid evidence of it as there is with all other forces.

– The strengths of the other forces are comparable in magnitude, while gravity checks in at some 40 orders of magnitude weaker.

– The fine-tuned universe, a favorite topic of this site, includes some amazing gravity-based characteristics.  The rate of early-universe expansion and the strength of gravity had to balance to within 1 part in 1,000,000,000,000,000 in order for life to form.

The Anthropic Principle explains all this via a combination of the existence of zillions (an uncountably large number) of parallel universes with the idea that we can only exist in the one where all the variables line up perfectly for matter and life to form.  But that seems to me to be a pretty complex argument with a few embedded leaps of faith that make most religions look highly logical in comparison.

Then there is the Programmed Reality theory, which, as usual, offers a perfect explanation without the need for the hand-waving Anthropic Principle and the “Many Worlds” interpretation of quantum mechanics.  Gravity is not like the other forces, so let’s not keep trying to “force” it to be (pardon the pun).  Instead, it is there to keep us grounded on the planet on which we play out our reality, offering the perfect balance of “pull” to keep every fly ball from flying out of the stadium (regardless of the illegal substance abuse of the hitter), to make kite flying a real possibility, and to enable a large number of other enriching activities – while, at the same time, being weak enough to allow basketball players to dunk and planes to fly, and to enable a large number of other enriching activities.  Our scientists will continue to investigate the nature of gravity via increasingly complex projects like the LHC, unpeeling the layers of complexity that the programmers put in place to keep scientific endeavor, research, and employment moving forward.


Non-locality Explained!

A great article in Scientific American, “A Quantum Threat to Special Relativity,” is well worth the read.

Locality in physics is the idea that things are only influenced by forces that are local, or nearby.  The water boiling on the stovetop does so because of the energy imparted from the flame beneath.  Even the sounds coming out of your radio are decoded from the electromagnetic disturbance in the air next to the antenna, which has been propagating from the radio transmitter at the speed of light.  But, we all think, nothing can influence anything remotely without a “chain reaction” disturbance, which according to Einstein cannot exceed the speed of light.

However, says Quantum Mechanics, there is something called entanglement.  No, not the kind you had with Becky under the bleachers in high school.  This kind of entanglement says that particles that once “interacted” are forever entangled, whereby their properties are reflected in each other’s behavior.  For example, take 2 particles that came from the same reaction and separate them by galactic distances.  What one does, the other will follow.  This has been proven to a distance of at least 18 km and seems to violate Einstein’s theory of Special Relativity.

Einstein, of course, took issue with this whole concept in his famous EPR paper, preferring to believe that “hidden variables” were responsible for the effect.  But, in 1964, physicist John Bell developed a mathematical proof that no local theory can account for all of Quantum Mechanics’ experimental results.  In other words, the world is non-local.  Period.  It is as if, says the SciAm article, “a fist in Des Moines can break a nose in Dallas without affecting any other physical thing anywhere in the heartland.”  Alain Aspect later performed convincing experiments that demonstrated this non-locality.  Forty-five years after John Bell’s proof, scientists are coming to terms with the idea that the world is non-local and that special relativity has limitations.  Both ideas are mind-blowing.

But, as usual, there are a couple of clever paradigms that get around it all, each of which are equally mind-blowing.  In one, our old friend the “Many Worlds” theory, zillions of parallel universes are spawned every second, which account for the seeming non-locality of reality.  In the other, “history plays itself out not in the three-dimensional spacetime of special relativity but rather this gigantic and unfamiliar configuration space, out of which the illusion of three-dimensionality somehow emerges.”

I have no problem explaining all of these ideas via programmed reality.

Special Relativity has to do with our senses, not with reality.  True simultaneity is possible because our reality is an illusion.  And there is no speed limit in the truer underlying construct.  So particles have no problem being entangled.

Many Worlds can be implemented by multiple instances of reality processes.  Anyone familiar with computing can appreciate how instances of programs can be “forked” (in Unix parlance) or “spawned” (Windows, VMS, etc.).  You’ve probably even seen it on your buggy Windows PC, when instances of browsers keep popping up like crazy and you can’t kill the tasks fast enough and end up either doing a hard shutdown or waiting until the little bastard blue-screens.  Well, if the universe is just run by a program, why can’t the program fork itself whenever it needs to, explaining all of the mysteries of QM that can’t be explained by wave functions?
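For the curious, here is what forking looks like in practice – a toy, Unix-only Python sketch using os.fork, just to illustrate one process becoming two independent instances (no claim that a universe would do it this way):

```python
import os

# One running process becomes two nearly identical instances; each continues
# from the same point with its own private copy of the program's state.
mood = "undecided"
pid = os.fork()

if pid == 0:
    # Child process: one "branch" of the program's history
    mood = "cat is alive"
    print(f"child  (pid {os.getpid()}): {mood}")
    os._exit(0)
else:
    # Parent process: the other branch, unaffected by the child's assignment
    os.waitpid(pid, 0)
    print(f"parent (pid {os.getpid()}): {mood}")
```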

And then there is “configuration space.”  Nothing more complex than multiple instances of the reality program running, with the conscious entity having the ability to move between them, experiencing reality and all the experimental mysteries of Quantum Mechanics.

Hey physicists – get your heads out of the physics books and start thinking about computer science!

(thanks to Poet1960 for allowing me to use his great artwork)
