How to Survive an AI Apocalypse – Part 10: If You Can’t Beat ’em, Join ’em

PREVIOUS: How to Survive an AI Apocalypse – Part 9: The Stabilization Effect

In this marathon set of AI blogs, we’ve explored some of the existential dangers of ASI (Artificial Superintelligence) as well as some of the potential mitigating factors. It seems to me that there are three ways to deal with the upheaval that the technology promises to bring…

Part 8 was for you Neos out there, while Part 9 was for you Bobbys. But there is another possibility – merge with the beast. In fact, between wearable tech, augmented reality, genetic engineering, our digital identities, and Brain-Computer Interfaces (BCIs), I would say the merge is already very much underway. Let’s take a closer look at the technology with the potential for the most impact – BCIs. They come in two forms – non-invasive and invasive.

NON-INVASIVE BCIs

Non-invasive transducers merely measure, from outside the skull, the electrical activity generated by various regions of the brain. Mapping the waveform data to known patterns is what makes possible devices like EEG monitors and thought-controlled video game interfaces.
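As a flavor of what "mapping the waveform data to known patterns" means in practice, here is a minimal sketch (not a real BCI pipeline; the sample rate, band edges, and signal are illustrative assumptions) of detecting whether a short EEG window is dominated by the alpha rhythm:

```python
# Toy sketch of non-invasive BCI pattern matching: compare spectral
# power in the alpha band (8-12 Hz, relaxed state) against the beta
# band (13-30 Hz, active concentration). Parameters are assumptions.
import numpy as np

FS = 256  # assumed sample rate in Hz

def band_power(signal, lo, hi, fs=FS):
    """Mean spectral power of `signal` between lo and hi Hz."""
    freqs = np.fft.rfftfreq(len(signal), d=1 / fs)
    spectrum = np.abs(np.fft.rfft(signal)) ** 2
    mask = (freqs >= lo) & (freqs < hi)
    return spectrum[mask].mean()

rng = np.random.default_rng(0)
t = np.arange(0, 2, 1 / FS)  # a 2-second window
# Synthetic "EEG": a 10 Hz alpha wave buried in noise
eeg = np.sin(2 * np.pi * 10 * t) + 0.3 * rng.standard_normal(t.size)

alpha = band_power(eeg, 8, 12)
beta = band_power(eeg, 13, 30)
print("alpha-dominant:", alpha > beta)  # → alpha-dominant: True
```

A real device adds electrode artifact rejection and trained classifiers, but this band-power comparison is the core trick behind simple "concentrate to move the ball" games.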

INVASIVE BCIs

Invasive BCIs, on the other hand, actually connect directly with tissue and nerve endings. Retinal implants, for example, take visual information from glasses or a camera array and feed it into retinal neurons by electrically stimulating them, resulting in some impression of vision. Other examples include Vagus Nerve Stimulators to help treat epilepsy and depression, and Deep Brain Stimulators to treat conditions like Parkinson’s disease.

The most talked-about BCI, though, has to be the Elon Musk creation, Neuralink – a device with thousands of neural connections implanted on the surface of the brain. The initial target applications were primarily for people with paralysis, who could benefit from being able to “think” motion into their prosthetics. Like this monkey on the right. Playing Pong with his mind.

But the future possibilities include the ability to save memories to the cloud, replay them on demand, and accelerate learning. I know Kung Fu.

And, as with any technology, it isn’t hard to imagine some of the potential dark sides to its usage. Just ask the Governator.

INTERCONNECTED MINDS

So if brain patterns can be used to control devices, and vice versa, could two brains be connected together and communicate? In 2018, researchers from several universities collaborated on an experiment in which three subjects had their brains loosely interconnected via EEG-based interfaces as they collectively played a game of Tetris. Two of the subjects told the third, via only their thoughts, which direction to rotate a Tetris piece to fit into a row that the third could not see. Accuracy was 81.25% (versus 50% if random).
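How impressive is 81.25% against a 50% coin flip? A back-of-the-envelope significance check (my own calculation, not from the paper; the trial count of 16 is an assumption made purely because 13/16 = 81.25%):

```python
# One-sided binomial test: probability of getting at least 13 of 16
# binary rotation choices right by pure guessing (p = 0.5).
from math import comb

n, k = 16, 13
p_value = sum(comb(n, i) for i in range(k, n + 1)) / 2 ** n
print(f"P(>= {k}/{n} correct by chance) = {p_value:.4f}")  # → 0.0106
```

Roughly a 1% chance of that accuracy arising from guessing, which is why the result was taken as evidence of genuine brain-to-brain signaling.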

Eventually, we should be able to connect all or a large portion of the minds of humanity to each other and/or to machines, creating a sort of global intelligence.

This is the dream of the transhumanists, the H+ crowd, and the proponents of the so-called technological singularity: evolve your body until it isn’t human anymore. In such a case, would we even need to worry about an AI Apocalypse? Perhaps not, if we were to form a singleton with ASI, encompassing all of the information on the planet. But how likely is that? People on 90th St can’t even get along with people on 91st St. The odds that all of the transhumanists on the planet will merge with the same AI are pretty much zero. Which implies competing superhumans. Just great.

THE IMMORTALITY ILLUSION

In fact, the entire premise of the transhumanists is flawed. The idea is that with a combination of modified genetics and the ability to “upload your consciousness” to the cloud, you can “live long enough to live forever.” Repeating a portion of my blog “Transhumanism and Immortality – 21st Century Snake Oil”: the problem with this mentality is that we are already immortal! And there is a reason why our corporeal bodies die. Simply put, we live our lives in this reality in order to evolve our consciousness, one life instance at a time. If we didn’t die, our consciousness evolution would come to a grinding halt while we spent the rest of eternity playing solitaire. The “Universe,” or “All That There Is,” evolves through our collective individuated consciousnesses, so deciding to be physically immortal could be the end of the evolution of the Universe itself.

Underlying this unfortunate direction of Transhumanism is the belief (and, I can’t stress this enough, it is ONLY that – a belief) that it’s lights out when we die. Follow the train of logic: if this were true, consciousness only emerges from brain function, we have zero free will, and the entire universe is a deterministic machine. So why even bother with Transhumanism if everything is predetermined? It is logically inconsistent. Material Realism, the denial of the duality of mind and body, is a dogmatic religion, and its more vocal adherents (just head on over to JREF to find them) are as ignorant of the evidence and as blind to what true science is as the most bass-ackward fundamentalist religious zealots.

The following diagram demonstrates the inefficiency of artificially extending life, and the extreme inefficiency of uploading consciousness.

In fact, you will not upload. At best you will have an apparent clone in the cloud, one that will diverge from your life path. It will neither have free will nor be self-aware.

When listening to the transhumanists get excited about such things, I am reminded of the words of the great Dr. Ian Malcolm from Jurassic Park…

In summary, this humble blogger is fine with the idea of enhancing human functions with technology, but I have no illusions that merging with AI will stave off an AI apocalypse; nor will it provide you with immortality.

So where does that leave us? We have explored many of the scenarios in which rapidly advancing AI can have a negative impact on humanity. We’ve looked at the possibilities of merging with the machines, and the strange stabilization effect that seems to permeate our reality. In the next and final part of this series, we will take a systems view, put it all together, and see what the future holds.

NEXT: How to Survive an AI Apocalypse – Part 11: Conclusion

How to Survive an AI Apocalypse – Part 9: The Stabilization Effect

PREVIOUS: How to Survive an AI Apocalypse – Part 8: Fighting Back

Here’s where it gets fun.

Or goes off the rails, depending on your point of view.

AI meets Digital Philosophy meets Quantum Mechanics meets UFOs.

This entire blog series has been about surviving an AI-based Apocalypse, a very doomsday kind of event. For some experts, this is all but inevitable. You readers may be coming to a similar conclusion.

But haven’t we heard this before? Doomsday prophecies have been around as long as… Keith Richards. The Norse Ragnarök, the Hindu prophecy of the end of times during the current age of Kaliyuga, the Zoroastrian Renovation, and of course, the Christian Armageddon. An ancient Assyrian tablet dated 2800-2500 BCE tells of corruption and unruly teenagers and prophesies that “earth is in its final days; the world is slowly deteriorating into a corrupt society that will only end with its destruction.” Fast forward to the modern era, where the Industrial Revolution was going to lead to the world’s destruction. We have since had the energy crisis, the population crisis, and the doomsday clock ticking down to nuclear armageddon. None of it ever comes to pass.

Is the AI apocalypse more of the same, or is it frighteningly different in some way? This Part 9 of the series will examine such questions and present a startling conclusion that all may be well.

THE NUCLEAR APOCALYPSE

To get a handle on the likelihood of catastrophic end times, let’s take a deep dive into the specter of a nuclear holocaust.

It’s hard for many of us to appreciate what a frightening time it was in the 1950s, as people built fallout shelters and children regularly executed duck and cover drills in the classrooms.

Often considered the most dangerous point of the Cold War, the 1962 Cuban Missile Crisis was a standoff between the Soviet Union and the United States over the deployment of Soviet missiles in Cuba. At one point the US Navy began dropping depth charges to force a nuclear-armed Soviet submarine to surface. The crew on the sub, having had no radio communication with the outside world, didn’t know whether war was breaking out or not. The captain, Valentin Savitsky, wanted to launch a nuclear weapon, but a unanimous decision among the three top officers was required for launch. Vasily Arkhipov, the second in command, was the sole dissenting vote and even got into an argument with the other two officers. His courage effectively prevented the nuclear war that was likely to result. Thomas S. Blanton, later the director of the US National Security Archive, called Arkhipov “the man who saved the world.”

But that wasn’t the only time we were a hair’s breadth away from the nuclear apocalypse.

On May 23, 1967, US military commanders issued a high alert due to what appeared to be jammed missile-detection radars in Alaska, Greenland, and the UK. Interpreting the jamming as an act of war, they authorized preparations for war, including the deployment of aircraft armed with nuclear weapons. Fortunately, a NORAD solar forecaster identified the real reason for the jammed radars – a massive solar storm.

Then, on the other side of the iron curtain, on 26 September 1983, with international tensions still high after the recent Soviet shootdown of Korean Air Lines Flight 007, a nuclear early-warning system in Moscow reported that 5 ICBMs (intercontinental ballistic missiles) had been launched from the US. Lieutenant Colonel Stanislav Petrov was the duty officer at the command center and suspected a false alarm, so he awaited confirmation before reporting, thereby disobeying Soviet protocol. He later said that had he not been on shift at that time, his colleagues would have reported the missile launch, likely triggering a nuclear war.

In fact, over the years there have been at least 21 nuclear close calls, any of which could easily have led to a nuclear conflagration and the destruction of humanity. The following timeline, courtesy of the Future of Life Institute, shows how many occurred in just the 30-year period from 1958 to 1988.

It kind of makes you wonder what else could go wrong…

END OF SOCIETY PREDICTED

Another modern age apocalyptic fear was driven by the recognition that exponential growth and limited resources are ultimately incompatible. At the time, the world population was growing exponentially and important resources like oil and arable land were being depleted. The Rockefeller Foundation partnered with the OECD (Organization for Economic Cooperation and Development) to form The Club of Rome, a group of current and former heads of state, scientists, economists, and business leaders to discuss the problem and potential solutions. In 1972, with the support of computational modeling from MIT, they issued their first report, The Limits to Growth, which painted a bleak picture of the world’s future. Some of the predictions (and their ultimate outcomes) follow:

Another source for this scare was the book The Population Bomb by Stanford biologist Paul Ehrlich. He and people like Harvard biologist George Wald also made some dire predictions…

There is actually no end to failed environmental apocalyptic predictions – too many to list. But a brief smattering includes:

  • “Unless we are extremely lucky, everyone will disappear in a cloud of blue steam in 20 years.” (New York Times, 1969)
  • “UN official says rising seas to ‘obliterate nations’ by 2000.” (Associated Press, 1989)
  • “Britain will Be Siberian in less than 20 years” (The Guardian, 2004)
  • “Scientist Predicts a New Ice Age by 21st Century” (Boston Globe, 1970)
  • “NASA scientist says we’re toast. In 5-10 years, the arctic will be ice free.” (Associated Press, 2008)

Y2K

And who could forget this apocalyptic gem…

My intent is not to cherry pick the poor predictions and make fun of them. It is simply that when we are swimming in the sea of impending doom, it is really hard to see the way out. And yet, there does always seem to be a way out. 

Sometimes it is mathematical. For example, there was a mathematical determination of when we would run out of oil based on known supply and rate of usage, perhaps factoring in the trend of increasing usage. But what was not factored into the equation were the counter-effects of new reserves being discovered and of improvements in engine efficiency. One could argue that in the latter case the scare achieved its purpose, just as the fear of global warming has resulted in a number of new environmental policies and laws, such as California’s upcoming ban on the sale of new gasoline-powered vehicles in 2035. However, that isn’t always the case; many natural resources, for instance, seem to actually be increasing in supply. I am not necessarily arguing for something like the abiotic oil theory. But at the macro level, doesn’t it sometimes feel like a game of Civilization, where we are given a set of resources, cause-and-effect interrelationships, and the ability to acquire certain skills? In the video game, when we fail on an apocalyptic level, we simply hit the reset button and start over. We can’t do that in real life. Yet doesn’t it seem like the “game makers” always hand us a way out, such as unheard-of new technologies that seem suddenly enabled? And it isn’t always human ingenuity that saves us. Sometimes the right person is on duty at the perfect time, against all odds. Sometimes oil fields magically replenish on their own. Sometimes asteroids strike the most remote place on the planet.

THE STABILIZATION EFFECT

In fact, it seems statistically significant that apocalypses, while seemingly imminent, NEVER really occur. So much so that I decided to model it with a spreadsheet using random number generation (also demonstrating how weak my programming skills have gotten). The intent of the model is to encapsulate the state of humanity on a simple timeline using a parameter called “Mood” for lack of a better term. We start at a point in society that is neither euphoric (the Roaring Twenties) nor disastrous (the Great Depression). As time progresses, events occur that push the Mood in one direction or the other, with a 50/50 chance of either occurring. The assumption in this model is that no matter what the Mood is, it can still get better or worse with equal probability. Each of the following graphs depicts a randomly generated timeline.

On the graph are two thresholds – one of a positive nature, where things seemingly can’t get much better, and one of a negative nature, where all it should take is a nudge to send us down the path to disaster. In any of the situations we’ve discussed in this part of the series, when we are on the brink of apocalypse, the statistical likelihood that the situation will improve at that point should be no better than 50/50. If that is true, running a few simulations shows that an apocalypse is actually fairly likely. Figures 1 and 3 pop over the positive limit and then turn back toward neutral. Figure 2 takes off in the positive direction even after passing the limit. Figure 4 hits and goes through the negative limit several times, implying that if our reality worked this way, apocalyptic situations would be likely.
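For readers who want to reproduce the unassisted model without a spreadsheet, here is a minimal re-creation (parameter names, step counts, and the threshold value are my own choices, not the original spreadsheet's):

```python
# Mood starts neutral and moves up or down with equal probability each
# step. Count how many independent timelines breach the "disaster"
# threshold at least once -- with no stabilizing force, it is common.
import random

STEPS, RUNS = 500, 1000
LIMIT = 25  # positive/negative threshold, in arbitrary Mood units

def breaches_negative_limit(rng):
    mood = 0
    for _ in range(STEPS):
        mood += rng.choice((-1, 1))
        if mood <= -LIMIT:
            return True
    return False

rng = random.Random(42)
hits = sum(breaches_negative_limit(rng) for _ in range(RUNS))
print(f"{hits}/{RUNS} runs hit the negative limit")  # a sizable fraction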

However, what always seems to happen is that when things get that bad, there is a stabilizing force of some sort. I made an adjustment to my reality model by inserting some negative feedback to model this stabilizing effect. For those unfamiliar with the term, complex systems can have positive or negative feedback loops; often both. Negative feedback tends to bring a system back to a stable state. Examples in the body include the maintenance of body temperature and blood sugar levels. If blood sugar gets too high, the pancreas secretes insulin which chemically reduces the level. When it gets too low, the pancreas secretes glucagon which increases the level. In nature, when the temperature gets high, cloud level increases, which provides the negative feedback needed to reduce the temperature. Positive feedback loops also exist in nature. The runaway greenhouse effect is a classic example.

When I applied the negative feedback to the reality model, all curves tended to stay within the positive and negative limits, as shown below.
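The same walk can be given a stabilizing force in one line. In this sketch the feedback rule is my own simple choice (the further Mood drifts from neutral, the more the odds tilt back toward zero); it is meant only to illustrate the effect, not to reproduce my spreadsheet exactly:

```python
# Random walk with negative feedback: the step-up probability is
# 0.5 minus GAIN * mood, so large excursions get pulled back toward
# neutral. Track the worst excursion of each run.
import random

STEPS, LIMIT, GAIN = 500, 25, 0.02

def stabilized_walk(rng):
    mood, worst = 0, 0
    for _ in range(STEPS):
        p_up = 0.5 - GAIN * mood          # feedback term pulls toward zero
        p_up = min(max(p_up, 0.0), 1.0)
        mood += 1 if rng.random() < p_up else -1
        worst = max(worst, abs(mood))
    return worst

rng = random.Random(7)
peaks = [stabilized_walk(rng) for _ in range(200)]
print("runs that ever left the limits:", sum(p >= LIMIT for p in peaks))
```

With this feedback in place the walk behaves like a thermostat: excursions rarely get more than a few units from neutral, and the limits are essentially never reached.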

Doesn’t it feel like this is how our reality works at the most fundamental level? But how likely would it be that every aspect of our reality is subject to negative feedback? And where does that negative feedback come from?

REALITY IS ADAPTIVE

This is how I believe that reality works at its most fundamental level…

Why would that be? Two obvious ideas come to mind.

  1. Natural causes – this would be the viewpoint of reductionist materialist scientists. Heat increase causes ice sheets to melt which creates more water vapor, generating more clouds, reducing the heating effect of the sun. But this does not at all explain why the human condition, and the civilization trends that we’ve discussed in this article, always tend toward neutral.
  2. God – this would be the viewpoint of people whose beliefs are firmly grounded in their religion. God is always intervening to prevent catastrophes. But apparently God doesn’t mind minor catastrophes and plenty of pain and suffering in general. More importantly though, this does not explain dynamic reality generation.

DYNAMIC REALITY GENERATION

Enter Quantum Mechanics.

The double-slit experiment was first performed by Thomas Young back in 1801 as an attempt to determine whether light was composed of particles or waves. A beam of light was projected at a screen with two vertical slits. If light was composed of particles, only two bands of light should appear on the screen behind the one with the slits. If wave-based, an interference pattern should result. The wave theory was initially confirmed experimentally, but it was later called into question by Einstein and others.

The experiment was later done with particles, like electrons, and it was naturally assumed that these hard, fixed particles would generate the expected two-band pattern shown on the right.

However, what resulted was an interference pattern, implying that the electrons were actually waves. Thinking that perhaps the electrons were interfering with each other, experimenters modified the setup to shoot one electron at a time. And still the interference pattern slowly built up on the back screen.
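That one-at-a-time buildup is easy to visualize with a toy Monte Carlo (purely illustrative units; the cos² density is the idealized two-slit pattern, ignoring the single-slit envelope): each "electron" lands at a random position drawn from the interference probability density, and the fringes emerge only in the accumulated histogram.

```python
# Each simulated electron lands at one random screen position sampled
# from the two-slit probability density ~ cos^2(pi * x). No single
# landing shows fringes; 20,000 accumulated landings do.
import numpy as np

rng = np.random.default_rng(1)
x = np.linspace(-5, 5, 1001)          # screen position, arbitrary units
density = np.cos(np.pi * x) ** 2      # idealized interference pattern
density /= density.sum()              # normalize to a probability

hits = rng.choice(x, size=20_000, p=density)   # one electron at a time
counts, edges = np.histogram(hits, bins=50, range=(-5, 5))

# Bright fringes collect far more hits than dark ones
print("fringe contrast visible:", counts.max() > 5 * counts.min())
```

Histogram bins near the fringe maxima end up with hundreds of hits while the dark-fringe bins stay nearly empty, exactly the slow buildup the experimenters observed.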

To make sense of the interference pattern, experimenters wondered if they could determine which slit each electron went through, so they put a detector before the double slit. Et voilà, the interference pattern disappeared! It was as if the conscious act of observation converted the electrons from waves to particles. The common interpretation was that the electrons exist only as a probability function, and the observation actually snaps them into existence.

It is very much like the old adage that a tree falling in the woods makes no sound unless someone is there to hear it. Of course, this idea of putting consciousness as a parameter in the equations of physics generated no end of consternation for the deterministic materialists. They have spent the last twenty years designing experiments to disprove this “Observer Effect,” to no avail. Even when the “which way” detector is placed after the double slit, the interference pattern disappears. The only tenable conclusion is that reality does not exist in an objective manner, and its instantiation depends on something. But what?

The diagram below helps us visualize the possibilities. When does reality come into existence?

Clearly it is not at points 1, 2, or 3, because it isn’t until the “which way” detector is installed that we see the shift in reality. So is it due to the detector itself, or to the conscious observer reading the results of the detector? One could imagine experiments where the results of the “which way” detector are hidden from the conscious observer for an arbitrary period of time – printed out, sealed in an envelope without looking, and left on a shelf for a day while the interference pattern persists – until someone opens the envelope and the interference pattern suddenly disappears. I have always suspected that the answer will be that reality comes into existence at point 4. It just seems logical that a reality-generating universe would be efficient. Recent experiments bear this out.

I believe this says something incredibly fundamental about the nature of our reality. But what would efficiency have to do with the nature of reality? Let’s explore a little further – what kinds of efficiencies would this lead to?

POP QUIZ! – is reality analog or digital? There is actually no consensus on this question, and many papers have been written in support of either point of view. But if our reality is created on some sort of underlying construct, there is only one answer – it has to be digital. Here’s why…

How much information would it take to fully describe the cup of coffee on the right?

In an analog reality, it would take an infinite amount of information.

In a digital reality, fully modeled at the Planck resolution (what some people think is the deepest possible digital resolution), it would require 4×10^71 bits/second, give or take. It’s a huge number for sure, but infinitely less than the analog case.

But wait a minute. Why would we need that level of information to describe a simple cup of coffee? So let’s ask a different question: how much information is needed for a subjective human experience of that cup of coffee – the smell, the taste, the visual experience? You don’t really need to know the position and momentum vector of every subatomic particle in each molecule of coffee in that cup; all you need is what it takes to experience it. The answer is roughly 1×10^9 bits/second. In other words, there could be as much as a factor of 4×10^62 of compression involved in generating a subjective experience. We don’t really need to know where each electron is in the coffee, just as you don’t need to know which slit each electron goes through in the double-slit experiment. That is, UNTIL YOU MEASURE IT!
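The arithmetic behind that compression factor, taking the two numbers above at face value:

```python
# Ratio of "full Planck-resolution model" to "subjective experience"
# information rates, using the article's two figures.
planck_bits = 4e71      # bits/s to model the cup at Planck resolution
experience_bits = 1e9   # bits/s for the subjective experience

compression = planck_bits / experience_bits
print(f"compression factor: {compression:.0e}")  # → 4e+62
```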

So, the baffling results of the double slit experiments actually make complete sense if reality is:

  • Digital
  • Compressed
  • Dynamically generated to meet the needs of the inhabitants of that reality

Sounds computational doesn’t it? In fact, if reality were a computational system, it would make sense for it to need to have efficiencies at this level. 

There are such systems – one well known example is a video game called No Man’s Sky that dynamically generates its universe as the user plays the game. Art inadvertently imitating life?

Earlier in this article I suggested that the concept of God could explain the stabilization effect of our reality. If we redefine “God” to mean “All That There Is” (of which, our apparent physical reality is only a part), reality becomes a “learning lab” that needs to be stable for our consciousnesses to interact virtually.

I wrote about this and proposed this model back in 2007 in my first book “The Universe-Solved!” In 2021, an impressive set of physicists and technologists came up with the same theory, which they called “The Autodidactic Universe.” They collaborated to explore methods, structures, and topologies by which the universe might be learning and modifying its laws according to what is needed, including neural nets and Restricted Boltzmann Machines. This provides an entirely different way of looking at any potential apocalypse. And it makes you wonder…

UFO INTERVENTION

In 2021, over one hundred military personnel, including Retired Air Force Captain Robert Salas, Retired First Lieutenant Robert Jacobs, and Retired Captain David Schindele met at the National Press Club in Washington, DC to present historical case evidence that UFOs have been involved with disarming nuclear missiles. A few examples…

  • Malmstrom Air Force Base, Montana, 1967 – “a large glowing, pulsating red oval-shaped object hovering over the front gate,” as alarms went off showing that nearly all 10 missiles in the control room display had been disabled.
  • Minot Air Force Base, North Dakota, 1966 – Eight airmen said that 10 missiles at silos in the vicinity all went down with guidance and control malfunctions when an 80- to 100-foot wide flying object with bright flashing lights had hovered over the site.
  • Vandenberg Air Force Base, California, 1964 – “It went around the top of the warhead, fired a beam of light down on the top of the warhead.” After circling, it “then flew out the frame the same way it had come in.”
  • Ukraine, 1982 – launch countdowns were activated for 15 seconds while a disc-shaped UFO hovered above the base, according to declassified KGB documents

As the History Channel reported, areas of high UFO activity are correlated with nuclear and military facilities worldwide.

Perhaps UFOs are an artifact of our physical-reality learning lab, under the control of some conscious entity or possibly even an autonomous (AI) bot in the system – part of the “autodidactic” programming mechanisms that maintain stability in our programmed reality. Other mechanisms could involve adjusting the availability of certain resources or even nudging consciousnesses toward solutions to problems. If this model of reality is accurate, we may find that we have little to worry about regarding an AI apocalypse; instead, it will just be another force that contributes to our evolution.

To that end, there is also a sector of thinkers who recommend a different approach. Rather than fight the AI progression, or simply let the chips fall, we should welcome our AI overlords and merge with them. That scenario will be explored in Part 10 of this series.

NEXT: How to Survive an AI Apocalypse – Part 10: If You Can’t Beat ’em, Join ’em

How to Survive an AI Apocalypse – Part 7: Elimination

PREVIOUS: How to Survive an AI Apocalypse – Part 6: Cultural Demise

At this point, we’ve covered a plethora (my favorite word from high school) of AI-Run-Amok scenarios – Enslavement, Job Elimination, Cultural Demise, Nanotech Weaponization… it’s been a fun ride, but we are only just getting to the pièce de résistance: The Elimination of Humanity.

Now, lest you think this is just a lot of Hollywood hype or doomsayer nonsense, let me remind you that no lesser personages than Stephen Hawking and Elon Musk, and no lesser authorities than OpenAI CEO Sam Altman and Oxford philosopher Nick Bostrom, have sounded the alarm: “Risk of Extinction.” Bostrom’s scenario from Part 3 of this series:

But, can’t we just program in Ethics?

Sounds good, in principle. 

There are two types of goals that an AI will respond to: Terminal and Instrumental. Terminal (sometimes called “final” or “absolute”) goals are the ultimate objectives programmed into the AI, such as “solve the Riemann hypothesis” or “build a million paper clips.” Ethical objectives would be supplementary terminal goals that we might give to AIs to prevent Elimination, or even some less catastrophic scenario.

Instrumental goals are intermediary objectives that might be needed to fulfill the terminal goal, such as “learn to impersonate a human” or “acquire financial resources.” Intelligent beings, both human and AI, will naturally develop and pursue instrumental goals to achieve their objectives, a behavior known as Instrumental Convergence. The catch, however, is that the instrumental goals are unpredictable and often seemingly uncorrelated with the terminal goal. This is part of the reason why AIs are so good at specification gaming. It is also the main reason that people like Bostrom fear ASI. He developed the “paperclip maximizer” thought experiment. What follows is his initial thought experiment, plus my own spin on what might happen as we attempt to program in ethics by setting some supplementary ethical terminal goals…

We attempt to program in ethics…

But the AI doesn’t know how to make paperclips without some harm to a human. Theoretically, even manufacturing a single paperclip could have a negative impact on humanity. It’s only a matter of how much. So we revise the first terminal goal…
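That dead end can be captured in a toy calculation (entirely my own illustration, not Bostrom's formalism; the actions and numbers are invented): every available action yields some paperclips and some unavoidable harm, so a pure maximizer picks the catastrophe, while a hard "zero harm" constraint leaves nothing to do at all.

```python
# Toy goal-specification problem: each candidate action produces
# paperclips and some amount of harm to humans. Compare a pure
# terminal-goal maximizer against a strict "no harm" ethics rule.
actions = {
    "strip-mine the biosphere": {"clips": 10**9, "harm": 10**6},
    "run one normal factory":   {"clips": 10**6, "harm": 10},
    "do nothing":               {"clips": 0,     "harm": 0},
}

# Pure maximizer: ignore harm, maximize clips -> picks the catastrophe
maximizer = max(actions, key=lambda a: actions[a]["clips"])

# Strict ethics rule: zero harm AND at least one paperclip -> empty set
allowed = [a for a, v in actions.items()
           if v["harm"] == 0 and v["clips"] > 0]

print(maximizer)
print(allowed or "no action makes a paperclip with zero harm")
```

The interesting failure is the second one: the constraint is not wrong, it is unsatisfiable, which is exactly why the terminal goal has to be revised again and again in the scenario above.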

Must have been how it interpreted the word “clearly.” You’re starting to see the problem. Let’s try to refine the terminal goals one more time…

In fact, there are many things that can go wrong and lead an ASI to the nightmare scenario…

  • The ASI might inadvertently overwrite its own rule set during a reboot due to a bug in the system, and destroy humanity
  • Or, a competitor ignores the ethics ruleset in order to make paperclips faster, thereby destroying humanity
  • Or, a hacker breaks in and messes with the ethics programming, resulting in the destruction of humanity
  • Or, the ASI ingests some badly labelled data, leading to “model drift”, and destroying humanity

I’m sure you can think of many more. But, the huge existential problem is…

YOU ONLY GET ONE CHANCE! 

Mess up the terminal goals and, given the AI’s natural neural-network-based unpredictability, it could be lights out. Am I oversimplifying? Perhaps, but simply considering the possibilities should raise the alarm. Let’s pull Albert in to weigh in on the question “Can’t we just program in ethics?”

And for completeness, let’s add Total Elimination on our AI-Run-Amok chart…

The severity of that scenario is obviously the max. And, unfortunately, with what we know about instrumental convergence, unpredictability, and specification gaming, it is difficult not to see that apocalyptic scenario as quite likely. Also note that in the hands of a weapon-seeking AI, foglets become much more dangerous than they were under human control.

Now, before you start selling off your assets, packing your bug out bag, and moving to Tristan Da Cunha, please read my next blog installment on how we can fight back through various mitigation strategies.

NEXT: How to Survive an AI Apocalypse – Part 8: Fighting Back

Disproving the Claim that the LHC Disproves the Existence of Ghosts

Recent articles in dozens of online magazines shout things like: “The LHC Disproves the Existence of Ghosts and the Paranormal.”

To which I respond: LOLOLOLOLOL

There are so many things wrong with this backwards scientific thinking, I almost don’t know where to start.  But here are a few…

1. The word “disproves” doesn’t belong here. It is unscientific at best. Maybe use “evidence against one possible explanation for ghosts” – that I can even begin to appreciate. But if I can demonstrate even one potential mechanism for the paranormal that the LHC couldn’t detect, you cannot use the word “disprove.” And here is one potential mechanism – an unknown force that the LHC can’t explore, because its experiments are designed to measure interactions only via the four forces physicists are aware of.

The smoking gun is Brian Cox’s statement “If we want some sort of pattern that carries information about our living cells to persist then we must specify precisely what medium carries that pattern and how it interacts with the matter particles out of which our bodies are made. We must, in other words, invent an extension to the Standard Model of Particle Physics that has escaped detection at the Large Hadron Collider. That’s almost inconceivable at the energy scales typical of the particle interactions in our bodies.” So, based on that statement, here are a few more problems…

2. “almost inconceivable” is logically inconsistent with the term “disproves.”

3. “If we want some sort of pattern that carries information about our living cells to persist…” is an invalid assumption. We do not need information about our cells to persist in a traditional physical medium for paranormal effects to have a way to propagate. They can propagate by a non-traditional (unknown) medium, such as an information storage mechanism operating outside of our classically observable means. Imagine telling a couple of scientists just 200 years ago about how people can communicate instantaneously via radio waves. Their response would be “no, that is impossible because our greatest measurement equipment has not revealed any mechanism that allows information to be transmitted in that manner.” Isn’t that the same thing Brian Cox is saying?

4. The underlying assumption is that we live in a materialist reality. Aside from the fact that Quantum Mechanics experiments have disproven this (and yes, I am comfortable using that word), a REAL scientist should allow for the possibility that consciousness is independent of grey matter and design experiments to support or invalidate such hypotheses. One clear possibility is the simulation argument: out-of-band signaling is an obvious and easy mechanism for paranormal effects.  Unfortunately, the REAL scientists (such as Anton Zeilinger) are not the ones who get most of the press.

5. “That’s almost inconceivable at the energy scales typical of the particle interactions in our bodies” is also bad logic. It assumes that we fully understand the energy scales typical of the particle interactions in our bodies. If scientific history has shown us anything, it is that there is more that we don’t understand than there is that we do.


Pathological Skepticism

“All great truths began as blasphemies” – George Bernard Shaw

  • In the early 1800s, the scientific community viewed reports of rocks falling from the sky as “pseudoscience” and those who reported them as “crackpots,” only because they didn’t fit the prevailing view of the universe. Today, of course, we recognize that such rocks could be meteorites, and those reports are properly investigated.
  • In 1827, Georg Ohm’s initial publication of what became “Ohm’s Law” met with ridicule and dismissal, and was called “a web of naked fantasies.” The German Minister of Education proclaimed that “a professor who preached such heresies was unworthy to teach science.” Twenty years passed before scientists began to recognize its importance.
  • Louis Pasteur’s germ theory was called “ridiculous fiction” by Pierre Pachet, Professor of Physiology at Toulouse, in 1872.
  • Spanish researcher Marcelino de Sautuola discovered cave art in Altamira cave (northern Spain), which he recognized as Stone Age, and published a paper about it in 1880.  His integrity was violently attacked by the archaeological community, and he died disillusioned and broken.  He was vindicated 10 years after his death.
  • Lord Haldane, the Minister of War in Britain, said in 1907 that “the aeroplane will never fly.”  Ironically, this was four years after the Wright Brothers made their first successful flight at Kitty Hawk, North Carolina.  After Kitty Hawk, the Wrights flew in open fields next to a busy rail line in Dayton, Ohio, for almost an entire year. US authorities refused to come to the demos, while Scientific American published stories about “The Lying Brothers.”
  • In 1964, physicist George Zweig proposed the existence of quarks.  As a result of this theory, he was rejected for a position at a major university and considered a “charlatan.”  Today, of course, quarks are an accepted part of the Standard Model of particle physics.

Note that these aren’t just passive disagreements.  The skeptics use active and angry language, with words like “charlatan,” “ridiculous,” “lying,” “crackpot,” and “pseudoscience.”

This is partly due to a natural psychological effect, known as “fear of the unknown” or “fear of change.”  Psychologists who study human behavior have more academic-sounding names for it, such as the “Mere Exposure Effect,” the “Familiarity Principle,” or “Neophobia” (something that might have served Agent Smith well).  Ultimately, this may be an artifact of evolution.  Hunter-gatherers did not pass on their genes if they had a habit of eating weird berries, venturing too close to the saber-toothed cats, or engaging in other unconventional activities.  But we are no longer hunter-gatherers.  For the most part, we shouldn’t fear the unknown.  We should feel empowered to challenge assumptions.  The scientific method will weed out the bad ideas naturally.

But, have you also noticed how the agitation ratchets up the more you enter the realm of the “expert?”

“The expert knows more and more about less and less until he knows everything about nothing.” – Mahatma Gandhi

This is because the expert may have a lot to lose by straying too far from the status quo.  Their research funding, tenure, jobs, and reputations are all at stake.  This is unfortunate, because it feeds this unhealthy behavior.

So I thought I would do my part to remind experts and non-experts alike that breakthroughs only occur when we challenge conventional thinking, and we shouldn’t be afraid of them.

The world is full of scared “experts,” but nobody will ever hear of them.  People will hear about the brave ones, who didn’t fear to challenge the status quo – people like Copernicus, Einstein, Georg Ohm, Steve Jobs, and Elon Musk.

And it isn’t like we are so enlightened today that such pathological skepticism no longer occurs.

Remember Stanley Pons and Martin Fleischmann?  Respected electrochemists, ridiculed out of their jobs and their country by skeptics.  Even “experts” violently contradicted each other:

  • “It’s pathological science,” said physicist Douglas Morrison, formerly of CERN. “The results are impossible.”
  • “There’s very strong evidence that low-energy nuclear reactions do occur,” said George Miley (who received the Edward Teller Medal for research in hot fusion). “Numerous experiments have shown definitive results – as do my own.”

Some long-held assumptions are being overturned as we speak.  Like LENR (Low Energy Nuclear Reactions), the new, less provocative name for cold fusion.

And maybe the speed of light as an ultimate speed limit.

These are exciting times for science and technology.  Let’s stay open minded enough to keep them moving.

DNA: Evidence of Intelligent Design or Byproduct of Evolution?

DNA is a self-replicating nucleic acid that supposedly encodes the instructions for building and maintaining the cells of an organism.  With an ordered grouping of over a billion chemical base pairs, identical in every cell of the organism, the unique DNA of a particular individual looks rather like statements in a programming language.  This concept is not lost on Dr. Stephen Meyer (Ph.D., history and philosophy of science, Cambridge University), who posits that the source of information must be intelligent, and therefore DNA, as information, is evidence of Intelligent Design.  He argues that all hypotheses that account for the development of this digital code, such as self-organization and RNA-first, have failed.

In a well-publicized debate with Dr. Peter Atkins (Ph.D., theoretical chemistry, University of Leicester), a well-known atheist and secular humanist, Atkins countered that information can come from natural mechanisms.  Sadly, Atkins resorted to insults and name calling, so the debate was somewhat tainted, and he never got a chance to present his main argument in a methodical way because he let his anger get the best of him.  But it raised some very interesting questions, which I don’t think either side of the argument has really gotten to the bottom of.

ID’ers trot out the Second Law of Thermodynamics and state that the fact that simple molecules can’t self-replicate without violating that Law proves Intelligent Design.  But it doesn’t really.  The Second Law applies to the whole system, weighing the many instances of increased disorder against the fewer instances of increased order.  Net net, disorder TENDS to increase, but that doesn’t mean there can’t be isolated examples of increased order in the universe.  That seems to leave the door open to the possibility that one such example might be the creation of self-replicating molecules.

Another point of contention is the nature of information, such as DNA.  Meyer is wrong if he is making a blanket assertion that information can only come from intelligence.  I could argue that, given a long enough period of time, if you leave a typewriter outdoors, hailstones will ultimately hit the keys in an order that creates recognizable poetry.  So the question boils down to this – were there enough time and the proper conditions for evolutionary processes to create the self-replicating DNA molecule, from non-self-replicating molecules, necessary for creating the mechanism for life?

The math doesn’t look good for the atheists.  Dr. Robert L. Piccioni (Ph.D., Physics, Stanford) says that the odds of 3 billion randomly arranged base pairs matching human DNA are about the same as drawing the ace of spades one billion times in a row from randomly shuffled decks of cards.  Harold Morowitz, a renowned physicist from Yale University and author of Origin of Cellular Life (1993), declared that the odds of any kind of spontaneous generation of life from a combination of the standard life building blocks are one chance in 10^100,000,000,000 (you read that right – that’s 1 followed by 100,000,000,000 zeros).  Famed British Royal Astronomer Sir Fred Hoyle put such odds at one chance in 10^40,000, or roughly “the same as the probability that a tornado sweeping through a junkyard could assemble a 747.”  By the way, scientists generally set their “Impossibility Standard” at one chance in 10^50 (1 in 100,000 billion, billion, billion, billion, billion).  So, the likelihood that life formed via combinatorial chemical evolution (the only theory that scientists really have) is, for all intents and purposes, zero.
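Numbers this large are hard to compare by eye, so here is a minimal back-of-envelope sketch – my own arithmetic, not taken from the cited authors – that puts the quoted odds on a common log10 scale (a probability of 1 in 10^N becomes simply -N):

```python
import math

# Piccioni's analogy: the ace of spades a billion times in a row,
# each draw from a freshly shuffled 52-card deck.
ace_run_log10 = 1_000_000_000 * math.log10(1 / 52)   # about -1.7 billion

# Versus 3 billion base pairs (4 possible letters each) landing in one
# exact order by chance.
dna_log10 = 3_000_000_000 * math.log10(1 / 4)        # about -1.8 billion

# The other figures quoted in the text, on the same scale.
morowitz_log10 = -100_000_000_000    # 1 in 10^100,000,000,000
hoyle_log10 = -40_000                # 1 in 10^40,000
impossibility_log10 = -50            # the conventional "impossibility" bar

# The card analogy and the DNA odds land within ~5% of each other on this
# scale, and every figure sits absurdly far below the 10^-50 threshold.
print(f"ace run: 10^{ace_run_log10:.4g}")
print(f"DNA:     10^{dna_log10:.4g}")
```

Whatever one makes of the underlying assumptions, the sketch shows why Piccioni’s card analogy is a fair stand-in for the DNA figure: both work out to roughly 10 to the minus 1.7–1.8 billion.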

Atkins, Dawkins, and other secular humanists insist that materialism and naturalism are presupposed and that there is no argument for introducing the logic of intelligence into science.  That sounds pretty closed-minded to me, and it closes the door a priori on certain avenues of inquiry.  Imagine if that mentality were applied to string theory, a theory with no experimental evidence to start with.  One has to wonder why science is so illogically selective about the disciplines it accepts into its closed little world.

My interest in this goes beyond this specific debate.  I have a hobby of collecting evidence that our reality is programmed.  I’m not sure whether DNA has a place in that collection yet.  It will definitely need a little more thought.


Gravity is Strange – Unless you understand Programmed Reality

Physicists tell us that gravity is one of the four fundamental forces of nature.  And yet it behaves quite differently than the other three.  A New Scientist article breaks down the oddities, a few of which are reproduced here:

– Gravity only pulls.  It doesn’t appear to have an opposing effect, like other forces do.  Notwithstanding the possibility that dark energy is an example of “opposite polarity” gravity, possibly due to unseen dimensions, there appears to be no solid evidence of it as there is with all other forces.

– The strengths of the other forces are comparable in magnitude, while gravity checks in at 40 orders of magnitude weaker.

– The fine-tuned universe, a favorite topic of this site, includes some amazing gravity-based characteristics.  Early-universe expansion and gravitational strength had to balance to within 1 part in 1,000,000,000,000,000 in order for life to form.
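The “40 orders of magnitude” oddity above is easy to sanity-check. Here is a minimal sketch – my own choice of example, using rounded textbook constants – comparing the gravitational and electrostatic attraction between an electron and a proton:

```python
# Rounded physical constants (CODATA-style values).
G = 6.674e-11      # gravitational constant, m^3 kg^-1 s^-2
k = 8.988e9        # Coulomb constant, N m^2 C^-2
m_e = 9.109e-31    # electron mass, kg
m_p = 1.673e-27    # proton mass, kg
e = 1.602e-19      # elementary charge, C

# Both forces fall off as 1/r^2, so the ratio is independent of distance:
#   F_grav / F_elec = G * m_e * m_p / (k * e^2)
ratio = (G * m_e * m_p) / (k * e**2)
print(f"F_gravity / F_electric = {ratio:.2e}")   # roughly 4.4e-40
```

For this particular pair of particles the ratio comes out near 10^-40 – gravity really is about 40 orders of magnitude weaker than electromagnetism at the particle scale.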

The Anthropic Principle explains all this via a combination of the existence of zillions (an uncountably large number) of parallel universes with the idea that we can only exist in the one where all the variables line up perfectly for matter and life to form.  But that seems to me to be a pretty convoluted argument, with a few embedded leaps of faith that make most religions look highly logical in comparison.

Then there is the Programmed Reality theory, which, as usual, offers a perfect explanation without the need for the hand-waving Anthropic Principle and the “Many Worlds” interpretation of quantum mechanics.  Gravity is not like the other forces, so let’s not keep trying to “force” it to be (pardon the pun).  Instead, it is there to keep us grounded on the planet on which we play out our reality, offering just enough “pull” to keep every fly ball from flying out of the stadium (regardless of the illegal substance abuse of the hitter) and to make kite flying a real possibility, while at the same time being weak enough to allow basketball players to dunk, planes to fly, and a large number of other enriching activities.  Our scientists will continue to investigate the nature of gravity via increasingly complex projects like the LHC, unpeeling the layers of complexity that the programmers put in place to keep scientific endeavor, research, and employment moving forward.


Is Quantum Mechanics Deterministic after all?

Could Albert Einstein finally be vindicated?  His famous comment “God does not play dice” (actually, the correct and extended version, from a letter to Max Born in 1926, was “Quantum mechanics is certainly imposing. But an inner voice tells me that it is not yet the real thing. The theory says a lot, but does not really bring us any closer to the secret of the ‘old one’. I, at any rate, am convinced that He does not throw dice”) referred to his belief that physical reality was deterministic at its core, and that the “hidden variables” describing that deterministic reality were masked by the probabilistic nature of Quantum Mechanics.  Most physicists have come to accept that quantum reality is probabilistic.  But there has been a silent minority who maintained faith in the hidden-variable idea.  A recent article in New Scientist discusses new research that may show that “quantum reality isn’t random, it just looks that way.”  Hoorah for determinism.

IMHO, I have always expected as much.  A random number generator appears random, but is fully deterministic.  Aren’t Boltzmann’s laws of entropy probabilistic on the surface but deterministic deep down?  We are most certainly not through uncovering the mysteries of subatomic particles.  The hidden variables may very well ultimately explain anomalies like entanglement.  And they may very well be the result of a programmed reality!
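The point about random number generators can be made concrete with a toy example – a classic linear congruential generator (this is my own minimal sketch, not drawn from the article). Its output stream looks random to casual inspection, yet the same seed always reproduces the same sequence:

```python
def lcg(seed, n, a=1664525, c=1013904223, m=2**32):
    """Return n pseudo-random values in [0, 1) from an LCG recurrence:
    x_{k+1} = (a * x_k + c) mod m."""
    x = seed
    out = []
    for _ in range(n):
        x = (a * x + c) % m
        out.append(x / m)
    return out

# Identical seed -> identical "random" stream: apparently random,
# fully deterministic underneath.
run1 = lcg(seed=42, n=5)
run2 = lcg(seed=42, n=5)
assert run1 == run2
```

A hidden-variable theory of quantum mechanics would be analogous: the seed and recurrence play the role of the hidden variables, invisible to anyone who only sees the output stream.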