How to Survive an AI Apocalypse – Part 3: How Real is the Hype?

PREVIOUS: How to Survive an AI Apocalypse – Part 2: Understanding the Enemy

In Part 2 of this series on Surviving an AI Apocalypse, we examined the landscape of AI and attempted to make sense of the acronym jungle. In this part, to continue developing our understanding of the beast, we will examine some of the publicity and propaganda that pervade the media these days and consider how plausible those claims really are. Once we have examined the logical arguments, Dr. Einstein will be our arbiter of truth. Let’s start with the metaphysical descriptors. Could an AI ever be sentient? Conscious? Self-aware? Could it have free will?

CAN AN AI HAVE CONSCIOUSNESS OR FREE WILL?

Scientists and philosophers can’t even agree on the definition of consciousness or on whether humans have free will, so how could they possibly come to a conclusion about AIs? Fortunately, yours truly has strong opinions on the matter.

According to philosophical materialists, reality is ultimately deterministic. Therefore, nothing has free will. To these folks, there actually isn’t any point to their professions, since everything is predetermined. Why run an experiment? Why theorize? What will be has already been determined. This superdeterminism is a last-ditch effort for materialists to cling to the idea of an objective reality, because Bell’s Theorem, and all of the experiments done since, have proven one of two things: either (1) there is no objective reality, or (2) there is no free will. I gave (what I think are) strong arguments for the existence of free will in all conscious entities in both of my books, The Universe-Solved! and Digital Consciousness. And the support for our reality being virtual and our consciousness being separate from the brain is monumental: the Observer Effect, near-death experiences, out-of-body experiences, simulation arguments, Hoffman’s evolutionary argument against reality, xenoglossy, the placebo effect… I could go on.

To many, consciousness, or the state of being aware of your own existence, is simply a matter of complexity. Following this logic, everything has some level of consciousness, including the coaster under your coffee mug. Also known as panpsychism, it’s actually a reasonable idea. Why would there be some arbitrary threshold of complexity that, once crossed, suddenly makes a previously unconscious entity conscious? It makes much more sense that consciousness is a continuum, or a spectrum, not unlike light, or intelligence. As such, an AI could certainly be considered conscious.

But what do we really mean when we say “conscious?” What we don’t mean is that we simply have sensors that tell some processing system that something is happening inside or outside of us. What we mean is deeper than that – life, a soul, an ability to be self-aware because we want to be, and the free will to make that choice. AI will never achieve that because it is ultimately deterministic. Some may argue that neural nets are not deterministic, but that is just semantics. To be sure, they are not predictable, but only because the system is too complex and adaptive to analyze completely at any exact point in time. Determinism means no free will.
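A minimal sketch of that point (assuming nothing beyond NumPy; the network sizes and the seed are arbitrary): a toy neural net, given the exact same weights and inputs, produces the exact same output every single time. Unpredictable in practice, perhaps, but fully determined.

```python
import numpy as np

def tiny_net(x, w1, w2):
    # A two-layer feedforward network: its output is fully determined
    # by its weights and its input -- nothing else.
    h = np.tanh(w1 @ x)      # hidden layer
    return np.tanh(w2 @ h)   # output layer

rng = np.random.default_rng(42)          # fixed seed -> reproducible "brain state"
w1 = rng.standard_normal((8, 4))
w2 = rng.standard_normal((2, 8))
x = np.array([0.1, -0.3, 0.7, 0.2])

out1 = tiny_net(x, w1, w2)
out2 = tiny_net(x, w1, w2)
print(np.array_equal(out1, out2))        # True: same state + same input = same output
```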

The point is that it really doesn’t matter whether or not you believe that AIs develop “free will” or some breakthrough level of consciousness – what matters is that they are not predictable. Do you agree, Albert?

IS AGI RIGHT AROUND THE CORNER?

This is probably the most contentious question out there. Let’s see how well the predictions have held up over the years.

In 1956, ten of the leading experts in machine intelligence got together for an eight-week project at Dartmouth College to discuss computational systems, natural language processing, neural networks, and other related topics. They coined the term Artificial Intelligence, and so this event is generally considered the birth of the idea. They also made some predictions about when AGI, Artificial General Intelligence, would occur. Their prediction was “20 years away,” a view that has had a lot of staying power – until only recently.

Historical predictions for AGI:

That’s right, in early 2023, tech entrepreneur and developer Siqi Chen claimed that GPT-5 “will” achieve Artificial General Intelligence (AGI) by the end of 2023. It didn’t happen, and it won’t this year either. Much of this hype was due to the dramatic ChatGPT performance that came seemingly out of nowhere in early 2023. As with all things hyped, though, claims are expected to be greatly exaggerated. The ability of an AI to “pass the Turing test” (which is what most people are thinking of) does not equate to AGI – it doesn’t even mean intelligence, in the sense of what humans have. Much more about this later. All of that said, AGI, in the strict sense of being able to do all intelligent tasks that a human can, is probably going to happen soon. Maybe not this year, but maybe within five. What say you, Albert?

IS AI GOING TO BECOME BILLIONS OF TIMES SMARTER THAN HUMANS?

Well, mainstream media certainly seems to think so. Because they confuse intelligence with things that have nothing to do with intelligence.

If processing speed is what makes intelligence, then your smart toaster is far brighter than you are. Ditto for recall accuracy as an intelligence metric. We only retain half of what we learned yesterday, and what remains degrades exponentially over time. Not so with the toaster. If storage is the metric, cloud storage giant Amazon Web Services would have to be fifty times smarter than we are.
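That recall comparison can be put into rough numbers. Here is a back-of-the-envelope sketch, assuming (purely for illustration) the classic exponential forgetting curve with a one-day half-life, versus a machine’s flat, perfect recall:

```python
# Illustrative assumption: human retention halves every day (exponential forgetting curve);
# a stored record, by contrast, is recalled perfectly no matter how much time passes.
def human_retention(days, half_life_days=1.0):
    return 0.5 ** (days / half_life_days)

def machine_retention(days):
    return 1.0

for d in [0, 1, 2, 7, 30]:
    print(f"day {d:2d}: human ~{human_retention(d):.3f}, machine {machine_retention(d):.1f}")
```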

However, the following word cloud captures the complexity behind the kind of intelligence that we have.

That said, processing speed is not to be underestimated, as it is at the root of all that can go wrong. The faster the system, the sooner its actions can go critical. In his book Superintelligence, Nick Bostrom references the potential “superpowers” that could be attained by an AGI that is fast enough to become an ASI, an Artificial Superintelligence. Intelligence amplification, for example, is where an AI can bootstrap its own intelligence. As it learns to improve its ability to learn, it will develop exponentially. In the movie Her, the Operating System, Samantha, evolved so quickly that it got bored being with one person and overnight began to interact with another 8,316 people.
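A toy numerical sketch of that bootstrapping loop (every number here is an illustrative assumption, not a prediction): if each improvement cycle also improves the system’s ability to improve, capability runs away after just a handful of cycles.

```python
# Toy model of recursive self-improvement; all values are illustrative assumptions.
capability = 1.0          # arbitrary units of "intelligence"
improvement_rate = 0.10   # fractional gain per cycle

for cycle in range(1, 11):
    capability *= (1 + improvement_rate)   # the system improves itself...
    improvement_rate *= 1.5                # ...and improves how well it improves
    print(f"cycle {cycle:2d}: capability {capability:9.2f}  (rate {improvement_rate:.2f})")
```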

Another superpower is the ability to think far ahead and strategize. Humans can think ahead 10-15 moves in a game of chess, but not in an exhaustive, brute-force manner; rather, by following a few single-threaded sequences. Early chess-playing AIs played differently, doing brute-force calculations of all possible sequences 3 or 4 moves ahead and picking the one that led to the most favorable outcome. Nowadays, AI systems designed for chess can think ahead 20 moves, due mostly to speed improvements in the underlying systems. As this progresses, strategizing will be a skill that AIs can do better than humans.
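To make “brute-force lookahead” concrete, here is a minimal, generic minimax sketch (not any particular engine’s code; the move, apply, and evaluation functions are placeholders you would supply for a real game): it enumerates every line of play to a fixed depth and picks the move whose worst-case outcome is best.

```python
def minimax(state, depth, maximizing, moves_fn, apply_fn, eval_fn):
    """Exhaustively search every move sequence `depth` plies deep and
    return (best_score, best_move) for the side to move."""
    moves = moves_fn(state)
    if depth == 0 or not moves:
        return eval_fn(state), None

    best_move = None
    best_score = float("-inf") if maximizing else float("inf")
    for move in moves:
        child = apply_fn(state, move)
        score, _ = minimax(child, depth - 1, not maximizing, moves_fn, apply_fn, eval_fn)
        if (maximizing and score > best_score) or (not maximizing and score < best_score):
            best_score, best_move = score, move
    return best_score, best_move

# Trivial demo: states are numbers, each move adds -1, 0, or +1,
# and the leaf evaluation is just the number itself.
print(minimax(0, 2, True, lambda s: [-1, 0, 1], lambda s, m: s + m, lambda s: s))
# -> (0, 1): the maximizer adds 1, knowing the minimizer will then subtract 1.
```

Plug in a legal-move generator, a board evaluator, and a depth of 3 or 4, and you get roughly the style of play the early engines used.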

Social manipulation for escaping human control, getting support, and encouraging desired courses of action, coupled with hacking capabilities for stealing hardware, money, and infrastructure, and for escaping human control, are the next superpowers that an AGI could possess. If you think otherwise, recall from Part 2 of this series that AIs have already been observed gaming specifications, or the rules under which their creators thought they were programmed. They have also unexpectedly developed apparent cognitive skills, like Theory of Mind. So their ability to get around rules to achieve their objectives is already in place.

Bostrom adds technology research and economic productivity as advanced superpowers attainable by an ASI, resulting in the ability to create military forces, conduct surveillance, develop space transport, or simply generate money to buy influence.

How long might it take for an AGI to evolve into an ASI? Wait But Why blogger Tim Urban posted a provocative image that shows the possibility of it happening extremely quickly. Expert estimates vary widely, from hours (as in Her) to many years.

Bostrom’s fear is that the first AGI that makes the jump will become a singleton, acquiring all resources and control. Think SkyNet. So, Albert, given all of this, will AIs soon become billions of times smarter than humans, as CNN reports?

COULDN’T WE JUST PULL THE PLUG IF THINGS START GOING SOUTH?

Yeah, why not just unplug it? To get a sense for the answer to that question, how would you unplug Google? Google’s infrastructure, shown below, comprises over 100 points of presence in 18 geographical zones. Each one has high availability technology and redundant power.

Theoretically, an advanced AI could spread its brain across any number of nodes worldwide, some of which may be solar powered, others of which may be in control of the power systems. By the time AGI is real, high availability technology will be far more advanced. You see the problem. Thoughts, Dr. Einstein?
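Here is a minimal sketch of why that matters (hypothetical node names; nothing Google-specific): a service replicated across independent nodes stays reachable as long as any one replica survives, so pulling any single plug, or even most of them, accomplishes nothing.

```python
# Hypothetical replicated service; the node names and counts are illustrative only.
replicas = {
    "us-east": True,
    "eu-west": True,
    "asia-se": True,
    "solar-1": True,   # imagine an off-grid, solar-powered node
}

def service_up(nodes):
    # Reachable as long as at least one replica is still running.
    return any(nodes.values())

for site in ("us-east", "eu-west", "asia-se"):   # "pull the plug" on three of four sites
    replicas[site] = False

print(service_up(replicas))   # True: the last surviving replica keeps it alive
```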

Now that we understand the nature of the beast and have an appreciation for the realistic capabilities of our AI frenemy, we can take a look at a possible apocalyptic scenario, courtesy of Nick Bostrom’s book, Superintelligence. Below is a possible sequence of events that leads an AGI to essentially take over the world. I recommend reading the book for the details. Bostrom is a brilliant guy, and also the one who authored The Simulation Argument, which has gotten all manner of scientists, mathematicians, and philosophers in a tizzy over its logic and implications, so it is worth taking seriously.

And think of some of the technologies that we’ve developed that facilitate an operation like this… cloud computing, drones, digital financial services, social media. It all plays very well. In the next post, we will begin to examine all sorts of AI-run-amok scenarios, and assess the likelihood and severity of each.

NEXT: How to Survive an AI Apocalypse – Part 4: AI Run Amok Scenarios

Explaining Daryl Bem’s Precognition

Dr. Daryl Bem, Professor Emeritus of Psychology at Cornell University, recently published an astounding paper in the Journal of Personality and Social Psychology called “Feeling the Future: Experimental Evidence for Anomalous Retroactive Influences on Cognition and Affect.” In plain English, he draws on the results of eight years of scientific research to prove that precognition exists. His research techniques utilized proven scientific methods, such as double-blind studies. According to New Scientist magazine, in each case he reversed the sequence of well-studied psychological phenomena, so that “the event generally interpreted as the cause happened after the tested behaviour rather than before it.” Across all of the studies, the probability of these results occurring by chance, rather than being due to a real precognitive effect, was calculated to be about 1 in 100 billion.

This little scientific tidbit went viral quickly with the Twitterverse and Reddit communities posting and blogging prolifically about it.  We have to commend the courage that Dr. Bem had in submitting such an article and that the APA (American Psychological Association) had in accepting it for publication.  Tenures, grants, and jobs have been lost for far less of an offense to the often closed-minded scientific/academic community.  Hopefully, this will open doors to a greater acceptance of Dean Radin’s work on other so-called “paranormal” effects as well as Pim van Lommel’s research on Near Death Experiences.

More to the point, though, this has many scientists scratching their heads. What could it mean about our reality? Quantum physicists say that reality doesn’t really exist anyway, but most scientists from other fields have compartmentalized such ideas to a tiny corner of their awareness labelled “quantum effects that do not apply to the macroscopic world.” Guess what? There isn’t a line demarcating the quantum from the macroscopic, so we need to face the facts. The world isn’t as it seems, and Daryl Bem’s research is probably just the tip of the iceberg.

OK, what could explain this?

Conventional wisdom would have to conclude that we do not have free will.  Let’s take a particular experiment to see why:

“In one experiment, students were shown a list of words and then asked to recall words from it, after which they were told to type words that were randomly selected from the same list. Spookily, the students were better at recalling words that they would later type.”

Therefore, if students could recall words better before the causative event even happened, then that seems to imply that they are not really in control of their choices, and hence have no free will.

However, our old friend Programmed Reality again comes to the rescue and offers not one, not two, but three different explanations for these results. Imagine that our reality is generated by a computational mechanism, as shown in the figure below.

Figure: our reality as generated by a computational mechanism (Programmed Reality).

Part of what constitutes our reality would also be our bodies and our brain stuff – neurons, etc. In addition, assume that this “Computer” reads our consciousness as its input and makes decisions based both on the current state of reality and on the state of our consciousnesses. In that case, consider these three possible explanations:

1. Evidence is rewritten after the fact. In other words, after the students are told the words to type, the Program goes back and rewrites all records of the students’ guesses, so as to create the precognitive anomaly. Those records consist of the students’ and the experimenters’ memories, as well as any written or recorded artifacts. Since the Program is in control of all of these items, the complete record of the past can be changed, and no one would ever know.

2. The Program selects the randomly typed words to match the results, so as to generate the precognitive anomaly.

3. We live in an Observer-created reality and the entire sequence of events is either planned out or influenced by intent, and then just played out by the experimenter and students.

Mystery solved, Programmed Reality style.


Musings on the idea of Free Will

Think about what it means to make a decision. The cashier gave you too much change – do you tell him or her? It seems like you make your choice based on past events (your parents taught you that it was deceitful to take something that shouldn’t be yours) or the current state of your mind (the cashier is really cute; maybe I’ll get a few points by pointing out the mistake). Upon further analysis, it really seems that the exact state of your brain (memories, neural pathways, and triggers) and the state of external stimuli might be fully responsible for each decision and action. However, one could make the same argument for a computer, whose function is based on the concept of a finite state machine (each action is fully determined by the state of the machine and its inputs). This idea essentially boils down to us being nothing more than robots. Are you okay with that?
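To make the finite-state-machine comparison concrete, here is a minimal sketch (the states, inputs, and actions are invented for illustration): given the same current state and the same input, the machine’s next action is always the same; there is no room for it to “choose” otherwise.

```python
# A tiny deterministic finite state machine; states and inputs are illustrative.
# transitions[(state, input)] -> (next_state, action)
transitions = {
    ("honest",  "extra_change"): ("honest",  "return the money"),
    ("tempted", "extra_change"): ("tempted", "keep the money"),
}

def step(state, event):
    # Each action is fully determined by the machine's state and its input.
    return transitions[(state, event)]

print(step("honest", "extra_change"))   # always ('honest', 'return the money')
print(step("tempted", "extra_change"))  # always ('tempted', 'keep the money')
```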

What about the following scenario:

Two kids with the same parents grow up in the same environment. Why do they frequently have a completely different set of values? One gives the money back to the cashier without question; one keeps the money without question. Why? It can’t be purely due to genetics. And it can’t be purely due to upbringing. Determinists would argue that slight differences in genetics or environment may have a domino effect on the value systems of the individual. But could it also be due to the possibility that these are two different souls, which have evolved differently? Believers in reincarnation might say that the former has learned a universal lesson in a previous incarnation and is perhaps an older, or more experienced, soul. It is therefore natural for that person to make such a decision, whereas the sibling’s soul has not yet learned that universal lesson. We can’t be sure, but it does seem odd that people often talk of deep personality differences between their children that are observable at such a young age that environmental differences can be ruled out. This tends to lend support to the idea that there is a “ghost in the machine.”