How to Survive an AI Apocalypse – Part 3: How Real is the Hype?

PREVIOUS: How to Survive an AI Apocalypse – Part 2: Understanding the Enemy

In Part 2 of this series on Surviving an AI Apocalypse, we examined the landscape of AI and attempted to make sense of the acronym jungle. In this part, in order to continue to develop our understanding of the beast, we will examine some of the publicity and propaganda that pervade the media these days and consider how realistic the claims are. Once we have examined the logical arguments, Dr. Einstein will be our arbiter of truth. Let’s start with the metaphysical descriptors. Could an AI ever be sentient? Conscious? Self-aware? Could it have free will?

CAN AN AI HAVE CONSCIOUSNESS OR FREE WILL?

Scientists and philosophers can’t even agree on the definition of consciousness or on whether humans have free will, so how could they possibly come to a conclusion about AIs? Fortunately, yours truly has strong opinions on the matter.

According to philosophical materialists, reality is ultimately deterministic. Therefore, nothing has free will. To these folks, there actually isn’t any point to their professions, since everything is predetermined. Why run an experiment? Why theorize? What will be has already been determined. This superdeterminism is a last-ditch effort for materialists to cling to the idea of an objective reality, because Bell’s Theorem and all of the experiments done since Bell’s Theorem have proven one of two things: either (1) there is no objective reality, or (2) there is no free will. I gave (what I think are) strong arguments for the existence of free will in all conscious entities in both of my books, The Universe-Solved! and Digital Consciousness. And the support for our reality being virtual and our consciousness being separate from the brain is monumental: the Observer Effect, near-death experiences, out-of-body experiences, simulation arguments, Hoffman’s evolutionary argument against reality, xenoglossy, the placebo effect… I could go on.

To many, consciousness, or the state of being aware of your existence, is simply a matter of complexity. Following this logic, everything has some level of consciousness, including the coaster under your coffee mug. Also known as panpsychism, it’s actually a reasonable idea. Why would there exist some arbitrary threshold of complexity which, once crossed, turns a previously unconscious entity into a conscious one? It makes much more sense that consciousness is a continuum, or a spectrum, not unlike light, or intelligence. As such, an AI could certainly be considered conscious.

But what do we really mean when we say “conscious?” What we don’t mean is that we simply have sensors that tell some processing system that something is happening inside or outside of us. What we mean is deeper than that – life, a soul, an ability to be self-aware because we want to be, and the free will to make that choice. AI will never achieve that because it is ultimately deterministic. Some may argue that neural nets are not deterministic, but that is just semantics. To be sure, they are not predictable, but only because the system is too complex and adaptive to analyze sufficiently at any exact point in time. Determinism means no free will.

The point is that it really doesn’t matter whether or not you believe that AIs develop “free will” or some breakthrough level of consciousness – what matters is that they are not predictable. Do you agree, Albert?

IS AGI RIGHT AROUND THE CORNER?

This is probably the most contentious question out there. Let’s see how well the predictions have held up over the years.

In 1956, ten of the leading experts in the idea of machine intelligence got together for an eight-week project at Dartmouth College to discuss computational systems, natural language processing, neural networks, and other related topics. They coined the term Artificial Intelligence, and so this event is generally considered the birth of the idea. They also made some predictions about when AGI, Artificial General Intelligence, would occur. Their prediction was “20 years away,” a view that has had a lot of staying power. Until only recently.

Historical predictions for AGI:

That’s right, in early 2023, tech entrepreneur and developer Siqi Chen claimed that GPT-5 “will” achieve Artificial General Intelligence (AGI) by the end of 2023. Didn’t happen, and won’t this year either. Much of this hype was due to the dramatic ChatGPT performance that came seemingly out of nowhere in early 2023. As with all things hyped, though, claims are expected to be greatly exaggerated. The ability of an AI to “pass the Turing test” (which is what most people are thinking of) does not equate to AGI – it doesn’t even mean intelligence, in the sense of what humans have. Much more about this later. All of that said, AGI, in the strict sense of being able to do all intelligent tasks that a human can, is probably going to happen soon. Maybe not this year, but maybe within five. What say you, Albert?

IS AI GOING TO BECOME BILLIONS OF TIMES SMARTER THAN HUMANS?

Well, mainstream media certainly seems to think so. Because they confuse intelligence with things that have nothing to do with intelligence.

If processing speed is what makes intelligence, then your smart toaster is far brighter than you are. Ditto for recall accuracy as an intelligence metric. We only retain half of what we learned yesterday, and it degrades exponentially over time. Not so with the toaster. If storage is the metric, cloud storage giant Amazon Web Services would have to be fifty times smarter than we are.
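The “half of what we learned yesterday” figure is the classic exponential forgetting curve. As a hedged illustration (the one-day half-life is an assumption for the sake of the example, not a measured figure), the decay works out like this:

```python
def retention(t_days, half_life_days=1.0):
    """Toy exponential forgetting curve: fraction of material still
    retained after t_days, assuming memory halves every half_life_days
    (the 'half of what we learned yesterday' figure)."""
    return 0.5 ** (t_days / half_life_days)

day_one = retention(1)   # 0.5 – half gone after one day
week = retention(7)      # 0.5**7 ≈ 0.0078 – under 1% after a week
```

Meanwhile the toaster’s flash memory retains everything at 100%, which is exactly why raw recall is the wrong yardstick for intelligence.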

However, the following word cloud captures the complexity behind the kind of intelligence that we have.

That said, processing speed is not to be underestimated, as it is at the root of all that can go wrong. The faster the system, the sooner its actions can go critical. In his book Superintelligence, Nick Bostrom references the potential “superpowers” that can be attained by an AGI that is fast enough to become an ASI, Artificial Superintelligence. Intelligence amplification, for example, is where an AI can bootstrap its own intelligence. As it learns to improve its ability to learn, it will develop exponentially. In the movie Her, the Operating System, Samantha, evolved so quickly that it got bored being with one person and overnight began to interact with another 8316 people.
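Intelligence amplification is, at heart, a compounding process: the better the system gets, the better it gets at getting better. A minimal toy model makes the exponential character obvious (the 10% gain per cycle and the cycle count are arbitrary assumptions, purely illustrative):

```python
def amplify(capability=1.0, gain=0.1, cycles=50):
    """Toy model of recursive self-improvement: each cycle, the system's
    improvement is proportional to its current capability, so capability
    compounds geometrically. Illustrative only, not a model of any real AI."""
    history = [capability]
    for _ in range(cycles):
        capability += capability * gain   # improvement scales with capability
        history.append(capability)
    return history

caps = amplify()
# 50 cycles at 10% gain per cycle yields 1.1**50, roughly 117x the start.
```

The unnerving part is not the final number but the shape of the curve: most of the growth happens in the last few cycles, which is why a fast takeoff could look like nothing much is happening right up until it is.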

Another superpower is the ability to think far ahead and strategize. Humans can think ahead 10-15 moves in a game of chess, but not in an exhaustive or brute-force manner – rather by a few single-threaded sequences. Early chess-playing AIs played differently, doing brute-force calculations of all possible sequences 3 or 4 moves ahead, and picking the one that led to the most optimal outcome. Nowadays, AI systems designed for chess can think ahead 20 moves, due mostly to speed improvements in the underlying systems. As this progresses, strategizing will be a skill that AIs can do better than humans.
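The brute-force lookahead those early chess engines used is the minimax algorithm: enumerate every sequence of moves to a fixed depth, then back the scores up the tree, assuming the opponent always picks the reply that is worst for you. A minimal sketch on a toy game tree (the nested-list representation is a hypothetical stand-in for real chess positions, not any engine’s actual API):

```python
def minimax(node, maximizing=True):
    """Exhaustive lookahead over a game tree: 'node' is either a numeric
    leaf score or a list of child nodes. Recurses to every leaf and backs
    the scores up, alternating max (our move) and min (opponent's move)."""
    if isinstance(node, (int, float)):
        return node                      # leaf: static evaluation
    scores = [minimax(child, not maximizing) for child in node]
    return max(scores) if maximizing else min(scores)

# Depth-2 tree: our three candidate moves, each with two opponent replies.
tree = [[3, 5], [2, 9], [0, 7]]
best = minimax(tree)   # opponent minimizes each branch (3, 2, 0); we take 3
```

The catch is cost: with branching factor b and depth d the tree has roughly b**d leaves, which is why pure brute force topped out at a few moves ahead until hardware (and pruning tricks) caught up.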

Social manipulation for escaping human control, getting support, and encouraging desired courses of action, coupled with hacking capabilities for stealing hardware, money, and infrastructure, and escaping human control, are the next superpowers that an AGI could possess. If you think otherwise, recall from Part 2 of this series that AIs have already been observed gaming specifications, or the rules under which their creators thought they were programmed. They have also unexpectedly developed apparent cognitive skills, like Theory of Mind. So their ability to get around rules to achieve their objective is already in place.

Bostrom adds technology research and economic productivity as advanced superpowers attainable by an ASI, resulting in the ability to create military forces, surveillance, space transport, or simply generating money to buy influence.

How long might it take for an AGI to evolve to an ASI? Wait But Why blogger Tim Urban posted a provocative image that shows the possibility of it happening extremely quickly. Expert estimates vary widely, from hours (as in Her) to many years.

Bostrom’s fear is that the first AGI that makes the jump will become a singleton, acquiring all resources and control. Think SkyNet. So, Albert, given all of this, will AIs soon become billions of times smarter than humans, as CNN reports?

COULDN’T WE JUST PULL THE PLUG IF THINGS START GOING SOUTH?

Yeah, why not just unplug it? To get a sense for the answer to that question, how would you unplug Google? Google’s infrastructure, shown below, comprises over 100 points of presence in 18 geographical zones. Each one has high availability technology and redundant power.

Theoretically, an advanced AI could spread its brain across any number of nodes worldwide, some of which may be solar powered, others of which may be in control of the power systems. By the time AGI is real, high availability technology will be far advanced. You see the problem. Thoughts, Dr. Einstein?
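The arithmetic behind “you can’t just unplug it” is plain redundancy math: if copies fail independently, the chance that every copy goes down falls off geometrically with the number of copies. A toy calculation (the 90% takedown rate and 100-node count are made-up illustrative figures, not Google’s actual numbers):

```python
def survival_probability(p_node_down=0.9, n_nodes=100):
    """Toy redundancy math: probability that at least one of n_nodes
    independent replicas survives, when each is taken down with
    probability p_node_down. Illustrative figures only."""
    return 1 - p_node_down ** n_nodes

p = survival_probability()
# Even knocking out each node with 90% certainty, 1 - 0.9**100 ≈ 0.99997:
# it is a near-certainty that some copy of the system survives.
```

This is the same logic that makes high-availability infrastructure resilient to outages, and it cuts both ways: the redundancy we build for uptime is exactly what would make a distributed AGI hard to switch off.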

Now that we understand the nature of the beast, and have an appreciation for the realistic capabilities of our AI frenemy, we can take a look at a possible apocalyptic scenario, courtesy of Nick Bostrom’s book, Superintelligence. Below is a possible sequence of events that leads an AGI to essentially take over the world. I recommend reading the book for the details. Bostrom is a brilliant guy, and also the one who authored The Simulation Argument, which has gotten all manner of scientists, mathematicians, and philosophers in a tizzy over its logic and implications, so it is worth taking seriously.

And think of some of the technologies that we’ve developed that facilitate an operation like this… cloud computing, drones, digital financial services, social media. It all plays very well. In the next post, we will begin to examine all sorts of AI-run-amok scenarios, and assess the likelihood and severity of each.

NEXT: How to Survive an AI Apocalypse – Part 4: AI Run Amok Scenarios


OMG can anyone write an article on the simulation hypothesis without focusing on Nick Bostrom and Elon Musk? It’s like writing an article about climate change and only mentioning Al Gore.

Dear journalists who are trying to be edgy and write about cool fringe theories, please pay attention. The idea that we might be living in an illusory world is not novel. Chinese philosopher Zhuangzi wrote about it with his butterfly dream in 369 BC. Plato discussed his cave allegory in 380 BC. The other aspect of simulation theory, the idea that the world is discrete or digital, is equally ancient. Plato and Democritus considered atoms, and therefore the fundamental constructs of reality, to be discrete.

I’m not taking anything away from Nick Bostrom, who is a very intelligent modern philosopher. His 2001 Simulation Argument is certainly thought provoking and deserves its place in the annals of digital philosophy. But it was predated by “The Matrix”. Which was predated by Philip K. Dick’s pronouncement in 1977 that we might be living in a computer-programmed reality. Which was predated by Konrad Zuse’s 1969 work on discrete reality, “Calculating Space.”

And as interesting as Bostrom’s Simulation Argument is, it was a 12-page paper on a single idea. Since then, he has not really evolved his thinking on digital philosophy, preferring instead to concentrate on existential risk and the future of humanity.

Nor am I taking anything away from Elon Musk, a brilliant entrepreneur who latched onto Bostrom’s idea for a few minutes, generated a couple sound bites, and then it was back to solar panels and hyperloops.

But Bostrom, Musk, and the tired old posthuman-generated simulation hypothesis are all that the rank and file of journalists seem to know to write about. It is really sad, considering that Tom Campbell wrote an 800-page treatise on the computational nature of reality. I have written two books on the subject. Our material is largely consistent and has evolved the thinking far beyond the idea that we live in a posthuman-generated simulation. In fact, I provide a great deal of evidence that the Bostrom-esque possibility is actually not very likely. And Brian Whitworth has a 10-year legacy of provocative scientific papers on evidence for a programmed reality that are far beyond the speculations of Musk and Bostrom.

The world needs to know about these things, but Campbell, Whitworth, and I can’t force people to read our books, blogs, and papers. So journalists, with all due respect, please up your simulation game.