How to Survive an AI Apocalypse – Part 4: AI Run Amok Scenarios

PREVIOUS: How to Survive an AI Apocalypse – Part 3: How Real is the Hype?

In Part 3 of this series on Surviving an AI Apocalypse, we examined some of the elements of AI-related publicity and propaganda that pervade the media these days and considered how realistic they are. The conclusion was that while much has been overstated, there is still a real existential danger in the current path toward creating AGI, Artificial General Intelligence. In this and some subsequent parts of the series, we will look at several “AI Run Amok” scenarios and outcomes and categorize them according to likelihood and severity.

NANOTECH FOGLETS

Nanotech, or the technology of things at the scale of 10^-9 meters (a billionth of a meter), was originally envisioned by physicist Richard Feynman and popularized by K. Eric Drexler in his book Engines of Creation. It has the potential to accomplish amazing things (think: solve global warming or render all nukes inert) but also, like any great technology, to lead to catastrophic outcomes.

Computer scientist J. Storrs Hall upped the ante on nanotech’s potential with the idea of “utility fog,” based on huge swarms of nanobots under networked, AI-programmatic control.

With such a technology, you could conceivably do cool and useful things like press a button and convert your living room into a bedroom at night, as all of the nanobots reconfigure themselves into beds and nightstands, and then back into a living room in the morning.

And of course, like any new tech, utility fog could be weaponized – carrying toxic agents, forming explosives, generating critical nuclear reactions, blocking out the sun over an entire country, and so on, limited only by imagination. Where does this sit in our Likelihood/Severity space?

I put it in the lower right because, while the potential consequences of foglets in the hands of a bad actor could be severe, such technology is quite far off, so it’s probably way too soon to worry about. In addition, an attack could be defeated via a hack or a counterattack, and, as with the cybersecurity battle, the fight will almost always be won by the entity with the deeper pockets, which will presumably be the world government by the time such tech is available.

GREY GOO

A special case of foglet danger is the concept of grey goo, whereby the nanobots are programmed with two simple instructions:

  • Consume what you can of your environment
  • Continuously self-replicate and give your replicas the same instructions

The result would be a slow liquefaction of the entire world.
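To make the runaway character of those two instructions concrete, here is a minimal back-of-the-envelope sketch in Python. Every number in it (nanobot mass, replication time, target mass) is an illustrative assumption, not an engineering figure; the point is only how few doublings an exponential replicator needs.

```python
# Back-of-the-envelope sketch: how many doublings would it take for
# self-replicating nanobots to consume a fixed mass of material?
# All numbers below are illustrative assumptions, not real engineering figures.

NANOBOT_MASS_KG = 1e-15          # assumed mass of a single nanobot (~1 femtogram)
TARGET_MASS_KG = 5.5e14          # rough order of magnitude of Earth's total biomass
HOURS_PER_DOUBLING = 1.0         # assumed replication time per generation

mass = NANOBOT_MASS_KG
doublings = 0
while mass < TARGET_MASS_KG:
    mass *= 2                    # every bot copies itself, doubling the total mass
    doublings += 1

print(f"{doublings} doublings (~{doublings * HOURS_PER_DOUBLING:.0f} hours)")
# With these assumptions, roughly 100 doublings suffice; the danger lies in the
# exponent, not the starting size.
```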

Let’s add this to our AI Run Amok chart…

I put it in the same relative space as the foglet danger in general, perhaps even less likely because the counterattack could be as simple as reprogramming. Note, however, that this assumes that the deployment of such technologies, while AI-based at their core, is being done by humans. In the hands of an ASI, the situation would be completely different, as we will see.

ENSLAVEMENT

Let’s look at one more scenario, most aptly represented by the movie The Matrix, in which AI enslaves humanity for use, for some odd reason, as a source of energy. Agent Smith, anyone?

There may be other reasons that AI might want to keep us around. But honestly, why bother? Sad to say, but what would an ASI really need us for?

So I put the likelihood very low. And frankly, if we were enslaved, Matrix-style, is the severity really that bad? As Cypher said, “Ignorance is bliss.”

If you’re feeling good about things now, don’t worry, we haven’t gotten to the scary stuff yet. Stay tuned.

In the next post, I’ll look at a scenario near and dear to all of our hearts, and at the top of the Likelihood scale, since it is already underway – Job Elimination.

NEXT: How to Survive an AI Apocalypse – Part 5: Job Elimination

Will Evolving Minds Delay The AI Apocalypse? – Part I

Stephen Hawking once warned that “the development of full artificial intelligence could spell the end of the human race.” He went on to explain that AI will “take off on its own and redesign itself at an ever-increasing rate,” while “humans, who are limited by slow biological evolution, couldn’t compete and would be superseded.” He is certainly not alone in his thinking, as Elon Musk, for example, cautions that “With artificial intelligence we are summoning the demon.”

In fact, this is a common theme not only in Hollywood but also among philosophers and futurists, who tend to fall into two prominent camps. One point of view is that Artificial General Intelligence (AGI) will become superintelligent and beyond the control of humans, resulting in all sorts of extinction scenarios (think SkyNet or Grey Goo). The (slightly) more optimistic point of view, held by the transhumanists, is that humanity will merge with advanced AI and form superhumans. So, while dumb biological humanity may go the way of the dodo bird, the new form of human-machine hybrid will continue to advance and rule the universe. By the way, this is supposed to happen around 2045, according to Ray Kurzweil in his 2005 book “The Singularity is Near.”

There are actually plenty of logical and philosophical arguments against these ideas, but this blog is going to focus on something different – the nature of the human mind.

The standard theory is that humans cannot evolve their minds particularly quickly because we are assumed to be limited by the wiring in our brains. AI, on the other hand, has no such limitations and, via recursive self-improvement, will evolve at a runaway exponential rate, making it inevitable that it will surpass human intelligence at some point.

But does this even make sense? Let’s examine both assumptions.

The first assumption is that AI advancements will continue at an exponential pace. This is short-sighted, IMHO. Most exponential processes run into negative feedback effects that eventually dampen the acceleration. For example, exponential population growth occurs in bacterial colonies only until the environment reaches its carrying capacity, and then it levels off. We simply don’t know what the “carrying capacity” of an AI is; it has to run in some environment, which may run out of memory, power, or other resources at some point. Moore’s Law, the observation that transistor density doubles every two years, has been extended to many other technology advances, such as CPU speed and networking bit rates, and is the cornerstone of the logic behind the Singularity. However, difficulties in heat dissipation have slowed the rate of advances in CPU speed, where the law no longer holds, and transistor density itself is hitting its limit as transistor junctions are now only a few atoms thick. Paul Allen argues, in his article “The Singularity Isn’t Near,” that the kinds of learning required to move AI ahead do not occur at exponential rates, but rather in an irregular and unpredictable manner. As things get more complex, progress tends to slow, an effect he calls the Complexity Brake.
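To illustrate the negative-feedback point, here is a small sketch comparing pure exponential growth with logistic growth toward a carrying capacity. The growth rate and capacity are arbitrary placeholder values; only the shapes of the two curves matter.

```python
# Illustrative comparison of exponential vs. logistic (capacity-limited) growth.
# Parameters are arbitrary; the point is the shape of the curves, not the units.

GROWTH_RATE = 0.7        # growth per time step
CAPACITY = 1_000.0       # the environment's carrying capacity
STEPS = 20

exponential = [1.0]
logistic = [1.0]
for _ in range(STEPS):
    exponential.append(exponential[-1] * (1 + GROWTH_RATE))
    x = logistic[-1]
    # Logistic growth: the same rate, damped by how close we are to capacity.
    logistic.append(x + GROWTH_RATE * x * (1 - x / CAPACITY))

for t in (5, 10, 15, 20):
    print(f"t={t:2d}  exponential={exponential[t]:10.1f}  logistic={logistic[t]:8.1f}")
# The exponential curve keeps compounding without limit; the logistic curve
# flattens out near the carrying capacity, which is the "negative feedback"
# described in the text.
```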

Let’s look at one example. Deep Blue beat Garry Kasparov in a game in 1996, the first time a machine beat a reigning world Chess champion. Google’s AlphaGo beat a grandmaster at Go for the first time in 2016. In those 20 years there were ten 2-year Moore’s Law doubling cycles, which would imply that, if AI were advancing exponentially, the “intelligence” needed to beat a Go master is about 1,000 times (2^10) the intelligence needed to beat a Chess master. Obviously this is ridiculous. While Go is theoretically a more complex game than Chess because it has many more possible moves, an argument could be made that the intellect and mastery required to become the world champion at each game is roughly the same. So, while the advances in processing speed and algorithmic development between 1996 and 2016 were substantial (Deep Blue relied on brute-force search, while AlphaGo leaned on pattern recognition), they don’t really show much advance in “intelligence.”
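For what it’s worth, the arithmetic behind that roughly 1,000x figure is just the assumed Moore’s Law doubling cadence applied over the 20-year gap:

```python
# If "intelligence" doubled every 2 years (the Moore's Law assumption in the text),
# how much more would be needed 20 years later?
years_between_milestones = 2016 - 1996   # Deep Blue (Chess) to AlphaGo (Go)
doubling_period_years = 2                # assumed Moore's Law cadence
doublings = years_between_milestones // doubling_period_years
print(doublings, 2 ** doublings)         # -> 10 doublings, a 1024x factor
```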

It would also be insightful to examine some real estimates of AI trends. For some well-researched data, consider Stanford University’s AI Index, an “open, not-for-profit project to track activity and progress in AI.” In their 2017 report, they identify metrics for the progress made in several areas of Artificial Intelligence, such as object detection, natural language parsing, language translation, speech recognition, theorem proving, and SAT solving. For each category with at least eight years of data, I normalized the AI performance, calculated the improvements over time, and averaged the results. (Note: I was careful to invert the data where appropriate; for example, when a pattern recognition algorithm improves from 90% accuracy to 95%, that is not a 5% improvement but a 100% improvement in its ability to reject false positives, since the error rate has been halved.) The chart below shows that AI is not advancing nearly as quickly as Moore’s Law.
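As a rough sketch of the kind of normalization described above, the snippet below uses an invented accuracy series (not the actual AI Index data) to show the error-rate inversion and how a doubling period can be estimated, assuming roughly exponential improvement:

```python
import math

# Hypothetical accuracy series for one benchmark (NOT the actual AI Index data).
years = [2010, 2012, 2014, 2016]
accuracy = [0.80, 0.88, 0.92, 0.95]

# Invert accuracy into "ability to reject errors": going from 90% to 95%
# accuracy halves the error rate, which is a 2x (100%) improvement.
capability = [1.0 / (1.0 - a) for a in accuracy]

# Normalize to the first year and estimate a doubling period from the
# overall growth factor, assuming roughly exponential improvement.
growth_factor = capability[-1] / capability[0]
span_years = years[-1] - years[0]
doubling_period = span_years / math.log2(growth_factor)

print(f"growth factor: {growth_factor:.1f}x over {span_years} years")
print(f"estimated doubling period: {doubling_period:.1f} years")
```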

Figure 1 – Advancing Artificial Intelligence

In fact, the doubling period is about 6 years instead of 2, which suggests that we need about three times as long to reach the Singularity as Kurzweil predicted. Since his 2045 projection was made in 2005, that is, 40 years out, tripling the horizon pushes the date to roughly 2125. And that assumes we keep pace with the current rate of growth of AI and never hit Paul Allen’s Complexity Brake, so chances are it is much further off than that. (As an aside, according to some futurists, Ray does not have a particularly great success rate for his predictions, even ones that are only 10 years out.)
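The extrapolation is simple enough to spell out, using the 6-year doubling period estimated above and Kurzweil’s 2005/2045 figures:

```python
# Stretch Kurzweil's timeline by the ratio of observed to assumed doubling periods.
prediction_year, predicted_singularity = 2005, 2045
assumed_doubling_years = 2      # Moore's Law pace assumed by the 2045 forecast
observed_doubling_years = 6     # rough pace measured from the AI Index metrics

horizon = predicted_singularity - prediction_year             # 40 years
stretched = horizon * observed_doubling_years / assumed_doubling_years
print(prediction_year + int(stretched))                       # -> 2125
```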

But a lot can happen in 120 years. Unexpected, discontinuous jumps in technology can accelerate the process; social, economic, and political factors can severely slow it down. Recall how, in just 10 years in the 1960s, we figured out how to land a man on the moon. Given the rate at which we were advancing our space technology, and applying Moore’s Law (which was in effect at that time), it would not have been unreasonable to expect a manned mission to Mars by 1980. In fact, Wernher von Braun, the leader of the American rocket team, predicted after the moon landing that we would be on Mars in the early 1980s. But in the wake of the Vietnam debacle, public support for additional investment in NASA waned, and the entire space program took a drastic turn. Such factors are probably even more impactful to the future of AI than the limitations of Moore’s Law.

The second assumption we need to examine is that the capacity of the human mind is limited by the complexity of the human brain, and is therefore relatively fixed. We will do that in Part II of this article.