How to Survive an AI Apocalypse – Part 4: AI Run Amok Scenarios

PREVIOUS: How to Survive an AI Apocalypse – Part 3: How Real is the Hype?

In Part 3 of this series on Surviving an AI Apocalypse, we examined some of the elements of AI-related publicity and propaganda that pervade the media these days and considered how likely they are. The conclusion was that while much has been overstated, there is still a real existential danger in the current path toward creating AGI, Artificial General Intelligence. In this and some subsequent parts of the series, we will look at several “AI Run Amok” scenarios and outcomes and categorize them according to likelihood and severity.

NANOTECH FOGLETS

Nanotech, or the technology of things at the scale of 10⁻⁹ meters (a billionth of a meter), was originally envisioned by physicist Richard Feynman and popularized by K. Eric Drexler in his book Engines of Creation. It has the potential to accomplish amazing things (think: solving global warming or rendering all nukes inert) but also, like any powerful technology, to lead to catastrophic outcomes.

Computer scientist J. Storrs Hall upped the ante on nanotech's potential with the idea of the “utility fog”: huge swarms of nanobots under networked, AI-programmatic control.

With such a technology, you could conceivably do cool and useful things like press a button at night and convert your living room into a bedroom, as all of the nanobots reconfigure themselves into beds and nightstands, and then back into a living room in the morning.

And of course, like any new tech, utility fog could be weaponized: carrying toxic agents, forming explosives, triggering critical nuclear reactions, blocking out the sun over an entire country, and so on, limited only by imagination. Where does this sit in our Likelihood/Severity space?

I put it in the lower right because, while the consequences of foglets in the hands of a bad actor could be severe, such technology is quite far off, so it is probably way too soon to worry about. In addition, an attack could be defeated via a hack or a counterattack and, as with the ongoing cybersecurity battle, that fight will almost always be won by the entity with the deeper pockets, which will presumably be a world government by the time such tech is available.

GREY GOO

A special case of foglet danger is the concept of grey goo, whereby the nanobots are programmed with two simple instructions:

  • Consume what you can of your environment
  • Continuously self-replicate, giving your replicas the same instructions

The result would be a slow liquefaction of the entire world.
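The danger of those two instructions is exponential growth. A back-of-the-envelope sketch makes the point; the nanobot mass and replication cycle below are illustrative assumptions, not engineering figures:

```python
import math

# How many doublings would a self-replicating nanobot swarm need to
# consume a planet's worth of mass? All numbers are assumptions chosen
# purely for illustration.
EARTH_MASS_KG = 5.97e24
NANOBOT_MASS_KG = 1e-15    # assumed mass of a single nanobot
DOUBLING_HOURS = 1.0       # assumed replication cycle: one hour

doublings = math.log2(EARTH_MASS_KG / NANOBOT_MASS_KG)
hours = doublings * DOUBLING_HOURS

print(f"~{doublings:.0f} doublings, about {hours / 24:.1f} days")
```

Under these (admittedly idealized) assumptions, the swarm runs out of planet after roughly 130 doublings, which is why even a modest replication rate makes unchecked grey goo terrifying; in practice, energy and material-transport limits would slow things considerably.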

Let’s add this to our AI Run Amok chart…

I put it in the same relative space as the foglet danger in general, and even less likely, because the counterattack could be a pretty simple reprogramming. Note, however, that this assumes that the deployment of such technologies, while AI-based at their core, is being done by humans. In the hands of an ASI, the situation would be completely different, as we will see.

ENSLAVEMENT

Let’s look at one more scenario, most aptly represented by the movie The Matrix, in which AI enslaves humanity for use, for some odd reason, as a source of energy. Agent Smith, anyone?

There may be other reasons that AI might want to keep us around. But honestly, why bother? Sad to say, but what would an ASI really need us for?

So I put the likelihood very low. And frankly, if we were enslaved, Matrix-style, is the severity that bad? As Cypher said, “Ignorance is bliss.”

If you’re feeling good about things now, don’t worry, we haven’t gotten to the scary stuff yet. Stay tuned.

In the next post, I’ll look at a scenario near and dear to all of our hearts, and at the top of the Likelihood scale, since it is already underway – Job Elimination.

NEXT: How to Survive an AI Apocalypse – Part 5: Job Elimination

WikiLeaks, Denial of Service Attacks, and Nanobot Clouds

The recent firestorm surrounding WikiLeaks reminds me of one of Neal Stephenson’s visions of the future, “Diamond Age,” written back in 1995.  The web was only in its infancy, but Stephenson had already envisioned massive clouds of networked nanobots, some under the control of governments, some under the control of other entities.  Such nanobot swarms, also known as utility fogs, could be made to do pretty much anything: form a sphere of protection, gather information, inspect people and report back to a central server, or be commanded to attack each other.  One swarm under the control of one organization may be at war with another swarm under the control of another.  That is our future.  Nanoterrorism.

A distributed denial of service attack (DDoS) is a network attack on a particular server or internet node.  It is often carried out by having thousands of computers saturate the target machine with packet requests, making it impossible for the machine to respond to normal HTTP requests, effectively bringing it to its knees, inaccessible on the internet.  The attacks are often coordinated by a central source that takes advantage of networks of already compromised computers (aka zombie computers, usually unknown to their owners) recruited via malware infections.  On command, these botnets initiate their attack with clever techniques called Smurf attacks, Ping floods, SYN floods, and other scary-sounding events.  An entire underground industry has built up around botnets, some of which can number in the millions.  Botnets can be leased by anyone who knows how to access them and has a few hundred dollars.  As a result, an indignant group can launch an attack on, say, the WikiLeaks site.  And, in response, a WikiLeaks support group can launch a counterattack on its enemies, like MasterCard, Visa, and PayPal, for their plans to terminate service for WikiLeaks.  That is our present.  Cyberterrorism.
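The core arithmetic of a DDoS is simple crowding-out: a server has fixed capacity, and it can’t distinguish junk requests from real ones. A minimal model (all figures illustrative) shows how a large botnet starves legitimate traffic:

```python
# A minimal sketch of why a DDoS works: a server with fixed capacity
# serves requests indiscriminately, so a large enough flood of junk
# requests crowds out legitimate traffic. All numbers are illustrative.

def legit_success_rate(capacity_rps, legit_rps, bots, rps_per_bot):
    """Fraction of legitimate requests served, assuming the server picks
    requests at random from the combined incoming stream."""
    total_rps = legit_rps + bots * rps_per_bot
    if total_rps <= capacity_rps:
        return 1.0
    # The served fraction applies equally to legitimate requests.
    return capacity_rps / total_rps

# Normal day: comfortably under capacity, everyone gets through.
print(legit_success_rate(10_000, 2_000, 0, 0))
# A 100,000-bot botnet, each sending 10 req/s: ~1% of real users succeed.
print(legit_success_rate(10_000, 2_000, 100_000, 10))
```

The asymmetry is the point: each zombie machine contributes a trickle, but a rented botnet multiplies that trickle by the hundreds of thousands, which is why defense usually comes down to who can provision (or filter) more capacity.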

Doesn’t it sound a lot like the nanoterrorism envisioned by Stephenson?  Except it is still grounded in the hardware.  As I see it, the equation of the future is:

Nanoterrorism = Cyberterrorism + Microrobotics + Moore’s Law + 20 years.

Can’t wait!


Rewriting the Past

“I don’t believe in yesterday, by the way.”
-John Lennon

The past is set in stone, right?  Everything we have learned tells us that you cannot change the past, 88-MPH DeLoreans notwithstanding.

However, it would probably surprise you to learn that many highly respected scientists, as well as a few out on the fringe, are questioning that assumption, based on real evidence.

For example, leading stem cell scientist Dr. Robert Lanza posits that the past does not really exist until properly observed.  His theory of biocentrism holds that the past is just as malleable as the future.

Specific experiments in quantum mechanics appear to support this conjecture.  In the “Delayed Choice Quantum Eraser” experiment, “scientists in France shot photons into an apparatus, and showed that what they did could retroactively change something that had already happened.” (Science 315, 966, 2007)

Paul Davies, renowned physicist from the Australian Centre for Astrobiology at Macquarie University in Sydney, suggests that conscious observers (us) can effectively reach back in history to “exert influence” on early events in the universe, including even the first moments of time.  As a result, the universe would be able to “fine-tune” itself to be suitable for life.

Prefer the Many Worlds Interpretation (MWI) of quantum mechanics over the Copenhagen one?  If that theory is correct, physicist Saibal Mitra from the University of Amsterdam has shown how we could change the past by forgetting.  Effectively, if the collective observers’ memory is reset prior to some event, the state of the universe becomes “undetermined” and can follow a different path from before.  Check out my previous post on that one.

Alternatively, you can disregard the complexities of quantum mechanics entirely.  The results of some macro-level experiments twist our perceptions of reality even more.  Studies by Helmut Schmidt, Elmar Gruber, Brenda Dunne, Robert Jahn, and others have shown, for example, that humans are apparently able to influence past events (aka retropsychokinesis, or RPK), such as pre-recorded (and previously unobserved) random number sequences.

Benjamin Libet, a pioneering scientist in the field of human consciousness at the University of California, San Francisco, is well known for his controversial experiments that seem to show reverse causality: the brain demonstrates awareness of actions that will occur in the near future.  To put it another way, actions that occur now create electrical brain activity in the past.

And then, of course, there is time travel.  Time travel into the future is a fact; just ask any astronaut, all of whom have traveled tiny fractions of a second into the future as a side effect of high-speed travel.  Stephen Hawking predicts much more significant time travel into the future.  In the future.  But what about the past?  It turns out there is nothing in the laws of physics that prevents it.  Theoretical physicist Kip Thorne designed a theoretically workable time machine that could send you into the past.  And traveling to the past, of course, provides an easy mechanism for changing it.  Unfortunately, this requires exotic matter and a solution to the grandfather paradox (MWI to the rescue again here).
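The astronaut claim is just special relativity: a moving clock runs slow by the Lorentz factor, so an orbiting astronaut returns slightly “in the future” relative to Earth clocks. A rough sketch (ignoring gravitational time dilation, which partly offsets the effect; speed is an approximate orbital figure):

```python
import math

# Special-relativistic time dilation for an orbiting astronaut.
# Gravitational effects are ignored; all figures are approximate.
C = 299_792_458.0      # speed of light, m/s
V_ORBIT = 7_660.0      # typical low-Earth-orbit speed, m/s
SECONDS_PER_DAY = 86_400

# Fractional clock slowdown: 1 - sqrt(1 - v^2/c^2), ~v^2/(2c^2) here.
slowdown = 1 - math.sqrt(1 - (V_ORBIT / C) ** 2)
per_day = slowdown * SECONDS_PER_DAY  # seconds "traveled ahead" per day

print(f"~{per_day * 1e6:.0f} microseconds per day in orbit")
```

At orbital speeds the effect works out to tens of microseconds per day, so a months-long mission accumulates milliseconds; only a very short flight stays down in the nanosecond range.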

None of this is a huge surprise to me, since I question everything about our conventional views of reality.  Consider the following scenario in a massively multiplayer online role playing game (MMORPG) or simulation.  The first time someone plays the game, or participates in the simulation, there is an assumed “past” to the construct of the game.  Components of that past may be found in artifacts (books, buried evidence, etc.) scattered throughout the game.  Let’s say that evidence reports that the Kalimdors and Northrendians were at war during year 1999.  But the evidence has yet to be found by a player.  A game patch could easily change the date to 2000, thereby changing the past, and no one would be the wiser.  But what if someone had found the artifact, thereby setting the past in stone?  That patch could still be applied, but it would only be effective if all players who had knowledge of the artifact were forced to forget.

Science fiction, right?  No longer, thanks to an emerging field of cognitive research.  Two years ago, scientists were able to erase selected memories in mice.  Insertion of false memories is not far behind.  This will eventually be perfected, and applied to humans.

At some point in our future (this century), we will be able to snort up a few nanobots, which will archive our memories, download a new batch of memories to the starting state of a simulation, and run the simulation.  When it ends, the nanobots will restore our old memories.

Or maybe this happened at some point in our past and we are really living the simulation.  There is really no way to tell.

No wonder the past seems so flexible.


And I thought Nanobots Were Way off in the Future

Scientists from the International Center for Young Scientists have developed a rudimentary nano-scale molecular machine capable of implementing the logical state machine needed to direct and control other nano-machines.  This experiment demonstrates a nascent ability to manipulate, build, and control nano-devices, the fundamental prerequisites for nanobot technology.  Other than perfecting these techniques, all that remains to achieve the so-called utility nanobot is light generation, wireless networking, and the ability to fly.

The Harvard Microrobotics Laboratory developed a 3 cm, 60-milligram robotic fly that had its first successful flight in 2007.  So it seems that Moore’s law marches on in the world of microrobotics, with the miniaturization of flying robots doubling every two years.  At this rate, we should get to 10 microns by the year 2030.  This, of course, ignores the fact that black-ops military programs are generally considered to be at least 10 years ahead of commercial ventures.  Bring on the nano-wars!
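The 2030 figure follows from the extrapolation itself; the two-year halving cadence is this post's assumption, not a measured law, but the arithmetic checks out:

```python
import math

# Extrapolating the post's estimate: a 3 cm robotic fly in 2007,
# with flying-robot size halving every two years, reaches 10 microns.
# The halving cadence is an assumption, not a measured trend.
START_SIZE_M = 0.03       # 3 cm fly, first flight in 2007
TARGET_SIZE_M = 10e-6     # 10 microns
START_YEAR = 2007
YEARS_PER_HALVING = 2

halvings = math.log2(START_SIZE_M / TARGET_SIZE_M)
target_year = START_YEAR + halvings * YEARS_PER_HALVING

print(f"{halvings:.1f} halvings -> year {target_year:.0f}")
```

Going from 3 cm down to 10 microns is a factor of 3,000, or about 11.6 halvings, landing right around 2030.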
