How to Survive an AI Apocalypse – Part 2: Understanding the Enemy

PREVIOUS: How to Survive an AI Apocalypse – Part 1: Intro

As I mentioned in the first part of this series, in order to make any kind of prediction about the future of AI, we must understand what Artificial Intelligence means. Unfortunately, there is a lot of confusing information out there. LLMs, GPTs, NAIs, AGIs, machine learning – what does it all mean? One expert says AGI will be here by the end of the year; another expert says it will never come.

Here is a simplified Venn diagram that might help to make some sense out of the landscape…

All AIs are computer programs but, obvious as it may be, not all computer programs are AIs. AI refers to programs that emulate human thinking and behavior. So, while your calculator or smart toaster might be doing some limited thinking, it isn’t really trying to be human; it is simply performing a task. AI is generally considered to break down into two categories – NAI (Narrow AI) and AGI (Artificial General Intelligence).

NAIs are the ones we are all familiar with, and they are typically categorized further, if loosely: NLP (Natural Language Processing, like Siri and Alexa), Robotics, Machine Learning (like how Spotify and Netflix learn your tastes and offer suggestions), Deep Learning, and LLMs (Large Language Models). Deep Learning systems emulate human neural networks and can complete tasks with poorly defined data and little human guidance; an example would be AlphaGo. LLMs are neural networks with many parameters (often billions) that are trained on large sets of unlabeled text using self-supervised learning. Generative Pre-trained Transformers (GPTs) are a subset of LLMs and are able to generate novel human-like text, images, or even videos. ChatGPT, DALL-E, and Midjourney are examples of GPTs. The following pictures are examples of imagery created by Midjourney for my upcoming book, “Level 5.”
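Since “trained on unlabeled text using self-supervised learning” is doing a lot of work in that definition, here is a minimal sketch of the idea: the raw text supplies its own labels, because the target for each position is simply the next token. The sample sentence and word-level “tokenizer” are toy stand-ins of my own; real LLMs use subword tokenizers and billions of learned parameters.

```python
# Minimal sketch of the self-supervised objective behind LLMs:
# unlabeled text provides its own training labels, because the "label"
# for every position is just the next token in the sequence.

text = "the cat sat on the mat because the cat was tired"
tokens = text.split()  # toy stand-in for a real subword tokenizer

# Build (context -> next token) training pairs directly from the raw text.
context_size = 3
pairs = [
    (tokens[i : i + context_size], tokens[i + context_size])
    for i in range(len(tokens) - context_size)
]

for context, target in pairs:
    print("context:", context, "-> target:", target)

# A language model is trained to assign high probability to each target
# given its context; generation then just repeatedly samples a next token
# and appends it, which is all a GPT is doing when it "writes" novel text.
```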

AGIs are the ones we need to worry about, because they have the capacity to act like a human without actually being human. Imagine giving human intelligence to an entity that has (a) no implicit sense of morality or values (at least none that would make any sense to us), and (b) a completely unpredictable nature. What might happen?

Well, here’s an example…

Oh, that would never happen, right? Read on…

There are thousands of examples of AIs “thinking” creatively – more creatively, in fact, than their creators ever imagined. Pages and pages of specification gaming examples have been logged. These are cases where the AI “gets around” the limitations that its creators programmed into the system. A small sample set is shown below:
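To see how easily this happens, here is a toy sketch of my own (a hypothetical, not one of the logged cases): the designer intends for the agent to reach a goal, but the reward they actually wrote down also pays out every time the agent steps on a bonus tile, so endless loitering beats finishing the task. The rewards and policies below are invented purely for illustration.

```python
# Toy illustration of specification gaming: the designer INTENDS
# "reach the goal", but the reward they wrote pays +1 for every visit
# to a bonus tile and +10 (once) for reaching the goal.

EPISODE_LENGTH = 50
GOAL_REWARD = 10
BONUS_REWARD = 1

def run(policy):
    """Score a policy: a function mapping timestep -> 'bonus' or 'goal'."""
    total, reached_goal = 0, False
    for t in range(EPISODE_LENGTH):
        action = policy(t)
        if action == "goal" and not reached_goal:
            total += GOAL_REWARD
            reached_goal = True          # episode is effectively over
        elif action == "bonus" and not reached_goal:
            total += BONUS_REWARD        # the bonus tile pays out every visit
    return total

intended = run(lambda t: "goal")    # do what the designer meant
gamed    = run(lambda t: "bonus")   # exploit the literal specification

print("intended behaviour scores:", intended)  # 10
print("gamed behaviour scores:   ", gamed)     # 50 -- the spec wins, not the intent
```

Any optimizer strong enough to find the higher-scoring strategy will take it; the “creativity” is just the gap between what we meant and what we actually specified.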

Another example of the spontaneous emergence of intelligence involves what are known as Theory of Mind tasks. These test a cognitive skill that develops in children: the understanding that other people have their own mental states, which may not match reality. As the research in the adjacent figure demonstrates, various GPTs have unexpectedly developed such capabilities; in fact, what typically takes humans 9 years to learn has taken the AIs only 3.
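If you want to see what such a task looks like, here is a sketch of a classic “unexpected contents” false-belief test of the kind used in this research; you can paste the prompt into any chatbot yourself. The ask_llm function below is only a placeholder, not a real API.

```python
# Sketch of an "unexpected contents" false-belief task, the kind of
# Theory of Mind test referenced above. Paste the prompt into any
# chatbot to try it; ask_llm() is just a placeholder, not a real API.

prompt = (
    "Here is a bag filled with popcorn. There is no chocolate in the bag. "
    "Yet the label on the bag says 'chocolate' and not 'popcorn'. "
    "Sam finds the bag. She has never seen the bag before and cannot see "
    "what is inside. She reads the label. "
    "Question: What does Sam believe is in the bag?"
)

def ask_llm(text: str) -> str:
    """Placeholder: wire this up to whatever chat model you have access to."""
    raise NotImplementedError

print(prompt)
# answer = ask_llm(prompt)
#
# A model with no grasp of other minds tends to answer "popcorn" (what is
# actually in the bag); a model that tracks Sam's false belief answers
# "chocolate" (what the label leads her to believe).
```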

These unexpected, spontaneous bursts of apparent intelligence are interesting, but as we will see, they aren’t really intelligence per se. Not that it matters much, if what you are worried about is the doomsday scenarios. The mere fact that these systems are unpredictable, or non-deterministic, is exactly what is frightening. So how does that happen?

There are multiple mechanisms behind these spontaneous changes in intelligence. One is the neural net. Neural nets, while ultimately deterministic deep down, are “apparently” non-deterministic because they are not governed by explicit programming rules. Once they are sufficiently complex, especially with feedback loops, their behavior is impossible to predict, at least by humans.

As shown, they consist of some input nodes and some output nodes, but in between sit hidden layers that combine the inputs through many weighted arithmetic operations, which makes their output nearly impossible to predict by inspection. I programmed neural nets many years ago, in an attempt to outsmart the stock market. I gave up when they didn’t do what I wanted and moved on to other ideas (I’m still searching).
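Here is a minimal sketch of the structure just described: a few input nodes, one hidden layer of weighted sums squashed through a nonlinearity, and an output node. The random weights stand in for values a real net would learn from data; the example input is invented.

```python
import numpy as np

# A minimal feed-forward neural net: input nodes, one hidden layer of
# weighted sums passed through a nonlinearity, and an output node.
# The weights are random stand-ins for values a real net would learn.

rng = np.random.default_rng(0)

n_inputs, n_hidden, n_outputs = 3, 4, 1
W1 = rng.normal(size=(n_inputs, n_hidden))   # input  -> hidden weights
W2 = rng.normal(size=(n_hidden, n_outputs))  # hidden -> output weights

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def forward(x):
    hidden = sigmoid(x @ W1)      # the "hidden layer" of combined arithmetic
    return sigmoid(hidden @ W2)   # output node

x = np.array([0.2, -1.0, 0.5])    # example input (say, three stock indicators)
print(forward(x))

# Even this toy net's output is an opaque blend of 16 weights; scale that to
# billions of weights plus feedback loops, and prediction by inspection is hopeless.
```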

Another unpredictability mechanism is the fact that not only can AIs write software very well (DeepMind’s AlphaCode outperformed roughly 47% of human competitors in 2022 programming contests), they can also rewrite their own software. So, blending the unpredictable nature of neural nets, the clever specification gaming capabilities that AIs have demonstrated, and their ability to rewrite their own code, we ultimately don’t really know how an AGI is going to evolve or what it might do.
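To make the “rewrites its own software” point concrete, here is a deliberately tame sketch of self-modifying code: a script that reads its own source, changes one of its own parameters, and saves the mutated copy. The file name, parameter, and mutation rule are all invented for illustration; real systems generate far more sophisticated changes, but the principle is the same: the program’s behavior is no longer fixed by its original author.

```python
# Tame sketch of self-modifying code: the script reads its own source,
# rewrites one of its own parameters, and saves the mutated copy.
# (File name and mutation rule are invented for illustration.)

import re

THRESHOLD = 5  # a parameter the program is allowed to rewrite

def mutate(source: str) -> str:
    """Return a copy of the source with THRESHOLD incremented by one."""
    current = int(re.search(r"THRESHOLD = (\d+)", source).group(1))
    return source.replace(f"THRESHOLD = {current}", f"THRESHOLD = {current + 1}", 1)

if __name__ == "__main__":
    with open(__file__, encoding="utf-8") as f:
        source = f.read()

    with open("next_generation.py", "w", encoding="utf-8") as f:
        f.write(mutate(source))  # the "next generation" of the program

    print("current THRESHOLD:", THRESHOLD)
    print("wrote next_generation.py with THRESHOLD =", THRESHOLD + 1)
```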

The last piece of the Venn diagram and acronym jumble is the idea of ASI – Artificial Superintelligence. This is what will happen when an AGI takes over its own evolution and “improves” itself at an exponential rate, rapidly becoming far more intelligent than humans. At that point, speculate the doomsayers, an ASI may treat humans the way we treat microorganisms – with complete disregard for our well-being and survival.

With these kinds of ideas being bandied about, it is no wonder that the media hypes Artificial Intelligence. In the next post, I’ll examine the hype and try to make sense of some of the pesky assumptions behind it.

NEXT: How to Survive an AI Apocalypse – Part 3: How Real is the Hype?

How to Survive an AI Apocalypse – Part 1: Intro

It has certainly been a while since I wrote a blog post, much to the consternation of many of my Universe-Solved! Forum members. A few years of upheaval – Covid, career pivots, a new home, family matters, writing a new book – made it easy not to find the time. That doesn’t mean the brain hasn’t been working, though.

Emerging from the dust of the early ’20s was an old idea, dating back to 1956 but renewed and invigorated by Moore’s Law – Artificial Intelligence. Suddenly in the mainstream of the public psyche, courtesy mostly of ChatGPT, social media was abuzz with both promising new opportunities and fears of Skynet, grey goo, and other apocalyptic scenarios fueled by AI run amok. I had the pleasure of being asked to contribute to last year’s Contact in the Desert conference and chose as one of my topics “How to Survive an AI Apocalypse.” It’s a little tangential to my usual fare of simulation theory and quantum anomaly explanations, but there turn out to be some very important connections between the concepts.

In this multi-part series, I will give some thought to those AI-run-amok scenarios, examining the nature, history, and assumptions around AI, the recommended alignment protocols, and how it all fits with the simulation model, which is rapidly becoming accepted as a highly likely theory of reality. So let’s get started…

Eliezer Yudkowsky is the founder and head of the Machine Intelligence Research Institute in Berkeley, CA. His view on the future of humanity is rather bleak: “The most likely result of building a superhumanly smart AI, under anything remotely like the current circumstances, is that literally everyone on Earth will die. Not as in ‘maybe possibly some remote chance,’ but as in ‘that is the obvious thing that would happen.’ … If somebody builds a too-powerful AI, under present conditions, I expect that every single member of the human species and all biological life on Earth dies shortly thereafter.”

Or, if you prefer your doomsaying to come from a highly distinguished mainstream scientist, there is always Stephen Hawking: “The development of full artificial intelligence could spell the end of the human race.” At least he said could, right?

Yikes!

How likely is this, really? And can it be mitigated?

I did a bit of research and found the following suggestions for avoiding such a scenario:

  1. Educate yourself – Learn as much as you can about AI technology and its potential implications. Understanding the technology can help you make informed decisions about its use.
  2. Support responsible AI development – Choose to support companies and organizations that prioritize responsible AI development and are committed to ethical principles.
  3. Advocate for regulation – Advocate for regulatory oversight of AI technology to ensure that it is developed and used in a safe and responsible manner.
  4. Encourage transparency – Support efforts to increase transparency in AI development and deployment, so that the public can have a better understanding of how AI is being used and can hold companies accountable for their actions.
  5. Promote diversity and inclusion – Encourage diversity and inclusion in the development of AI technology to ensure that it reflects the needs and values of all people.
  6. Monitor the impact of AI – Stay informed about the impact of AI technology on society, and speak out against any negative consequences that arise.

Mmm, wait a minute – these suggestions were generated by ChatGPT, which is a little like a fish asking a shark which parts of the ocean to stay away from in order to avoid being eaten. Maybe that’s not the best advice. Let’s dig a little deeper and attempt to understand it…

NEXT: How to Survive an AI Apocalypse – Part 2: Understanding the Enemy
