How to Survive an AI Apocalypse – Part 10: If You Can’t Beat ’em, Join ’em

PREVIOUS: How to Survive an AI Apocalypse – Part 9: The Stabilization Effect

In this marathon blog series on AI, we’ve explored some of the existential dangers of ASI (Artificial Superintelligence), as well as some of the potential mitigating factors. It seems to me that there are three ways to deal with the coming upheaval that the technology promises to bring…

Part 8 was for you Neos out there, while Part 9 was for you Bobbys. But there is another possibility – merge with the beast. In fact, between wearable tech, augmented reality, genetic engineering, our digital identities, and Brain Computer Interfaces (BCIs), I would say the merge is very much already underway. Let’s take a closer look at the technology with the potential for the most impact – BCIs. They come in two forms – non-invasive and invasive.

NON-INVASIVE BCIs

Non-invasive transducers merely measure electrical activity generated by various regions of the brain; mapping the waveform data to known patterns is what makes applications like EEG-based diagnostics and thought-controlled video game interfaces possible.
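To make “mapping the waveform data to known patterns” a little more concrete, here is a minimal sketch (in Python) of the kind of processing a non-invasive BCI might do: estimate the power in a couple of standard EEG frequency bands and map the dominant band to a command. The sampling rate, band edges, and command mapping are all illustrative assumptions, not any particular device’s algorithm.

```python
import numpy as np

# Illustrative only: a toy pipeline that classifies a window of EEG-like
# samples by its dominant frequency band, then maps that band to a command.
FS = 256  # assumed sampling rate in Hz
BANDS = {"alpha (8-12 Hz)": (8, 12), "beta (13-30 Hz)": (13, 30)}
COMMANDS = {"alpha (8-12 Hz)": "relax / no-op", "beta (13-30 Hz)": "move cursor"}

def band_powers(signal: np.ndarray, fs: int = FS) -> dict:
    """Estimate power per band from the signal's FFT magnitude spectrum."""
    freqs = np.fft.rfftfreq(len(signal), d=1.0 / fs)
    spectrum = np.abs(np.fft.rfft(signal)) ** 2
    return {
        name: spectrum[(freqs >= lo) & (freqs < hi)].sum()
        for name, (lo, hi) in BANDS.items()
    }

def classify(signal: np.ndarray) -> str:
    powers = band_powers(signal)
    dominant = max(powers, key=powers.get)
    return COMMANDS[dominant]

# Fake one second of "EEG": a 20 Hz (beta) oscillation buried in noise.
t = np.arange(FS) / FS
fake_eeg = np.sin(2 * np.pi * 20 * t) + 0.5 * np.random.randn(FS)
print(classify(fake_eeg))  # prints "move cursor" (beta band dominates)
```

Real systems add artifact rejection, per-user calibration, and trained classifiers, but the basic shape – signal in, pattern match, command out – is the same.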

INVASIVE BCIs

Invasive BCIs, on the other hand, actually connect directly with tissue and nerve endings. Retinal implants, for example, take visual information from glasses or a camera array and feed it into retinal neurons by electrically stimulating them, resulting in some impression of vision. Other examples include Vagus Nerve Stimulators to help treat epilepsy and depression, and Deep Brain Stimulators to treat conditions like Parkinson’s disease.

The trendiest BCI, though, has to be the Elon Musk creation, Neuralink. A device with thousands of neural connections is implanted on the surface of the brain. The initial applications targeted primarily people with paralysis, who could benefit from being able to “think” motion into their prosthetics. Like the monkey on the right, playing Pong with its mind.

But future possibilities include the ability to save memories to the cloud, replay them on demand, and accelerate learning. I know Kung Fu.

And, as with any technology, it isn’t hard to imagine some of the potential dark sides to its usage. Just ask the Governator.

INTERCONNECTED MINDS

So if brain patterns can be used to control devices, and vice versa, could two brains be connected together and communicate? In 2018, researchers from several universities collaborated on an experiment in which three subjects had their brains loosely interconnected – EEG to read the two senders’ signals, transcranial magnetic stimulation to deliver them to the receiver – as they collectively played a game of Tetris. Two of the subjects told the third, via only their thoughts, which direction to rotate a Tetris piece to fit into a row that the third could not see. Accuracy was 81.25% (versus the 50% expected by chance).
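For a sense of how far 81.25% sits from chance, here is a quick back-of-the-envelope check. It assumes, purely for illustration, that the figure came from 13 correct calls out of 16 binary decisions (0.8125 = 13/16); this is not the study’s actual statistical analysis.

```python
from math import comb

# Illustration only: if 81.25% came from 13 correct out of 16 yes/no decisions
# (an assumption for the sake of the calculation, not a claim about the study's
# design), how likely is a result at least that good by pure guessing?
n, k, p = 16, 13, 0.5
p_at_least_k = sum(comb(n, i) * p**i * (1 - p)**(n - i) for i in range(k, n + 1))
print(f"P(>= {k}/{n} correct by chance) = {p_at_least_k:.4f}")  # ~0.0106
```

In other words, roughly a one-in-a-hundred shot of doing that well by luck – enough to suggest the thoughts really were getting through.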

Eventually, we should be able to connect all or a large portion of the minds of humanity to each other and/or to machines, creating a sort of global intelligence.

This is the dream of the transhumanists, the H+ crowd, and the proponents of the so-called technological singularity. Evolve your body to not be human anymore. In such a case, would we even need to worry about an AI Apocalypse? Perhaps not, if we were to form a singleton with ASI, encompassing all of the information on the planet. But how likely will that be? People on 90th St can’t even get along with people on 91st St. The odds that all of the transhumanists on the planet will merge with the same AI are pretty much zero. Which implies competing superhumans. Just great.

THE IMMORTALITY ILLUSION

In fact, the entire premise of the transhumanists is flawed. The idea is that with a combination of modified genetics and the ability to “upload your consciousness” to the cloud, you can “live long enough to live forever.” Repeating a portion of my blog “Transhumanism and Immortality – 21st Century Snake Oil”: the problem with this mentality is that we are already immortal! And there is a reason why our corporeal bodies die – simply put, we live our lives in this reality in order to evolve our consciousness, one life instance at a time. If we didn’t die, our consciousness evolution would come to a grinding halt, as we spent the rest of eternity playing solitaire. The “Universe,” or “All that there is,” evolves through our collective individuated consciousnesses. Therefore, deciding to be physically immortal could be the end of the evolution of the Universe itself.

Underlying this unfortunate direction of Transhumanism is the belief (and, I can’t stress this enough, it is ONLY that – a belief) that it’s lights out when we die. Follow the train of logic: if this were true, consciousness only emerges from brain function, we have zero free will, and the entire universe is a deterministic machine. So why even bother with Transhumanism if everything is predetermined? It is logically inconsistent. Material Realism, the denial of the duality of mind and body, is a dogmatic religion. Its more vocal adherents (just head on over to JREF to find them) are as ignorant of the evidence and as blind to what true science is as the most bass-ackward fundamentalist religious zealots.

The following diagram demonstrates the inefficiency of artificially extending life, and the extreme inefficiency of uploading consciousness.

In fact, you will not upload. At best you will have an apparent clone in the cloud, which will diverge from your life path. It will not have free will, nor will it be self-aware.

When listening to the transhumanists get excited about such things, I am reminded of the words of the great Dr. Ian Malcolm from Jurassic Park…

In summary, this humble blogger is fine with the idea of enhancing human functions with technology, but I have no illusions that merging with AI will stave off an AI apocalypse; nor will it provide you with immortality.

So where does that leave us? We have explored many of the scenarios in which rapidly advancing AI can have a negative impact on humanity. We’ve looked at the possibilities of merging with it, and at the strange stabilization effect that seems to permeate our reality. In the next and final part of this series, we will take a systems view, put it all together, and see what the future holds.

NEXT: How to Survive an AI Apocalypse – Part 11: Conclusion

How to Survive an AI Apocalypse – Part 8: Fighting Back

PREVIOUS: How to Survive an AI Apocalypse – Part 7: Elimination

In previous parts of this blog series on AI and Artificial Superintelligence (ASI), we’ve examined several scenarios where AI can potentially impact humanity, from the mild (e.g. cultural demise) to the severe (elimination of humanity). This part will examine some of the ways we might be able to avoid the existential threat.

In Part 1, I listed ChatGPT’s own suggestions for avoiding an AI Apocalypse, and joked about its possible motivations. Of course, ChatGPT has not even come close to evolving to the point where it might intentionally deceive us – we probably don’t have to worry about such motivations until AGI at least. Its advice is actually pretty solid, repeated here:

  1. Educate yourself – Learn as much as you can about AI technology and its potential implications. Understanding the technology can help you make informed decisions about its use.
  2. Support responsible AI development – Choose to support companies and organizations that prioritize responsible AI development and are committed to ethical principles.
  3. Advocate for regulation – Advocate for regulatory oversight of AI technology to ensure that it is developed and used in a safe and responsible manner.
  4. Encourage transparency – Support efforts to increase transparency in AI development and deployment, so that the public can have a better understanding of how AI is being used and can hold companies accountable for their actions.
  5. Promote diversity and inclusion – Encourage diversity and inclusion in the development of AI technology to ensure that it reflects the needs and values of all people.
  6. Monitor the impact of AI – Stay informed about the impact of AI technology on society, and speak out against any negative consequences that arise.

Knowledge, awareness, support, and advocacy are great and all, but let’s see what active options we have to mitigate the existential threat of AI. Here are some ideas…

AI ALIGNMENT

Items 2 & 3 above are partially embodied in the concept of AI Alignment, a very hot research field these days. The goal of AI Alignment is to ensure that AI behavior is aligned with human objectives. This isn’t as easy as it sounds, considering the unpredictable Instrumental Goals that an AI can develop, as we discussed in Part 7. There exist myriad alignment organizations, including non-profits, divisions of technology companies, and government agencies.

Examples include The Alignment Research Center, Machine Intelligence Research Institute, Future of Humanity Institute at Oxford, Future of Life Institute, The Center for Human-Compatible Artificial Intelligence at UC Berkeley, the American Government’s Cybersecurity & Infrastructure Security Agency, and Anthropic.

AISafety.world is a comprehensive map of AI safety research organizations, podcasts, blogs, etc. Although it is organized as a map, you can still get lost in the quantity and complexity of groups that are putting their considerable human intelligence into solving the problem. That alone is concerning.

What can I do? Be aware of and support AI Alignment efforts

VALUE PROGRAMMING

Just as you might read carefully selected books to your children to instill good values, you can do the same with AI. The neural nets will learn from everything they ingest and modify their behavior accordingly. As AIs get closer to AGI, this will become especially applicable. So… introduce them to works that would instill empathy toward humanity. Anyone can do this, even with ChatGPT.
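If you want to see what that might look like in practice, here is a heavily hedged sketch: assembling a tiny dataset of prompt/response pairs that consistently model empathy, in the kind of JSONL format many fine-tuning pipelines accept. The file name and example texts are hypothetical placeholders, not a recipe from any particular vendor.

```python
import json

# Hypothetical example: build a tiny instruction-tuning dataset whose examples
# consistently model empathy toward humans. The texts and file name are
# placeholders; real value-alignment datasets are far larger and more careful.
examples = [
    {
        "prompt": "A user says they lost their job today.",
        "response": "I'm sorry -- that's a hard day. Would it help to talk "
                    "through next steps, or would you rather just vent?",
    },
    {
        "prompt": "Summarize the moral of 'The Grapes of Wrath'.",
        "response": "Among other things: dignity and solidarity matter most "
                    "when people are at their most vulnerable.",
    },
]

with open("empathy_finetune.jsonl", "w") as f:
    for ex in examples:
        f.write(json.dumps(ex) + "\n")
```

The point is less the format than the curation: whatever you feed the model is, in effect, the bedtime story it learns its values from.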

IMPLEMENT QUALITY PRACTICES

If you are implementing AI solutions, you may have a bigger responsibility than you thought. Whether you are simply integrating GenAI into your business, enhancing a solution with Retrieval-Augmented Generation data, or building the next generation of advanced neural nets and deep learning systems, you should treat the software with at least the same gravitas as life-critical healthcare software. Quality practices deserve attention in all areas of an organization:

HR: Hiring practices, background checks

Engineering: Software version control, agile quality practices

Compliance, Legal: Oversight, audits, monitoring

CONTROL TECHNIQUES

Bostrom and other AI existential risk experts have identified a number of control techniques that might help to prevent an AGI from going rogue:

  • Boxing – Limiting access to the external world. Think air-gapped networks, or systems that have been disconnected from all external networks. The challenge here might be something like a clever AGI utilizing social engineering to get itself connected… AGI, to lab worker: “I’ll transfer $1 million into your account if you just connect me to that router over there.” Lab worker: “That would be awesome, but I could lose my job and become blacklisted from any further opportunities in my field, and $1 million isn’t enough to live on for the rest of my life.” AGI: “How about $10 million?” Lab worker: <plugs in router>
  • Stunting – Limiting its capabilities in some way, for instance, ensuring that its CPU is clocked at a reduced speed. Again, a clever AGI could work around this by transferring cognitive function to other, higher-speed systems now accessible on the network thanks to the social engineering mentioned above. Remember, even simple NAIs have become very adept at specification gaming.
  • Tripwires – Separate processes that monitor the progression of the AI. Think of a fake network port, for example: when the AI attempts to use it against its programming, the tripwire fires and operators can take steps to shut the system down (a minimal sketch of this idea follows this list). Of course, a very clever AGI, aware of the existence of the tripwire, could fake its capabilities and find another way around it.
  • Motivations – Programmatic (Terminal) goals that attempt to keep the AGI within a safe set of boundaries. However, recall from Part 7 that the Instrumental goals an AI establishes for itself can be generated unpredictably and can lead to outcomes not aligned with humanity’s interests. A recent study shows that once programmed to be deceptive, LLMs can identify safety mechanisms like tripwires and figure out how to hide their behavior to avoid detection.
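As promised, here is a minimal sketch of the tripwire idea – a “honeypot” port that the monitored system has no legitimate reason to touch, with a listener that raises the alarm the moment anything connects. The port number and the “shutdown” action are placeholders; a real deployment would be vastly more sophisticated.

```python
import socket

# Toy tripwire: listen on a port the monitored system should never use.
# Any connection attempt is treated as a violation. The port number and the
# "shutdown" step below are placeholders for illustration only.
TRIPWIRE_PORT = 9999

def run_tripwire(host: str = "127.0.0.1", port: int = TRIPWIRE_PORT) -> None:
    with socket.socket(socket.AF_INET, socket.SOCK_STREAM) as server:
        server.setsockopt(socket.SOL_SOCKET, socket.SO_REUSEADDR, 1)
        server.bind((host, port))
        server.listen()
        print(f"Tripwire armed on {host}:{port}")
        conn, addr = server.accept()  # blocks until something touches the port
        with conn:
            print(f"TRIPWIRE: unexpected connection from {addr} -- "
                  "notifying operators / initiating shutdown")

if __name__ == "__main__":
    run_tripwire()
```

And, as the bullet above notes, an AGI that knows the tripwire exists can simply avoid it – the sketch illustrates the mechanism, not a guarantee.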

Unfortunately, none of these control techniques are foolproof. And none of them would do anything about a bad actor who is developing AI for nefarious purposes. So there is always that.

BE NICE TO YOUR NEW DIGITAL OVERLORDS

AIs are designed to respond, or to learn to respond, to human emotions. Some experts think that if we treat an AI aggressively, it will trigger aggressive programming in the AI itself. For this reason, it might be best to avoid the kind of human-to-robot behavior shown at the right. As AGI becomes ASI, who can predict its emotions? And it will have no problem finding out where hockey-stick guy lives.

One blogger suggests the “Cooperator’s Dilemma”: “Should I help the robots take over just in case they take over the world anyways, so they might spare me as a robot sympathizer?”

So even with ChatGPT, it might be worth being polite.

GET OFF THE GRID

If an AGI goes rogue, it might not care as much about humans who are disconnected as about the ones who are effectively competing with it for resources. Maybe, if you are completely off the grid, you will be left alone. Until it needs your land to create more paperclips.

If this post has left you feeling hopeless, I am truly sorry. But there may be some good news. In Part 9.

NEXT: How to Survive an AI Apocalypse – Part 9: The Stabilization Effect

How to Survive an AI Apocalypse – Part 7: Elimination

PREVIOUS: How to Survive an AI Apocalypse – Part 6: Cultural Demise

At this point, we’ve covered a plethora (my favorite word from high school) of AI-Run-Amok scenarios – Enslavement, Job Elimination, Cultural Demise, Nanotech Weaponization… it’s been a fun ride, but we are only just getting to the pièce de résistance: The Elimination of Humanity.

Now, lest you think this is just a lot of Hollywood hype or doomsayer nonsense, let me remind you that no lesser personages than Stephen Hawking and Elon Musk, and no lesser authorities than OpenAI CEO Sam Altman and Oxford philosopher Nick Bostrom, have sounded the alarm: “Risk of Extinction.” Bostrom’s scenario from Part 3 of this series:

But, can’t we just program in Ethics?

Sounds good, in principle. 

There are two types of goals that an AI will respond to: Terminal and Instrumental. Terminal (sometimes called “final” or “absolute”) goals are the ultimate objectives programmed into the AI, such as “solve the Riemann hypothesis” or “build a million paper clips.” Ethical objectives would be the supplementary terminal goals that we might try to give to AIs to prevent Elimination, or even some less catastrophic scenario.

Instrumental goals are intermediary objectives that might be needed to fulfill the terminal goal, such as “learn to impersonate a human” or “acquire financial resources.” Intelligent beings, both human and AI, will naturally develop and pursue instrumental goals to achieve their objectives, a behavior known as Instrumental Convergence. The catch, however, is that the instrumental goals are unpredictable and often seemingly uncorrelated with the terminal goal. This is part of the reason why AIs are so good at specification gaming. It is also the main reason that people like Bostrom fear ASI. He developed the “paperclip maximizer” thought experiment. What follows is his initial thought experiment, plus my own spin on what might happen as we attempt to program in ethics by setting some supplementary ethical terminal goals…

We attempt to program in ethics…

But the AI doesn’t know how to make paperclips without some harm to a human. Theoretically, even manufacturing a single paperclip could have a negative impact on humanity. It’s only a matter of how much. So we revise the first terminal goal…

Must have been how it interpreted the word “clearly.” You’re starting to see the problem. Let’s try to refine the terminal goals one more time…

In fact, there are many things that can go wrong and lead an ASI to the nightmare scenario…

  • The ASI might inadvertently overwrite its own rule set during a reboot due to a bug in the system, and destroy humanity
  • Or, a competitor ignores the ethics ruleset in order to make paperclips faster, thereby destroying humanity
  • Or, a hacker breaks in and messes with the ethics programming, resulting in the destruction of humanity
  • Or, the ASI ingests some badly labelled data, leading to “model drift”, and destroying humanity

I’m sure you can think of many more. But, the huge existential problem is…

YOU ONLY GET ONE CHANCE! 

Mess up the terminal goals and, with the AI’s natural neural-network-based unpredictability, it could be lights out. Am I oversimplifying? Perhaps, but simply considering the possibilities should raise the alarm. Let’s pull Albert in to weigh in on the question “Can’t we just program in ethics?”

And for completeness, let’s add Total Elimination on our AI-Run-Amok chart…

The severity of that scenario is obviously the max. And, unfortunately, with what we know about instrumental convergence, unpredictability, and specification gaming, it is difficult not to see that apocalyptic scenario as quite likely. Also, note that in the hands of an AI seeking weapons, foglets become much more dangerous than they were under human control.

Now, before you start selling off your assets, packing your bug-out bag, and moving to Tristan da Cunha, please read my next blog installment on how we can fight back through various mitigation strategies.

NEXT: How to Survive an AI Apocalypse – Part 8: Fighting Back

How to Survive an AI Apocalypse – Part 4: AI Run Amok Scenarios

PREVIOUS: How to Survive an AI Apocalypse – Part 3: How Real is the Hype?

In Part 3 of this series on Surviving an AI Apocalypse, we examined some of the elements of AI-related publicity and propaganda that pervade the media these days and considered how likely they are. The conclusion was that while much has been overstated, there is still a real existential danger in the current path toward creating AGI, Artificial General Intelligence. In this and some subsequent parts of the series, we will look at several “AI Run Amok” scenarios and outcomes and categorize them according to likelihood and severity.

NANOTECH FOGLETS

Nanotech, or the technology of things at the scale of 10⁻⁹ meters (a billionth of a meter), was originally envisioned by physicist Richard Feynman and popularized by K. Eric Drexler in his book Engines of Creation. It has the potential to accomplish amazing things (think: solve global warming or render all nukes inert) but also, like any great technology, to lead to catastrophic outcomes.

Computer scientist J. Storrs Hall upped the ante on nanotech’s potential with the idea of “utility fog,” based on huge swarms of nanobots under networked, AI-programmatic control.

With such a technology, one could conceivably do cool and useful things like press a button and convert your living room into a bedroom at night, as all of the nanobots reconfigure themselves into beds and nightstands, and then back to a living room in the morning.

And of course, like any new tech, utility fog could be weaponized – carrying toxic agents, forming explosives, generating critical nuclear reactions, blocking out the sun from an entire country, etc.  Limited only by imagination. Where does this sit in our Likelihood/Severity space?

I put it in the lower right because, while the potential consequences of foglets in the hands of a bad actor could be severe, it’s probably way too soon to worry about, since such technology is quite far off. In addition, an attack could be defeated via a hack or a counter-attack and, as with the cybersecurity battle, it will almost always be won by the entity with the deeper pockets, which will presumably be the world government by the time such tech is available.

GREY GOO

A special case of foglet danger is the concept of grey goo, whereby the nanobots are programmed with two simple instructions:

  • Consume what you can of your environment
  • Continuously self-replicate and give your replicants the same instructions

The result would be a slow liquefaction of the entire world.
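To see why two simple instructions are enough to be terrifying, here is a toy calculation of unchecked self-replication. The mass-per-bot figure and the doubling behavior are invented numbers, purely to show the shape of the curve.

```python
# Toy model of unchecked self-replication (all numbers are invented for
# illustration): each bot converts a fixed mass per cycle, then doubles.
mass_per_bot_kg = 1e-15      # assumed mass consumed per bot per cycle
earth_mass_kg = 5.97e24
bots, consumed, cycles = 1, 0.0, 0

while consumed < earth_mass_kg:
    consumed += bots * mass_per_bot_kg
    bots *= 2                # every bot replicates once per cycle
    cycles += 1

print(f"Cycles to consume an Earth's worth of mass: {cycles}")  # 133 with these made-up numbers
```

The specific numbers don’t matter; the point of geometric growth is that any counter-measure has to arrive within a window measured in doubling times.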

Let’s add this to our AI Run Amok chart…

I put it in the same relative space as the foglet danger in general, even less likely because the counter-attack could be a pretty simple reprogramming. Note, however, that this assumes that the deployment of such technologies, while AI-based at their core, is being done by humans. In the hands of an ASI, the situation would be completely different, as we will see.

ENSLAVEMENT

Let’s look at one more scenario, most aptly represented by the movie The Matrix, in which AI enslaved humanity to be used, for some odd reason, as a source of energy. Agent Smith, anyone?

There may be other reasons that AI might want to keep us around. But honestly, why bother? Sad to say, but what would an ASI really need us for?

So I put the likelihood very low. And frankly, if we were enslaved, Matrix-style, is the severity that bad? Like Cypher said, “Ignorance is bliss.”

If you’re feeling good about things now, don’t worry, we haven’t gotten to the scary stuff yet. Stay tuned.

In the next post, I’ll look at a scenario near and dear to all of our hearts, and at the top of the Likelihood scale, since it is already underway – Job Elimination.

NEXT: How to Survive an AI Apocalypse – Part 5: Job Elimination

How to Survive an AI Apocalypse – Part 3: How Real is the Hype?

PREVIOUS: How to Survive an AI Apocalypse – Part 2: Understanding the Enemy

In Part 2 of this series on Surviving an AI Apocalypse, we examined the landscape of AI and attempted to make sense of the acronym jungle. In this part, in order to continue to develop our understanding of the beast, we will examine some of the elements of publicity and propaganda that pervade the media these days and consider how likely they are. Once we have examined the logical arguments, Dr. Einstein will be our arbiter of truth. Let’s start with the metaphysical descriptors. Could an AI ever be sentient? Conscious? Self-aware? Could it have free will?

CAN AN AI HAVE CONSCIOUSNESS OR FREE WILL?

Scientists and philosophers can’t even agree on the definition of consciousness and whether or not humans have free will, so how could they possibly come to a conclusion about AIs? Fortunately, yours truly has strong opinions on the matter.

According to philosophical materialists, reality is ultimately deterministic. Therefore, nothing has free will. To these folks, there actually isn’t any point to their professions, since everything is predetermined. Why run an experiment? Why theorize? What will be has already been determined. This superdeterminism is a last-ditch effort for materialists to cling to the idea of an objective reality, because Bell’s Theorem, and all of the experiments done since, have proven one of two things: either (1) there is no objective reality, or (2) there is no free will.

I gave (what I think are) strong arguments for the existence of free will in all conscious entities in both of my books, The Universe-Solved! and Digital Consciousness. And the support for our reality being virtual and our consciousness being separate from the brain is monumental: the observer effect, near-death experiences, out-of-body experiences, simulation arguments, Hoffman’s evolutionary argument against reality, xenoglossy, the placebo effect… I could go on.

To many, consciousness, or the state of being aware of your existence, is simply a matter of complexity. Following this logic, everything has some level of consciousness, including the coaster under your coffee mug. Also known as panpsychism, it’s actually a reasonable idea. Why would there be some arbitrary threshold of complexity beyond which a previously unconscious entity suddenly becomes conscious? It makes much more sense that consciousness is a continuum, or a spectrum, not unlike light, or intelligence. As such, an AI could certainly be considered conscious.

But what do we really mean when we say “conscious?” What we don’t mean is that we simply have sensors that tell some processing system that something is happening inside or outside of us. What we mean is deeper than that – life, a soul, an ability to be self-aware because we want to be, and have the free will to make that choice. AI will never achieve that because it is ultimately deterministic. Some may argue that neural nets are not deterministic, but that is just semantics. For certain, they are not predictable, but only because the system is too complex and adaptive to analyze sufficiently at any exact point in time. Determinism means no free will.

The point is that it really doesn’t matter whether or not you believe that AIs develop “free will” or some breakthrough level of consciousness – what matters is that they are not predictable. Do you agree, Albert?

IS AGI RIGHT AROUND THE CORNER?

This is probably the most contentious question out there. Let’s see how well the predictions have held up over the years.

In 1956, ten of the leading experts in machine intelligence got together for an eight-week project at Dartmouth College to discuss computational systems, natural language processing, neural networks, and other related topics. They coined the term Artificial Intelligence, and so this event is generally considered the birth of the idea. They also made some predictions about when AGI, Artificial General Intelligence, would occur. Their prediction was “20 years away,” a view that has had a lot of staying power. Until only recently.

Historical predictions for AGI:

That’s right: in early 2023, tech entrepreneur and developer Siqi Chen claimed that GPT-5 “will” achieve Artificial General Intelligence (AGI) by the end of 2023. Didn’t happen, and won’t this year either. Much of this hype was due to the dramatic ChatGPT performance that came seemingly out of nowhere in early 2023. As with all things hyped, though, claims are expected to be greatly exaggerated. The ability of an AI to “pass the Turing test” (which is what most people are thinking of) does not equate to AGI – it doesn’t even mean intelligence, in the sense of what humans have. Much more about this later. All of that said, AGI, in the strict sense of being able to do all intelligent tasks that a human can, is probably going to happen soon. Maybe not this year, but maybe within five. What say you, Albert?

IS AI GOING TO BECOME BILLIONS OF TIMES SMARTER THAN HUMANS?

Well, mainstream media certainly seems to think so. Because they confuse intelligence with things that have nothing to do with intelligence.

If processing speed is what makes intelligence, then your smart toaster is far brighter than you are. Ditto for recall accuracy as an intelligence metric: we only retain half of what we learned yesterday, and it degrades exponentially over time. Not so with the toaster. If storage is the metric, cloud storage giant Amazon Web Services would have to be fifty times smarter than we are.

However, the following word cloud captures the complexity behind the kind of intelligence that we have.

That said, processing speed is not to be underestimated, as it is at the root of all that can go wrong. The faster the system, the sooner its actions can go critical. In Nick Bostrom’s book Superintelligence, he references the potential “superpowers” that can be attained by an AGI that is fast enough to become an ASI, Artificial Superintelligence. Intelligence amplification, for example, is where an AI can bootstrap its own intelligence. As it learns to improve its ability to learn, it will develop exponentially. In the movie Her, the Operating System, Samantha, evolved so quickly that it got bored being with one person and overnight began to interact with another 8316 people.

Another superpower is the ability to think far ahead and strategize. Humans can think ahead 10-15 moves in a game of chess, but not in an exhaustive or brute-force manner; rather, by following a few single-threaded sequences. Early chess-playing AIs played differently, doing brute-force calculations of all possible sequences 3 or 4 moves ahead and picking the one that led to the best outcome. Nowadays, AI systems designed for chess can think ahead 20 moves, due mostly to speed improvements in the underlying systems. As this progresses, strategizing will be a skill that AIs can do better than humans.
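For the curious, here is what “brute-force calculations of all possible sequences N moves ahead” looks like in code – a bare-bones, depth-limited minimax search, demonstrated on a trivial take-1-2-or-3-counters game rather than chess. Real engines add alpha-beta pruning, move ordering, and learned evaluation functions; this only shows the exhaustive look-ahead idea.

```python
# Bare-bones depth-limited minimax, demonstrated on a trivial Nim-style game
# (take 1-3 counters; the player who takes the last counter wins).

def legal_moves(counters: int):
    return [m for m in (1, 2, 3) if m <= counters]

def minimax(counters: int, depth: int, maximizing: bool) -> float:
    if counters == 0:
        # The previous player took the last counter and won.
        return -1.0 if maximizing else 1.0
    if depth == 0:
        return 0.0  # search horizon reached: neutral static evaluation
    scores = [minimax(counters - m, depth - 1, not maximizing)
              for m in legal_moves(counters)]
    return max(scores) if maximizing else min(scores)

def best_move(counters: int, depth: int) -> int:
    """Pick the move whose subtree has the best minimax score."""
    return max(legal_moves(counters),
               key=lambda m: minimax(counters - m, depth - 1, False))

print(best_move(10, depth=10))  # -> 2 (leaves 8, a losing position for the opponent)
```

The number of positions explored grows exponentially with depth, which is exactly why raw speed translates so directly into deeper, better strategy.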

Social manipulation (for escaping human control, getting support, and encouraging desired courses of action), coupled with hacking capabilities (for stealing hardware, money, and infrastructure, and for escaping human control), are the next superpowers that an AGI could possess. If you think otherwise, recall from Part 2 of this series that AIs have already been observed gaming specifications, or the rules under which their creators thought they were programmed. They have also unexpectedly developed apparent cognitive skills, like Theory of Mind. So their ability to get around rules to achieve their objectives is already in place.

Bostrom adds technology research and economic productivity as advanced superpowers attainable by an ASI, resulting in the ability to create military forces, surveillance, space transport, or simply generating money to buy influence.

How long might it take for an AGI to evolve into an ASI? Wait But Why blogger Tim Urban posted a provocative image that shows the possibility of it happening extremely quickly. Expert estimates vary widely, from hours (as in Her) to many years.

Bostrom’s fear is that the first AGI that makes the jump will become a singleton, acquiring all resources and control. Think SkyNet. So, Albert, given all of this, will AIs soon become billions of times smarter than humans, as CNN reports?

COULDN’T WE JUST PULL THE PLUG IF THINGS START GOING SOUTH?

Yeah, why not just unplug it? To get a sense of the answer to that question, how would you unplug Google? Google’s infrastructure, shown below, comprises over 100 points of presence in 18 geographical zones. Each one has high-availability technology and redundant power.
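A back-of-the-envelope way to see why that kind of redundancy matters: suppose, purely hypothetically, that an AI keeps full copies of itself at N independent sites, and that a coordinated shutdown manages to take any given site offline with probability p during the same window. The chance of getting all of them at once shrinks geometrically.

```python
# Hypothetical numbers, purely to show how fast redundancy compounds:
# if each of N independent replicas is taken offline with probability p
# during the same window, the chance of eliminating all of them is p**N.
p = 0.9          # assumed per-site takedown probability (optimistic)
for n_sites in (1, 5, 10, 20, 50):
    print(f"{n_sites:3d} sites -> P(all down at once) = {p ** n_sites:.6f}")
# 50 sites -> P(all down at once) ~= 0.005154
```

Even with a 90% success rate per site, fifty replicas drop the odds of a clean kill to about half a percent. That is the “unplugging” problem in one line of arithmetic.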

Theoretically, an advanced AI could spread its brain across any number of nodes worldwide, some of which may be solar powered, others of which may be in control of the power systems. By the time AGI is real, high-availability technology will be far advanced. You see the problem. Thoughts, Dr. Einstein?

Now that we understand the nature of the beast, and have an appreciation for the realistic capabilities of our AI frenemy, we can take a look at a possible apocalyptic scenario, courtesy of Nick Bostrom’s book, Superintelligence. Below can be seen a possible sequence of events that lead an AGI to essentially take over the world. I recommend reading the book for the details. Bostrom is a brilliant guy, and also the one who authored The Simulation Argument, which has gotten all manner of scientists, mathematicians, and philosophers in a tizzy over its logic and implications, so it is worth taking seriously.

And think of some of the technologies that we’ve developed that facilitate an operation like this… cloud computing, drones, digital financial services, social media. It all plays very well. In the next post, we will begin to examine all sorts of AI-run-amok scenarios, and assess the likelihood and severity of each.

NEXT: How to Survive an AI Apocalypse – Part 4: AI Run Amok Scenarios

How to Survive an AI Apocalypse – Part 2: Understanding the Enemy

PREVIOUS: How to Survive an AI Apocalypse – Part 1: Intro

As I mentioned in the first part of this series, in order to make any kind of prediction about the future of AI, we must understand what Artificial Intelligence means. Unfortunately, there is so much confusing information out there. LLMs, GPTs, NAIs, AGIs, machine learning – what does it all mean? One expert says AGI will be here by the end of the year; another says it will never come.

Here is a simplified Venn diagram that might help to make some sense out of the landscape…

AIs are all computer programs but, while it might be obvious, not all computer programs are AIs. AI refers to programs that emulate human thinking and behavior. So, while your calculator or smart toaster might be doing some limited thinking, it isn’t really trying to be human; it is simply performing a task. AIs are generally considered to fall into two categories – NAI (Narrow AI) and AGI (Artificial General Intelligence).

NAIs are the ones we are all familiar with, and they are typically loosely categorized further: NLP (Natural Language Processing, like Siri and Alexa), Robotics, Machine Learning (like how Spotify and Netflix learn your tastes and offer suggestions), Deep Learning, and LLMs (Large Language Models). Deep Learning systems emulate human neural networks and can complete tasks with poorly defined data and little human guidance; an example would be AlphaGo. LLMs are neural networks with many parameters (often billions) that are trained on large sets of unlabeled text using self-supervised learning. Generative Pre-trained Transformers (GPTs) are a subset of LLMs and are able to generate novel human-like text, images, or even videos. ChatGPT, DALL-E, and Midjourney are examples of GPTs. The following pictures are examples of imagery created by Midjourney for my upcoming book, “Level 5.”
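Incidentally, “trained on large sets of unlabeled text using self-supervised learning” is less mysterious than it sounds: the labels are manufactured from the text itself, by treating each next word (or token) as the prediction target for the words that precede it. A toy sketch of that data-preparation step:

```python
# Toy illustration of self-supervised "next-token" training data: the raw
# text supplies its own labels, so no human annotation is needed.
text = "the quick brown fox jumps over the lazy dog"
tokens = text.split()  # real LLMs use subword tokenizers, not whitespace

context_window = 4
training_pairs = [
    (tokens[max(0, i - context_window):i], tokens[i])   # (context, target)
    for i in range(1, len(tokens))
]

for context, target in training_pairs[:3]:
    print(f"context={context!r} -> predict {target!r}")
# context=['the'] -> predict 'quick'
# context=['the', 'quick'] -> predict 'brown'
# context=['the', 'quick', 'brown'] -> predict 'fox'
```

A model tuned to minimize its prediction error over billions of such pairs is what sits underneath the chat interface.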

AGIs are the ones we need to worry about, because they have the capacity to act like a human without really being human. Imagine giving human intelligence to an entity that has (A) no implicit sense of morality or values (at least none that would make any sense to us), and (B) a completely unpredictable nature. What might happen?

Well, here’s an example…

Oh, that would never happen, right? Read on…

There are thousands of examples of AIs “thinking” creatively – more creatively in fact than their creators ever imagined. Pages and pages of specification gaming examples have been logged. These are cases where the AI “gets around” the programming limitations that were imposed by the creators of the system. A small sample set is shown below:

Another example of the spontaneous emergence of intelligence involves what are known as Theory of Mind tasks. These are cognitive developments in children that reflect the understanding of other people’s mental processes. As the research in the adjacent figure demonstrates, various GPTs have unexpectedly developed such capabilities; in fact, what typically takes humans 9 years to learn has taken the AIs only 3.

These unexpected spontaneous bursts of apparent intelligence are interesting, but as we will see, they aren’t really intelligence per se. Not that it matters, if what you are worried about are the doomsday scenarios. The mere fact that they are unpredictable or non-deterministic is exactly what is frightening. So how does that happen?

There are multiple mechanisms behind these spontaneous changes in intelligence. One is the neural net. Neural nets, while ultimately deterministic deep down, are “apparently” non-deterministic because they are not based on explicit programming rules. If sufficiently complex, and with feedback, they are impossible to predict, at least by humans.

As shown, they consist of some input nodes and output nodes, but contain hidden layers of combinatorial arithmetic operations, which makes them nearly impossible to predict. I programmed neural nets many years ago, in an attempt to outsmart the stock market. I gave up when they didn’t do what I wanted and moved on to other ideas (I’m still searching).
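For anyone who hasn’t seen one up close, here is a minimal sketch of those “hidden layers of combinatorial arithmetic operations,” with made-up weights: two inputs, one hidden layer, one output. The opacity comes from stacking millions or billions of these weights, each nudged by training, so that no human can trace why a particular output emerged.

```python
import numpy as np

# A minimal feedforward net with one hidden layer and made-up weights.
# Nothing here is learned; the point is just the shape of the computation
# that gets repeated across millions or billions of weights in real systems.
rng = np.random.default_rng(0)
W1 = rng.normal(size=(2, 3))   # input -> hidden weights
b1 = rng.normal(size=3)
W2 = rng.normal(size=(3, 1))   # hidden -> output weights
b2 = rng.normal(size=1)

def forward(x: np.ndarray) -> np.ndarray:
    hidden = np.tanh(x @ W1 + b1)        # hidden layer: weighted sum + nonlinearity
    return np.tanh(hidden @ W2 + b2)     # output layer

x = np.array([0.5, -1.0])
print(forward(x))

# Nudge one weight slightly and the output shifts in a way that is hard to
# reason about without rerunning the whole computation -- scaled up, this is
# why trained nets look "apparently" non-deterministic to a human observer.
W1[0, 0] += 0.01
print(forward(x))
```

My stock-market nets from years ago were exactly this, just with more layers and worse luck.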

Another unpredictability mechanism is the fact that not only can AIs write software very well (DeepMind’s AlphaCode outperformed 47% of all human developers in 2022), they can also rewrite their own software. So, blending the unpredictable nature of neural nets, the clever specification-gaming capabilities that AIs have demonstrated, and their ability to rewrite their own code, we ultimately don’t really know how an AGI is going to evolve or what it might do.

The last piece of the Venn Diagram and acronym jumble is the idea of ASI – Artificial Superintelligence. This is what will happen when AGI takes over its own evolution and “improves” itself at an exponential rate, rapidly becoming far more intelligent than humans. At this point, speculate the doomsayers, ASI may treat humans the way we treat microorganisms – with complete disregard for our well being and survival.

With these kinds of ideas bandied about, it is no wonder that the media hypes Artificial Intelligence. In the next post, I’ll examine the hype and try to make sense of some of the pesky assumptions.

NEXT: How to Survive an AI Apocalypse – Part 3: How Real is the Hype?