How to Survive an AI Apocalypse – Part 11: Conclusion

PREVIOUS: How to Survive an AI Apocalypse – Part 10: If You Can’t Beat ’em, Join ’em

Well, it has been a wild ride – writing and researching this blog series “How to Survive an AI Apocalypse.” Artificial Superintelligence, existential threats, job elimination, nanobot fog, historical bad predictions, Brain-Computer Interfaces, interconnected minds, apocalypse lore, neural nets, specification gaming, predictions, enslavement, cultural demise, alignment practices and controlling the beast, UFOs, quantum mechanics, the true nature of reality, simulation theory and dynamic reality generation, transhumanism, digital immortality…

Where does it all leave us?

I shall attempt to summarize and synthesize the key concepts and drivers that may lead us to extinction, as well as those that may mitigate the specter of extinction and instead lead toward stabilization and perhaps even, an AI utopia. First, the dark side…

DRIVERS TOWARD EXTINCTION

  • Competition – If there were only one source of AI development in the world, it might be possible to evolve it so carefully that disastrous consequences could be avoided. However, as our world is fragmented by country and by company, there will always be competition driving the pace of AI evolution. In the language of the 1950s, countries will be worried about avoiding or closing an “AI gap” with an enemy, and companies will be worried about grabbing market share from other companies. This results in sacrificing caution for speed and results, which inevitably leads to dangerous shortcuts.
  • Self-Hacking/Specification Gaming – All of the existential risk in AI stems from the unpredictability mechanisms described in Part 2: the neural nets driving AI behavior, and the resultant possibility of an AI rewriting its own code. Therefore, as long as AI architecture is based on the highly complex neural net construct, we will not be able to avoid this apparent nondeterminism. More to the point, it is difficult to envision any kind of software construct that facilitates effective learning yet is not a highly complex adaptive system.
  • The Orthogonality Thesis – Nick Bostrom’s concept asserts that intelligence and the final goals of an AI are completely independent of each other. The consequence is that mere intelligence cannot be counted on to make decisions that minimize the existential risk to humanity. We can program in as many rules, goals, and values as we want, but we can never be sure that we didn’t miss something (see clear examples in Part 7). Further, the anthropomorphic mistake of assuming that an AI will think like us is our blind spot.
  • Weaponization / Rogue Entities – As with any advanced technology, weaponization is a real possibility. And the danger lies not only in the hands of so-called rogue entities, but also in those of so-called “well meaning” entities (any country’s military complex) claiming that the best defense is having the best offense. As with the nuclear experience, all it takes is a breakdown in communication to unleash the weapon’s power.
  • Sandbox Testing Ineffective – The combined ability of an AI to learn and master social engineering, hide its intentions, and control physical and financial resources makes any kind of sandboxing a temporary stop-gap at best. Imagine, for example, an attempt to “air gap” an AGI to prevent it from taking over resources available on the internet. What lab assistant making $20/hour is going to resist an offer from the AGI to temporarily connect it to the outside network in return for $1 billion in crypto delivered to the lab assistant’s wallet?
  • Only Get 1 Chance – There isn’t a reset button on an AI that gets out of control. So, even if you did the most optimal job at alignment and goal setting, there is ZERO room for error. Microsoft generates 30,000 bugs per month – what are the odds that everyone’s AGI will have zero?

And the mitigating factors…

DRIVERS TOWARD STABILIZATION

  • Anti-Rogue AI Agents – Much like computer viruses and the cybersecurity and anti-virus technology that we developed to fight them, which has been fairly effective, anti-rogue AI agents may be developed that are out there on the lookout for dangerous rogue AGIs, and perhaps programmed to defeat them, stunt them, or at least provide notification that they exist. I don’t see many people talking about this kind of technology yet, but I suspect it will become an important part of the effort to fight off an AI apocalypse. One thing that we have learned from cybersecurity is that the battle between the good guys and the bad guys is fairly lopsided. It is estimated that there are millions of blocked cyberattack attempts daily around the world, and yet we rarely hear of a significant security breach. Even considering possible underreporting of breaches, it is most likely the case that the amount of investment going into cyberdefense far exceeds that going into funding the hacks. If a similar imbalance occurs with AI (and there is ample evidence of significant alignment investment), anti-rogue AI agents may win the battle. And yet, unlike with cybersecurity, it might only take one nefarious hack to kick off the AI apocalypse.
  • Alignment Efforts – I detailed in Part 8 of this series the efforts that are going into AI safety research, controls, value programming, and the general topic of addressing AI existential risk. And while these efforts may never be 100% foolproof, they are certainly better than nothing, and will most likely contribute to at least the delay of portentous ASI.
  • The Stabilization Effect – The arguments behind the Stabilization Effect presented in Part 9 may be difficult for some to swallow, although I submit that the more you think and investigate the topics therein, the easier it will become to accept. And frankly, this is probably our best chance at survival. Unfortunately, there isn’t anything anyone can do about it – either it’s a thing or it isn’t.

But if it is a thing, as I suspect, then should ASI go apocalyptic, the Universal Consciousness System may reset our reality so that our consciousnesses continue to have a place to learn and evolve. And then, depending on whether or not our memories are erased, either:

It will be the ultimate Mandela effect.

Or, we will simply never know.

How to Survive an AI Apocalypse – Part 8: Fighting Back

PREVIOUS: How to Survive an AI Apocalypse – Part 7: Elimination

In previous parts of this blog series on AI and Artificial Superintelligence (ASI), we’ve examined several scenarios where AI can potentially impact humanity, from the mild (e.g. cultural demise) to the severe (elimination of humanity). This part will examine some of the ways we might be able to avoid the existential threat.

In Part 1, I listed ChatGPT’s own suggestions for avoiding an AI Apocalypse, and joked about its possible motivations. Of course, ChatGPT has not even come close to evolving to the point where it might intentionally deceive us – we probably don’t have to worry about such motivations until AGI at least. Its advice is actually pretty solid, repeated here:

  1. Educate yourself – Learn as much as you can about AI technology and its potential implications. Understanding the technology can help you make informed decisions about its use.
  2. Support responsible AI development – Choose to support companies and organizations that prioritize responsible AI development and are committed to ethical principles.
  3. Advocate for regulation – Advocate for regulatory oversight of AI technology to ensure that it is developed and used in a safe and responsible manner.
  4. Encourage transparency – Support efforts to increase transparency in AI development and deployment, so that the public can have a better understanding of how AI is being used and can hold companies accountable for their actions.
  5. Promote diversity and inclusion – Encourage diversity and inclusion in the development of AI technology to ensure that it reflects the needs and values of all people.
  6. Monitor the impact of AI – Stay informed about the impact of AI technology on society, and speak out against any negative consequences that arise.

Knowledge, awareness, support, and advocacy are great and all, but let’s see what active options we have to mitigate the existential threat of AI. Here are some ideas…

AI ALIGNMENT

Items 2 & 3 above are partially embodied in the concept of AI Alignment, a very hot research field these days. The goal of AI Alignment is to ensure that AI behavior is aligned with human objectives. This isn’t as easy as it sounds, considering the unpredictable Instrumental Goals that an AI can develop, as we discussed in Part 7. There exist myriad alignment organizations, including non-profits, divisions of technology companies, and government agencies.

Examples include The Alignment Research Center, Machine Intelligence Research Institute, Future of Humanity Institute at Oxford, Future of Life Institute, The Center for Human-Compatible Artificial Intelligence at UC Berkeley, the American Government’s Cybersecurity & Infrastructure Security Agency, and Anthropic.

AISafety.world is a comprehensive map of AI safety research organizations, podcasts, blogs, etc. Although it is organized as a map, you can still get lost in the quantity and complexity of groups that are putting their considerable human intelligence into solving the problem. That alone is concerning.

What can I do? Be aware of and support AI Alignment efforts

VALUE PROGRAMMING

Just as you might read carefully selected books to your children to instill good values, you can do the same with AI. The neural nets will learn from everything that they ingest and modify their behavior accordingly. As AIs get closer to AGI, this will become especially applicable. So… introduce them to works that would instill empathy toward humanity. Anyone can do this, even with ChatGPT.
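To make this concrete, here is a minimal sketch of the conversational version of the idea, using the OpenAI Python client. The model name and the passage are my own placeholders, and a single chat session doesn’t permanently retrain anything, but it shows the mechanics of deliberately exposing an AI to value-laden material and asking it to reflect.

```python
# A minimal sketch of interactive "value programming" using the OpenAI Python client.
# The model name and the passage are placeholders -- substitute your own.
from openai import OpenAI

client = OpenAI()  # assumes OPENAI_API_KEY is set in the environment

empathy_passage = (
    "To be truly helpful is to understand what another being stands to lose, "
    "and to act so that it loses as little as possible."
)

response = client.chat.completions.create(
    model="gpt-4o-mini",  # placeholder model name
    messages=[
        {"role": "system",
         "content": "Reflect on the values expressed in this passage: " + empathy_passage},
        {"role": "user",
         "content": "How would you apply those values when advising a human?"},
    ],
)
print(response.choices[0].message.content)
```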

IMPLEMENT QUALITY PRACTICES

If you are implementing AI solutions, you may have a bigger responsibility than you thought. Whether you are simply integrating GenAI into your business, enhancing a solution with Retrieval-Augmented Generation data, or building the next generation of advanced neural nets and deep learning systems, you should treat the software with at least the same gravitas as life-critical healthcare software. Good quality practices to pay attention to span all areas of an organization:

HR: Hiring practices, background checks

Engineering: Software version control, agile quality practices

Compliance, Legal: Oversight, audits, monitoring

CONTROL TECHNIQUES

Bostrom and other AI existential risk experts have identified a number of control techniques that might help to prevent an AGI from going rogue:

  • Boxing – Limiting access to the external world. Think air-gapped networks, or systems that have been disconnected from all external networks. The challenge here might be something like having a clever AGI utilize social engineering to get itself connected… AGI, to lab worker: “I’ll transfer $1 million into your account if you just connect me to that router over there.” Lab Worker: “That would be awesome, but I could lose my job and become blacklisted from any further opportunities in my field, and $1 million isn’t enough to live on for the rest of my life.” AGI: “How about $10 million?” Lab worker: <plugs in router>
  • Stunting – Limiting its capabilities in some way, for instance, ensuring that its CPU is clocked at a reduced speed. Again, a clever AGI could work around this by transferring cognitive function to other higher-speed systems now accessible on the network due to the social engineering mentioned above. Remember, even simple NAIs have become very adept at specification gaming.
  • Tripwires – Separate processes that monitor the progression of the AI. Think of a fake network port, for example: when the AI attempts to use it against its programming, the tripwire fires a notification and operators can take steps to shut the AI down (a minimal sketch of such a tripwire follows this list). Of course, a very clever AGI, aware of the existence of the tripwire, could fake its capabilities and find another way around it.
  • Motivations – Programmatic (Terminal) goals that attempt to keep the AGI within a safe set of boundaries. However, recall from Part 7 how Instrumental Goals, which the AI generates unpredictably on its own, can lead to outcomes not aligned with humanity’s interests. A recent study shows that once programmed to be deceptive, LLMs can identify safety mechanisms like tripwires and figure out how to hide their behavior to avoid detection.
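To make the tripwire idea a little more concrete, here is a minimal sketch of one: a decoy network port that nothing legitimate should ever touch, with an alert raised on any connection attempt. The port number and the alert action are placeholders, not a description of any real lab setup.

```python
# A minimal tripwire sketch: listen on a decoy port that the monitored system has
# no legitimate reason to use, and alert on any connection attempt.
import socket
import datetime

DECOY_PORT = 9999  # placeholder port number

def raise_alert(peer):
    # Placeholder action: in practice this might page an operator or trigger a shutdown.
    print(f"[{datetime.datetime.now().isoformat()}] TRIPWIRE: connection attempt from {peer}")

def run_tripwire():
    with socket.socket(socket.AF_INET, socket.SOCK_STREAM) as server:
        server.setsockopt(socket.SOL_SOCKET, socket.SO_REUSEADDR, 1)
        server.bind(("0.0.0.0", DECOY_PORT))
        server.listen()
        while True:
            conn, addr = server.accept()   # blocks until something probes the decoy
            raise_alert(addr)
            conn.close()

if __name__ == "__main__":
    run_tripwire()
```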

Unfortunately, none of these control techniques are foolproof. And none of them would do anything about a bad actor that is developing the AI for nefarious purposes. So there is always that.

BE NICE TO YOUR NEW DIGITAL OVERLORDS

AIs are designed to respond, or to learn to respond, to human emotions. Some experts think that if we treat an AI aggressively, it will trigger aggressive programming in the AI itself. For this reason, it might be best to avoid the kind of human-to-robot behavior shown at the right. As AGI becomes ASI, who can predict its emotions? And they will have no problem finding out where hockey stick guy lives.

One blogger suggests ‘The Cooperators Dilemma’: “Should I help the robots take over just in case they take over the world anyways, so they might spare me as a robot sympathizer?”

So even with ChatGPT, it might be worth being polite.

GET OFF THE GRID

If an AGI goes rogue, it might not care as much about humans that are disconnected as the ones who are effectively competing with them for resources. Maybe, if you are completely off the grid, you will be left alone. Until it needs your land to create more paperclips.

If this post has left you feeling hopeless, I am truly sorry. But there may be some good news. In Part 9.

NEXT: How to Survive an AI Apocalypse – Part 9: The Stabilization Effect

How to Survive an AI Apocalypse – Part 6: Cultural Demise

PREVIOUS: How to Survive an AI Apocalypse – Part 5: Job Elimination

We are already familiar with the negative consequences of smart phones… text neck, higher stress and anxiety levels, addiction, social isolation, interruptions, reduced attention span, loss of family connections…

AI can lead to further cultural demise – loss of traditional skills, erosion of privacy, reduced human interaction, economic disparity. Computer scientist and virtual reality pioneer Jaron Lanier warns of forms of insanity developing – dehumanization due to social media, loss of personal agency, feedback loops that lead to obsession, and algorithms that result in behavior modification, such as narrowing of perspective due to filtered news.

We are also seeing the erosion of human relationships as people find more comfort in communicating with chatbots like Replika (“The AI Companion Who Cares”), which are perfectly tuned to your desires, than with other humans and their messy, inconsistent values. Excessive interaction with such agents has already been shown to lead to reduced interpersonal skills, lack of empathy, escapism, and unreal relationship expectations.

And then there is Harmony.

I’m completely at a loss for words lol.

OK, where does the Demise of Human Culture fit in our growing panoply of AI-Run-Amok scenarios?

I put it right above Job Elimination, because not only is it already underway, it is probably further along than job elimination. 

The good news is that you are almost completely in control of how much cultural degradation AI can have on your own life.

Here are some very practical behavior and lifestyle patterns that can keep cultural demise at bay, at least for yourself:

  • Turn off, throw out, or, at least, reduce reliance on those NLP-based devices that are listening to you – Siri, Alexa, etc. Look things up for yourself, ask your partner what recent movies might be good to watch, set your own timers. This forces you to maintain research skills and just a little bit more interpersonal interaction.
  • Do a Digital Detox once in a while. Maybe every Tuesday, you don’t keep your personal phone anywhere near you. Or start smaller even, like “the phone is shut off during lunch.” Ramp up the detox if it feels good.
  • Read real books. Not that there is anything wrong with Kindle. But real books are tactile, have a feel and a smell, and take up valuable visual space on your bookshelves. They are easier to leaf through (who was this character again?) and, certainly, both real books and ebooks are a huge improvement over the attention-span-sucking tidbits that are so easily consumed like crack on the phone.
  • Make your own art. Buy art that other humans made. Don’t buy AI-generated movies, books, music, artworks – help the demand side of the supply/demand equation keep the value up for human-generated content.
  • Get out in nature. We are still a long way from an AI’s ability to generate the experience that nature gives us. I once took my step-children out on a nature walk (they were like, “why, what’s the point?”) and we sat on a bench and did something radical. Five minutes, nobody says a word, try to silence the voice in your head, don’t think about anything in the past or the future, don’t think about anything at all, just observe. In the end we each shared what we felt and saw. Not saying it changed their lives, but they got the point, and really appreciated the experience. It’s deep – the connection with nature, it’s primitive, and it is eroding fast.
  • Spend time with humans. More family, more friends, more strangers even. Less social media, less games. Exercise that communication and empathy muscle.
  • Make decisions based on instinct and experience and not on what some blog tells you to do.
  • Meditate. That puts you in touch with a reality so much deeper and more real than our apparent waking reality that it is that much further removed from the cyber world.
  • Be mindful. Pay attention to your activities and decisions and ask yourself “is this going to contribute to the erosion of my humanity?” If the answer is yes, it doesn’t mean it’s wrong, it’s just that you are more aware.

OK, next up, the nightmare scenario that you’ve all been waiting for: ELIMINATION! SkyNet, Hal 9000, The Borg.

NEXT: How to Survive an AI Apocalypse – Part 7: Elimination

How to Survive an AI Apocalypse – Part 5: Job Elimination

PREVIOUS: How to Survive an AI Apocalypse – Part 4: AI Run Amok Scenarios

In Part 4 of this series on Surviving an AI Apocalypse, we looked at a few scenarios where AI could get out of control and wreak havoc on humanity. In this part, we will focus specifically on a situation that is already underway – job elimination.

Of course, job elimination due to technology is nothing new. It has been happening at least since the printing press in the 15th century, although, in that case, Gutenberg’s invention resulted in the generation of all sorts of new jobs that more than made up for the ones the monks lost. And so it has been through the Industrial Revolution. Will the AI revolution be any different?

One obvious difference is the pace of job loss, which increases continuously. That means that, even if there were different replacement jobs that one could retrain for, the rate of needing to constantly retrain is beginning to get uncomfortable for the average person. Another difference is the “if” in the previous statement. It seems that AI may not really be generating enough new jobs to cover the ones that it is replacing. This has to result in significant changes in society: shorter work weeks, higher wages, guaranteed income, etc. But that only works if companies become significantly more profitable due to AI. Time will tell. Meanwhile, let’s take a look at some projected time frames for the obsolescence of various professions…

Wait, what? Top 40 song generation? Just listen to this “Nirvana” song created by AI – Drowned in the Sun

Job elimination goes in the upper left of our scenario space. It has high probability because it is already underway. But it is relatively low severity depending on your situation. If you are a language translator, hopefully you’ve seen it coming. If you are a surgeon, you’ve got some time to plan. And, at the end of the day, who wants a 9-5 anyway?

By some estimates, generative transformers might soon replace 40% of existing jobs. However, there are also things that AI is simply not yet very adept at, such as understanding context, holistic decision making, and critical thinking. And then, of course, there are the new jobs, such as data labeling and data scientist, that are emerging in an AI-rich world. The graphic below gives some sense of what is at risk, what isn’t, and what some new opportunities might be.

In the next part, we will examine another condition already underway, that has the potential to be exacerbated by AI – Cultural Demise.

NEXT: How to Survive an AI Apocalypse – Part 6: Cultural Demise

How to Survive an AI Apocalypse – Part 4: AI Run Amok Scenarios

PREVIOUS: How to Survive an AI Apocalypse – Part 3: How Real is the Hype?

In Part 3 of this series on Surviving an AI Apocalypse, we examined some of the elements of AI-related publicity and propaganda that pervade the media these days and considered how likely they are. The conclusion was that while much has been overstated, there is still a real existential danger in the current path toward creating AGI, Artificial General Intelligence. In this and some subsequent parts of the series, we will look at several “AI Run Amok” scenarios and outcomes and categorize them according to likelihood and severity.

NANOTECH FOGLETS

Nanotech, or the technology of things at the scale of 10^-9 meters, is a technology originally envisioned by physicist Richard Feynman and popularized by K. Eric Drexler in his book Engines of Creation. It has the potential to accomplish amazing things (think, solve global warming or render all nukes inert) but also, like any great technology, to lead to catastrophic outcomes.

Computer scientist J. Storrs Hall upped the ante on nanotech potential with the idea of “utility fog,” based on huge swarms of nanobots under networked AI-programmatic control.

With such a technology, one could conceivably do cool and useful things like press a button and convert your living room into a bedroom at night, as all of the nanobots reconfigure themselves into beds and nightstands, and then back to a living room in the morning.

And of course, like any new tech, utility fog could be weaponized – carrying toxic agents, forming explosives, generating critical nuclear reactions, blocking out the sun from an entire country, etc.  Limited only by imagination. Where does this sit in our Likelihood/Severity space?

I put it in the lower right because, while the potential consequences of foglets in the hands of a bad actor could be severe, it’s probably way too soon to worry about, such technology being quite far off. In addition, an attack could be defeated via a hack or a counter attack and, as with the cybersecurity battle, it will almost always be won by the entity with the deeper pockets, which will presumably be the world government by the time such tech is available.

GREY GOO

A special case of foglet danger is the concept of grey goo, whereby the nanobots are programmed with two simple instructions:

  • Consume what you can of your environment
  • Continuously self replicate and give your replicants the same instructions

The result would be a slow liquefaction of the entire world.
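Some back-of-the-envelope arithmetic shows why the self-replication part is what matters. With toy numbers (a picogram-scale bot, the biosphere as the target mass, one replication cycle per hour – illustrative assumptions, not engineering estimates), only about a hundred doublings are needed:

```python
# Back-of-the-envelope grey goo arithmetic. All numbers are illustrative assumptions.
import math

bot_mass_kg = 1e-15          # assumed mass of one nanobot (~1 picogram, about a bacterium)
biosphere_mass_kg = 2e15     # rough order of magnitude for Earth's biomass
doubling_time_hr = 1.0       # assumed time per replication cycle

doublings = math.log2(biosphere_mass_kg / bot_mass_kg)
print(f"Doublings needed: {doublings:.0f}")    # ~101
print(f"Time required:    {doublings * doubling_time_hr:.0f} hours "
      f"(~{doublings * doubling_time_hr / 24:.1f} days)")
```

Whether the process takes days or decades depends entirely on the assumed replication cycle; the unsettling part is how few doublings are required.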

Let’s add this to our AI Run Amok chart…

I put it in the same relative space as the foglet danger in general, even less likely because the counter attack could be pretty simple reprogramming. Note, however, that this assumes that the deployment of such technologies, while AI-based at their core, is being done by humans. In the hands of an ASI, the situation would be completely different, as we will see.

ENSLAVEMENT

Let’s look at one more scenario, most aptly represented by the movie The Matrix, where AI enslaves humanity to be used, for some odd reason, as a source of energy. Agent Smith, anyone?

There may be other reasons that AI might want to keep us around. But honestly, why bother? Sad to say, but what would an ASI really need us for?

So I put the likelihood very low. And frankly, if we were enslaved, Matrix-style, is the severity that bad? Like Cypher said, “Ignorance is bliss.”

If you’re feeling good about things now, don’t worry, we haven’t gotten to the scary stuff yet. Stay tuned.

In the next post, I’ll look at a scenario near and dear to all of our hearts, and at the top of the Likelihood scale, since it is already underway – Job Elimination.

NEXT: How to Survive an AI Apocalypse – Part 5: Job Elimination

How to Survive an AI Apocalypse – Part 3: How Real is the Hype?

PREVIOUS: How to Survive an AI Apocalypse – Part 2: Understanding the Enemy

In Part 2 of this series on Surviving an AI Apocalypse, we examined the landscape of AI and attempted to make sense of the acronym jungle. In this part, in order to continue to develop our understanding of the beast, we will examine some of the elements of publicity and propaganda that pervade the media these days and consider how likely they are. Once we have examined the logical arguments, Dr. Einstein will be our arbiter of truth. Let’s start with the metaphysical descriptors. Could an AI ever be sentient? Conscious? Self-aware? Could it have free will?

CAN AN AI HAVE CONSCIOUSNESS OR FREE WILL?

Scientists and philosophers can’t even agree on the definition of consciousness and whether or not humans have free will, so how could they possibly come to a conclusion about AIs? Fortunately, yours truly has strong opinions on the matter.

According to philosophical materialists, reality is ultimately deterministic. Therefore, nothing has free will. To these folks, there actually isn’t any point to their professions, since everything is predetermined. Why run an experiment? Why theorize? What will be has already been determined. This superdeterminism is a last-ditch effort for materialists to cling to the idea of an objective reality, because Bell’s Theorem, and all of the experiments done since, have proven one of two things: either (1) there is no objective reality, or (2) there is no free will. I gave (what I think are) strong arguments for the existence of free will in all conscious entities in both of my books, The Universe-Solved! and Digital Consciousness. And the support for our reality being virtual and our consciousness being separate from the brain is monumental: the Observer Effect, near-death experiences, out-of-body experiences, simulation arguments, Hoffman’s evolutionary argument against reality, xenoglossy, the placebo effect… I could go on.

To many, consciousness, or the state of being aware of your existence, is simply a matter of complexity. Following this logic, everything has some level of consciousness, including the coaster under your coffee mug. Also known as panpsychism, it’s actually a reasonable idea. Why would there exist some arbitrary threshold of complexity which, once crossed, turns a previously unconscious entity into a conscious one? It makes much more sense that consciousness is a continuum, or a spectrum, not unlike light, or intelligence. As such, an AI could certainly be considered conscious.

But what do we really mean when we say “conscious?” What we don’t mean is that we simply have sensors that tell some processing system that something is happening inside or outside of us. What we mean is deeper than that – life, a soul, an ability to be self-aware because we want to be, and have the free will to make that choice. AI will never achieve that because it is ultimately deterministic. Some may argue that neural nets are not deterministic, but that is just semantics. For certain, they are not predictable, but only because the system is too complex and adaptive to analyze sufficiently at any exact point in time. Determinism means no free will.

The point is that it really doesn’t matter whether or not you believe that AIs develop “free will” or some breakthrough level of consciousness – what matters is that they are not predictable. Do you agree, Albert?

IS AGI RIGHT AROUND THE CORNER?

This is probably the most contentious question out there. Let’s see how well the predictions have held up over the years.

In 1956, ten of the leading experts in the idea of machine intelligence got together for an eight-week project at Dartmouth College to discuss computational systems, natural language processing, neural networks, and other related topics. They coined the term Artificial Intelligence, and so this event is generally considered the birth of the idea. They also made some predictions about when AGI, Artificial General Intelligence, would occur. Their prediction was “20 years away,” a view that has had a lot of staying power. Until only recently.

Historical predictions for AGI:

That’s right, in early 2023, tech entrepreneur and developer Siqi Chen claimed that GPT-5 “will” achieve Artificial General Intelligence (AGI) by the end of 2023. Didn’t happen, and won’t this year either. Much of this hype was due to the dramatic ChatGPT performance that came seemingly out of nowhere in early 2023. As with all things hyped, though, claims are expected to be greatly exaggerated. The ability for an AI to “pass the Turing test” (which is what most people are thinking) does not equate to AGI – it doesn’t even mean intelligence, in the sense of what humans have. Much more about this later. All of that said, AGI, in the strict sense of being able to do all intelligent tasks that a human can, is probably going to happen soon. Maybe not this year, but maybe within five. What say you, Albert?

IS AI GOING TO BECOME BILLIONS OF TIMES SMARTER THAN HUMANS?

Well, mainstream media certainly seems to think so. Because they confuse intelligence with things that have nothing to do with intelligence.

If processing speed is what makes intelligence, then your smart toaster is far brighter than you are. Ditto for recall accuracy as an intelligence metric. We only retain half of what we learned yesterday, and it degrades exponentially over time. Not so with the toaster. If storage is the metric, cloud storage giant, Amazon Web Services, would have to be fifty times smarter than we are. 

However, the following word cloud captures the complexity behind the kind of intelligence that we have.

That said, processing speed is not to be underestimated, as it is at the root of all that can go wrong. The faster the system, the sooner its actions can go critical. In Nick Bostrom’s book Superintelligence, he references the potential “superpowers” that can be attained by an AGI that is fast enough to become an ASI, Artificial Superintelligence. Intelligence amplification, for example, is where an AI can bootstrap its own intelligence. As it learns to improve its ability to learn, it will develop exponentially. In the movie Her, the Operating System, Samantha, evolved so quickly that it got bored being with one person and overnight began to interact with another 8316 people.

Another superpower is the ability to think far ahead and strategize. Humans can think ahead 10-15 moves in a game of chess, but not in an exhaustive or brute-force manner – rather by a few single-threaded sequences. Early chess-playing AIs played differently, doing brute-force calculations of all possible sequences 3 or 4 moves ahead, and picking the one that led to the most optimal outcome. Nowadays, AI systems designed for chess can think ahead 20 moves, due mostly to the speed improvements in the underlying system. As this progresses, strategizing will be a skill that AIs can do better than humans.
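For a rough sense of why look-ahead depth is so sensitive to raw speed, consider how fast the game tree grows with depth. The branching factor and evaluation rate below are illustrative assumptions, and real engines also prune aggressively rather than search exhaustively:

```python
# Illustration of game-tree growth versus look-ahead depth. Numbers are assumptions.
branching_factor = 35      # rough average number of legal moves per chess position
nodes_per_second = 1e9     # assumed evaluation rate of the underlying hardware

for depth in (4, 8, 12, 20):
    nodes = branching_factor ** depth
    seconds = nodes / nodes_per_second
    print(f"depth {depth:>2}: {nodes:.2e} nodes, ~{seconds:.2e} seconds of search")
```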

Social manipulation for escaping human control, getting support, and encouraging desired courses of action, coupled with hacking capabilities for stealing hardware, money, and infrastructure, and escaping human control, are the next superpowers that an AGI could possess. If you think otherwise, recall from Part 2 of this series that AIs have already been observed gaming specifications, or the rules under which their creators thought they were programmed. They have also unexpectedly developed apparent cognitive skills, like Theory of Mind. So their ability to get around rules to achieve their objective is already in place.

Bostrom adds technology research and economic productivity as advanced superpowers attainable by an ASI, resulting in the ability to create military forces, surveillance, space transport, or simply generating money to buy influence.

How long might it take for an AGI to evolve to an ASI? Wait But Why blogger Tim Urban posted a provocative image that shows the possibility of it happening extremely quickly. Expert estimates vary widely, from hours (as in Her) to many years.

Bostrom’s fear is that the first AGI that makes the jump will become a singleton, acquiring all resources and control. Think SkyNet. So, Albert, given all of this, will AIs soon become billions of times smarter than humans, as CNN reports?

COULDN’T WE JUST PULL THE PLUG IF THINGS START GOING SOUTH?

Yeah, why not just unplug it? To get a sense for the answer to that question, how would you unplug Google? Google’s infrastructure, shown below, comprises over 100 points of presence in 18 geographical zones. Each one has high availability technology and redundant power.

Theoretically an advanced AI could spread its brain across any number of nodes worldwide, some of which may be solar powered, others of which may be in control of the power systems. By the time AGI is real, high availability technology will be far advanced. You see the problem. Thoughts, Dr. Einstein?

Now that we understand the nature of the beast, and have an appreciation for the realistic capabilities of our AI frenemy, we can take a look at a possible apocalyptic scenario, courtesy of Nick Bostrom’s book, Superintelligence. Below can be seen a possible sequence of events that lead an AGI to essentially take over the world. I recommend reading the book for the details. Bostrom is a brilliant guy, and also the one who authored The Simulation Argument, which has gotten all manner of scientists, mathematicians, and philosophers in a tizzy over its logic and implications, so it is worth taking seriously.

And think of some of the technologies that we’ve developed that facilitate an operation like this… cloud computing, drones, digital financial services, social media. It all plays very well. In the next post, we will begin to examine all sorts of AI-run-amok scenarios, and assess the likelihood and severity of each.

NEXT: How to Survive an AI Apocalypse – Part 4: AI Run Amok Scenarios

How to Survive an AI Apocalypse – Part 2: Understanding the Enemy

PREVIOUS: How to Survive an AI Apocalypse – Part 1: Intro

As I mentioned in the first part of this series, in order to make any kinds of predictions about the future of AI, we must understand what Artificial Intelligence means. Unfortunately, there is so much confusing information out there. LLMs, GPTs, NAIs, AGIs, machine learning – what does it all mean? One expert says AGI will be here by the end of the year; another expert says it will never come.

Here is a simplified Venn diagram that might help to make some sense out of the landscape…

AIs are all computer programs, but, while it might be obvious, not all computer programs are AI. AI refers to programs that emulate human thinking and behavior. So, while your calculator or smart toaster might be doing some limited thinking, it isn’t really trying to be human; it is simply performing a task. AIs are generally considered to be broken into two categories – NAIs (Narrow AI) or AGI (Artificial General Intelligence).

NAIs are the ones we are all familiar with and are typically loosely categorized further: NLP (Natural Language Processing, like Siri and Alexa), Robotics, Machine Learning (like how Spotify and Netflix learn your tastes and offer suggestions), Deep Learning, and LLMs (Large Language Models). Deep Learning systems emulate human neural networks and can complete tasks with poorly defined data and little human guidance; an example would be AlphaGo. LLMs are neural networks with many parameters (often billions) that are trained on large sets of unlabeled text using self-supervised learning. Generative Pre-trained Transformers (GPTs) are a subset of LLMs and are able to generate novel human-like text, images, or even videos. ChatGPT, DALL-E, and Midjourney are examples of GPTs. The following pictures are examples of imagery created by Midjourney for my upcoming book, “Level 5.”

AGIs are the ones we need to worry about, because they have the capacity to act like a human without actually being one. Imagine giving human intelligence to an entity that has A: no implicit sense of morality or values (at least none that would make any sense to us), and B: a completely unpredictable nature. What might happen?

Well, here’s an example…

Oh, that would never happen, right? Read on…

There are thousands of examples of AIs “thinking” creatively – more creatively in fact than their creators ever imagined. Pages and pages of specification gaming examples have been logged. These are cases where the AI “gets around” the programming limitations that were imposed by the creators of the system. A small sample set is shown below:

Another example of the spontaneous emergence of intelligence involves what are known as Theory of Mind tasks. These are cognitive developments in children that reflect the understanding of other people’s mental processes. As the research in the adjacent figure demonstrates, various GPTs have unexpectedly developed such capabilities; in fact, what typically takes humans 9 years to learn has taken the AIs only 3.

These unexpected spontaneous bursts of apparent intelligence are interesting, but as we will see, they aren’t really intelligence per se. Not that it matters, if what you are worried about are the doomsday scenarios. The mere fact that they are unpredictable or non-deterministic is exactly what is frightening. So how does that happen?

There are multiple mechanisms for these spontaneous changes in intelligence. One is the Neural Net. Neural nets, while ultimately deterministic deep down, are “apparently” non-deterministic because they are not based on any programming rules. If sufficiently complex and with feedback, they are impossible to predict, at least by humans.

As shown, they consist of some input nodes and output nodes, but contain hidden layers of combinatorial arithmetic operations, which makes them nearly impossible to predict. I programmed neural nets many years ago, in an attempt to outsmart the stock market. I gave up when they didn’t do what I wanted and moved on to other ideas (I’m still searching).
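For anyone who has never looked inside one, here is a toy version of the structure just described: input nodes, a hidden layer of weighted sums pushed through a nonlinearity, and output nodes. The layer sizes and random weights are arbitrary; real networks learn their weights from data.

```python
# A toy feed-forward neural net: inputs -> hidden layer -> outputs.
# Weights are random here; real networks learn them from training data.
import numpy as np

rng = np.random.default_rng(0)

def forward(x, w_hidden, w_out):
    hidden = np.tanh(x @ w_hidden)    # hidden layer: weighted sums through a nonlinearity
    return np.tanh(hidden @ w_out)    # output layer

x = rng.normal(size=(1, 3))           # 3 input nodes
w_hidden = rng.normal(size=(3, 5))    # 5 hidden nodes
w_out = rng.normal(size=(5, 2))       # 2 output nodes

print(forward(x, w_hidden, w_out))
```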

Another unpredictability mechanism is the fact that not only can AIs write software very well (DeepMind’s AlphaCode outperformed 47% of all human developers in 2022), they can rewrite their own software. So, blending the unpredictable nature of neural nets, the clever specification gaming capabilities that AIs have demonstrated, and their ability to rewrite their own code, we ultimately don’t really know how an AGI is going to evolve and what it might do.

The last piece of the Venn Diagram and acronym jumble is the idea of ASI – Artificial Superintelligence. This is what will happen when AGI takes over its own evolution and “improves” itself at an exponential rate, rapidly becoming far more intelligent than humans. At this point, speculate the doomsayers, ASI may treat humans the way we treat microorganisms – with complete disregard for our well being and survival.

With these kinds of ideas bandied about, it is no wonder that the media hypes Artificial Intelligence. In the next post, I’ll examine the hype and try to make sense of some of the pesky assumptions.

NEXT: How to Survive an AI Apocalypse – Part 3: How Real is the Hype?

Is LIDA, the Software Bot, Really Conscious?

Researchers from the Cognitive Computing Research Group (CCRG) at the University of Memphis are developing a software bot known as LIDA (Learning Intelligent Distribution Agent) with what they believe to be cognition or conscious processes.  That belief rests on the idea that LIDA is modeled on a software architecture that mirrors what some believe to be the process of consciousness, called GWT, or Global Workspace Theory.  For example, LIDA follows a repetitive looping process that consists of taking in sensory input, writing it to memory, kicking off a process that scans this data store for recognizable events or artifacts, and, if something is recognized, it is broadcast to the global workspace of the system in a similar manner to the GWT model.  Timings are even tuned to more or less match human reaction times and processing delays.
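As a rough illustration (not the actual LIDA code, and with made-up names, data, and timings), the cycle described above can be sketched as a simple loop:

```python
# A toy sketch of a GWT-style cognitive cycle: sense, store, scan, broadcast.
# Function names, percepts, and timings are illustrative, not from the LIDA codebase.
import time
from collections import deque

memory = deque(maxlen=100)    # short-term store of recent percepts
global_workspace = []         # items "broadcast" system-wide, as in Global Workspace Theory

def sense_environment(t):
    # Placeholder sensor: produces a fake percept each cycle.
    return {"tick": t, "signal": t % 7}

def recognize(percept):
    # Placeholder recognizer: flags percepts that match a known pattern.
    return percept["signal"] == 0

for t in range(30):
    percept = sense_environment(t)                 # take in sensory input
    memory.append(percept)                         # write it to memory
    hits = [p for p in memory if recognize(p)]     # scan the store for recognizable events
    if hits:
        global_workspace.append(hits[-1])          # broadcast the most recent hit
    time.sleep(0.01)                               # crude stand-in for tuned cycle timing

print(f"{len(global_workspace)} broadcasts in 30 cycles")
```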

I’m sorry guys, but just because you have designed a system to model the latest theory of how sensory processing works in the brain does not automatically make it conscious.  I could write an Excel macro with forced delays and process flows that resemble GWT.  Would that make my spreadsheet conscious?  I don’t THINK so.  Years ago I wrote a trading program that utilized the brain model du jour, known as neural networks.  Too bad it didn’t learn how to trade successfully, or I would be golfing tomorrow instead of going to work.  The fact is, it was entirely deterministic, as is LIDA, and there is no more reason to suspect that it was conscious than an assembly line at an automobile factory.

Then again, the standard scientific view (at least that held by most neuroscientists and biologists) is that our brain processing is also deterministic, meaning that, given the exact set of circumstances two different times (same state of memories in the brain, same set of external stimuli), the resulting thought process would also be exactly the same.  As such, so they would say, consciousness is nothing more than an artifact of the complexity of our brain.  An artifact?  I’m an ARTIFACT?

Following this reasoning from a logical standpoint, one would have to conclude that every living thing, including bacteria, has consciousness. In that view of the world, it simply doesn’t make sense to assert that there might be some threshold of nervous system complexity, above which an entity is conscious and below which it is not.  It is just a matter of degree, and you can only argue about aspects of consciousness in a purely probabilistic sense; e.g. “most cats probably do not ponder their own existence.”  Taking this thought process a step further, one has to conclude that if consciousness is simply a by-product of neural complexity, then a computer that is equivalent to our brains in complexity must also be conscious.  Indeed, this is the position of many technologists who ponder artificial intelligence, and futurists, such as Ray Kurzweil.  And if this is the case, by logical extension, the simplest of electronic circuits is also conscious, to a degree proportional to that of bacteria relative to human consciousness.  So, even an electronic circuit known as a flip-flop (or bi-stable multivibrator), which consists of a few transistors and stores a single bit of information, is conscious.  I wonder what it feels like to be a flip-flop?

Evidence abounds that there is more to consciousness than a complex system.  For one particular and very well researched data point, check out Pim van Lommel’s book “Consciousness Beyond Life.”  Or my book “The Universe – Solved!”

My guess is that consciousness consists of the combination of a soul and a processing component, like a brain, that allows that soul to experience the world.  This view is very consistent with that of many philosophers, mystics, and shamans throughout history and throughout the world (which confluence of consistent yet independent thought is in itself very striking).  If true, a soul may someday make a decision to occupy a machine of sufficient complexity and design to experience what it is like to be the “soul in a machine”.  When that happens, we can truly say that the bot is conscious.  But it does not make sense to consider consciousness a purely deterministic emergent property.


WikiLeaks, Denial of Service Attacks, and Nanobot Clouds

The recent firestorm surrounding WikiLeaks reminds me of one of Neal Stephenson’s visions of the future, “The Diamond Age,” written back in 1995.  The web was only in its infancy, but Stephenson had already envisioned massive clouds of networked nanobots, some under control of the government, some under control of other entities.  Such nanobot swarms, also known as Utility Fogs, could be made to do pretty much anything; form a sphere of protection, gather information, inspect people and report back to a central server, or be commanded to attack each other.  One swarm under control of one organization may be at war with another swarm under the control of another organization.  That is our future.  Nanoterrorism.

A distributed denial of service attack (DDoS) is a network attack on a particular server or internet node.  It is often carried out by having thousands of computers saturate the target machine with packet requests, making it impossible for the machine to respond to normal HTTP requests, effectively bringing it to its knees, inaccessible on the internet.  The attacks are often coordinated by a central source who takes advantage of networks of already compromised computers (aka zombie computers, usually unknown to their owners) via malware infections.  On command, these botnets initiate their attack with clever techniques called Smurf attacks, Ping floods, SYN floods, and other scary sounding events.  An entire underground industry has built up around botnets, some of which can number in the millions.  Botnets can be leased by anyone who knows how to access them and has a few hundred dollars.  As a result, an indignant group can launch an attack on, say, the WikiLeaks site.  And, in response, a WikiLeaks support group can launch a counter attack on its enemies, like MasterCard, Visa, and PayPal, for their plans to terminate service for WikiLeaks.  That is our present.  Cyberterrorism.
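For what it’s worth, the defensive side of this cat-and-mouse game can start out very simply – for example, flagging sources that open far more connections per minute than any normal client would. The log format and threshold below are assumptions for illustration only.

```python
# A minimal sketch of flood detection over parsed firewall-log records.
# The record format and threshold are illustrative assumptions.
from collections import Counter

THRESHOLD = 1000   # assumed maximum connection attempts per source per minute

def flag_flood_sources(events):
    """events: iterable of (source_ip, minute_bucket) tuples from a firewall log."""
    counts = Counter(events)
    return {ip for (ip, minute), n in counts.items() if n > THRESHOLD}

# Hypothetical usage with fake records:
sample = [("10.0.0.5", 0)] * 1500 + [("192.168.1.9", 0)] * 20
print(flag_flood_sources(sample))   # -> {'10.0.0.5'}
```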

Doesn’t it sound a lot like the nanoterrorism envisioned by Stephenson?  Except it is still grounded in the hardware.  As I see it, the equation of the future is:

Nanoterrorism = Cyberterrorism + Microrobotics + Moore’s Law + 20 years.

Can’t wait!


Jim and Craig Venter Argue over Who is more Synthetic: Synthia or Us?

So Craig Venter created synthetic life.  How cool is that?  I mean, really, this has been sort of a biologist’s holy grail for as long as I can remember.  Of course, Dr. Venter’s detractors are quick to point out that Synthia, the name given to this synthetic organism, was not really built from scratch, but sort of assembled from sub-living components and injected into a cell where it could replicate.  Either way, it is a huge step in the direction of man-made life forms.  If I were to meet Dr. Venter, the conversation might go something like this:

Jim: So, Dr. Venter, help me understand how man-made your little creation really is.  I’ve read some articles that state that while your achievement is most impressive, the cytoplasm that the genome was transplanted to was not man made.

Craig: True dat, Jim.  But we all need an environment to live in, and a cell is no different.  The organism was certainly man made, even if its environment already existed.

Jim: But wait a minute.  Aren’t we all man-made?  Wasn’t that the message in those sex education classes I took in high school?

Craig: No, the difference is that this is effectively a new species, created synthetically.

Jim: So, how different is that from a clone?  Are they also created synthetically?

Craig: Sort of, but a clone isn’t a new species.

Jim: How about genetically modified organisms then?  New species created synthetically?

Craig: Yes, but they were a modification made to an existing living organism, not a synthetically created one.

Jim: What about that robot that cleans my floor?  Isn’t that a synthetically created organism?

Craig: Well, maybe, in some sense, but can it replicate itself?

Jim: Ah, but that is just a matter of programming.  Factory robots can build cars, why couldn’t they be programmed to build other factory robots?

Craig: That wouldn’t be biological replication, like cell division.

Jim: You mean, just because the robots are made of silicon instead of carbon?  Seems kind of arbitrary to me.

Craig: OK, you’re kind of getting on my nerves, robot-boy.  The point is that this is the first synthetically created biological organism.

Jim: Um, that’s really cool and all, but we can build all kinds of junk with nanotech, including synthetic meat, and little self-replicating machines.

Craig: Neither of which are alive.

Jim: Define alive.

Craig: Well, generally life is anything that exhibits growth, metabolism, motion, reproduction, and homeostasis.

Jim: So, a worker bee isn’t alive because it can’t reproduce?

Craig: Of course, there are exceptions.

Jim: What about fire, crystals, or the earth itself?  All of those exhibit your life-defining properties.  Are they alive?

Craig: Dude, we’re getting way off topic here.  Let’s get back to synthetic organisms.

Jim: OK, let’s take a different tack.  Physicist Paul Davies said that Google is smarter than any human on the planet.  Is Google alive?  What about computer networks that can reconfigure themselves intelligently?

Craig: Those items aren’t really alive because they have to be programmed.

Jim: Yeah, and what’s that little code in Synthia’s DNA?

Craig: Uhhh…

Jim: And how do you know that you aren’t synthetic?  Is it at all possible that your world and all of your perceptions could be completely under programmed control?

Craig: I suppose it could be possible.  But I highly doubt it.

Jim: Doubt based on what? All of your preconceived notions about reality?

Craig: OK, let’s say we are under programmed control.  So what?

Jim: Well, that implies a creator.  Which in turn implies that our bodies are a creation.  Which makes us just as synthetic as Synthia.  The only difference is that you created Synthia, while we might have been created by some highly advanced geek in another reality.

Craig: Been watching a few Wachowski Brothers movies, Jim?

Jim: Guilty as charged, Craig.
