How to Survive an AI Apocalypse – Part 11: Conclusion

PREVIOUS: How to Survive an AI Apocalypse – Part 10: If You Can’t Beat ’em, Join ’em

Well, it has been a wild ride – writing and researching this blog series “How to Survive an AI Apocalypse.” Artificial Superintelligence, existential threats, job elimination, nanobot fog, historical bad predictions, Brain Computer Interfaces, interconnected minds, apocalypse lore, neural nets, specification gaming, predictions, enslavement, cultural demise, alignment practices and controlling the beast, UFOs, quantum mechanics, the true nature of reality, simulation theory and dynamic reality generation, transhumanism, digital immortality…

Where does it all leave us?

I shall attempt to summarize and synthesize the key concepts and drivers that may lead us to extinction, as well as those that may mitigate the specter of extinction and instead lead toward stabilization and perhaps even an AI utopia. First, the dark side…

DRIVERS TOWARD EXTINCTION

  • Competition – If there were only one source of AI development in the world, it might be possible to evolve it so carefully that disastrous consequences could be avoided. However, as our world is fragmented by country and by company, there will always be competition driving the pace of AI evolution. In the language of the 1950s, countries will worry about avoiding or closing an “AI gap” with an enemy, and companies will worry about grabbing market share from other companies. The result is caution sacrificed for speed, which inevitably leads to dangerous shortcuts.
  • Self-Hacking/Specification Gaming – All of the existential risk in AI stems from the unpredictability mechanisms described in Part 2, specifically the neural nets driving AI behavior and the resulting possibility of an AI rewriting its own code. As long as AI architecture is based on the highly complex neural net construct, we will not be able to avoid this apparent nondeterminism. More to the point, it is difficult to envision any kind of software construct that facilitates effective learning and is not a highly complex adaptive system.
  • The Orthogonality Thesis – Nick Bostrom’s concept asserts that intelligence and the final goals of an AI are completely independent of each other. The consequence is that mere intelligence cannot be assumed to produce decisions that minimize the existential risk to humanity. We can program in as many rules, goals, and values as we want, but we can never be sure that we didn’t miss something (see clear examples in Part 7). Further, the anthropomorphic mistake of assuming that an AI will think like us is our blind spot.
  • Weaponization / Rogue Entities – As with any advanced technology, weaponization is a real possibility. And the danger lies not only in the hands of so-called rogue entities, but also in those of so-called “well meaning” entities (any country’s military complex) claiming that the best defense is having the best offense. As with the nuclear experience, all it takes is a breakdown in communication to unleash the weapon’s power.
  • Sandbox Testing Ineffective – The combined ability of an AI to learn and master social engineering, hide its intentions, and control physical and financial resources makes any kind of sandboxing a temporary stop-gap at best. Imagine, for example, an attempt to “air gap” an AGI to prevent it from taking over resources available on the internet. What lab assistant making $20/hour is going to resist an offer from the AGI to temporarily connect it to the outside network in return for $1 billion in crypto delivered to the lab assistant’s wallet?
  • Only Get 1 Chance – There isn’t a reset button on an AI that gets out of control. So, even if you did the most optimal job at alignment and goal setting, there is ZERO room for error. Microsoft reportedly logs some 30,000 bugs per month – what are the odds that everyone’s AGI will have zero?

And the mitigating factors…

DRIVERS TOWARD STABILIZATION

  • Anti-Rogue AI Agents – Much like computer viruses and the cybersecurity and anti-virus technology that we developed to fight them, which has been fairly effective, anti-rogue AI agents may be developed that are out there on the lookout for dangerous rogue AGIs, and perhaps programmed to defeat them, stunt them, or at least provide notification that they exist. I don’t see many people talking about this kind of technology yet, but I suspect it will become an important part of the effort to fight off an AI apocalypse. One thing that we have learned from cybersecurity is that the battle between the good guys and the bad guys is fairly lopsided. It is estimated that there are millions of blocked cyberattack attempts daily around the world, and yet we rarely hear of a significant security breach. Even considering possible underreporting of breaches, it is most likely the case that the amount of investment going into cyberdefense far exceeds that going into funding the hacks. If a similar imbalance occurs with AI (and there is ample evidence of significant alignment investment), anti-rogue AI agents may win the battle. And yet, unlike with cybersecurity, it might only take one nefarious hack to kick off the AI apocalypse.
  • Alignment Efforts – I detailed in Part 8 of this series the efforts that are going into AI safety research, controls, value programming, and the general topic of addressing AI existential risk. And while these efforts may never be 100% foolproof, they are certainly better than nothing, and will most likely contribute to at least delaying a portentous ASI.
  • The Stabilization Effect – The arguments behind the Stabilization Effect presented in Part 9 may be difficult for some to swallow, although I submit that the more you think and investigate the topics therein, the easier it will become to accept. And frankly, this is probably our best chance at survival. Unfortunately, there isn’t anything anyone can do about it – either it’s a thing or it isn’t.

But if it is a thing, as I suspect, then should ASI go apocalyptic, the Universal Consciousness System may reset our reality so that our consciousnesses continue to have a place to learn and evolve. And then, depending on whether or not our memories are erased, either:

It will be the ultimate Mandela effect.

Or, we will simply never know.

How to Survive an AI Apocalypse – Part 10: If You Can’t Beat ’em, Join ’em

PREVIOUS: How to Survive an AI Apocalypse – Part 9: The Stabilization Effect

In this marathon set of AI blogs, we’ve explored some of the existential dangers of ASI (Artificial Superintelligence) as well as some of the potential mitigating factors. It seems to me that there are three ways to deal with the coming upheaval that this technology promises to bring…

Part 8 was for you Neos out there, while Part 9 was for you Bobbys. But there is another possibility – merge with the beast. In fact, between wearable tech, augmented reality, genetic engineering, our digital identities, and Brain Computer Interfaces (BCIs), I would say it is very much already underway. Let’s take a closer look at the technology that has the potential for the most impact – BCIs. They come in two forms – non-invasive and invasive.

NON-INVASIVE BCIs

Non-invasive transducers merely measure electrical activity generated by various regions of the brain, as in an EEG. Mapping the waveform data to known patterns makes possible applications like neurofeedback devices and video game interfaces.
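
As a rough illustration of that mapping step (a toy sketch, not any particular product’s algorithm – the sampling rate, band edges, and two-state classification are all assumptions), one can compare spectral band power in a window of EEG samples:

```python
import numpy as np

def band_power(signal, fs, lo, hi):
    """Average spectral power of `signal` between lo and hi Hz."""
    freqs = np.fft.rfftfreq(len(signal), d=1.0 / fs)
    power = np.abs(np.fft.rfft(signal)) ** 2
    mask = (freqs >= lo) & (freqs < hi)
    return power[mask].mean()

def classify_window(window, fs=256):
    """Map a one-second EEG window to a crude 'relaxed' vs 'focused' label."""
    alpha = band_power(window, fs, 8, 12)    # alpha band: prominent when relaxed
    beta = band_power(window, fs, 13, 30)    # beta band: associated with concentration
    return "relaxed" if alpha > beta else "focused"

if __name__ == "__main__":
    fs = 256
    t = np.arange(fs) / fs
    fake_eeg = np.sin(2 * np.pi * 10 * t) + 0.2 * np.random.randn(fs)  # synthetic 10 Hz "alpha"
    print(classify_window(fake_eeg, fs))  # -> "relaxed"
```

Real consumer headsets add multiple electrodes, artifact rejection, and trained classifiers, but the principle – waveform in, recognized pattern out – is the same.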

INVASIVE BCIs

Invasive BCIs, on the other hand, actually connect directly with tissue and nerve endings. Retinal implants, for example, take visual information from glasses or a camera array and feed it into retinal neurons by electrically stimulating them, resulting in some impression of vision. Other examples include Vagus Nerve Stimulators to help treat epilepsy and depression, and Deep Brain Stimulators to treat conditions like Parkinson’s disease.

The most trending BCI, though, has to be the Elon Musk creation, Neuralink. A device with thousands of neural connections is implanted on the surface of the brain. Initial applications targeted were primarily people with paralysis who could benefit from being able to “think” motion into their prosthetics. Like this monkey on the right. Playing Pong with his mind.

But the future possibilities include the ability to save memories to the cloud, replay them on demand, and accelerate learning. I know Kung Fu.

And, as with any technology, it isn’t hard to imagine some of the potential dark sides to its usage. Just ask the Governator.

INTERCONNECTED MINDS

So if brain patterns can be used to control devices, and vice versa, could two brains be connected together and communicate? In 2018, researchers from several universities collaborated on an experiment where three subjects had their brains somewhat interconnected via EEGs as they collectively played a game of Tetris. Two of the subjects told the third, via only their thoughts, which direction to rotate a Tetris piece to fit into a row that the third could not see. Accuracy was 81.25% (versus 50% if random).
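
For context on that accuracy figure: 81.25% works out to 13 of 16 correct, so assuming 16 independent binary trials (my inference from the reported percentage, not a detail taken from the paper), a quick back-of-the-envelope check shows how unlikely that is by pure guessing:

```python
from math import comb

n, k = 16, 13  # 81.25% accuracy = 13 of 16 correct (assumed trial count)
p_chance = sum(comb(n, i) for i in range(k, n + 1)) / 2 ** n
print(f"P(at least {k}/{n} correct by guessing) = {p_chance:.4f}")  # ~0.0106, about 1%
```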

Eventually, we should be able to connect all or a large portion of the minds of humanity to each other and/or to machines, creating a sort of global intelligence.

This is the dream of the transhumanists, the H+ crowd, and the proponents of the so-called technological singularity. Evolve your body to not be human anymore. In such a case, would we even need to worry about an AI Apocalypse? Perhaps not, if we were to form a singleton with ASI, encompassing all of the information on the planet. But how likely will that be? People on 90th St can’t even get along with people on 91st St. The odds that all of the transhumanists on the planet will merge with the same AI are pretty much zero. Which implies competing superhumans. Just great.

THE IMMORTALITY ILLUSION

In fact, the entire premise of the transhumanists is flawed. The idea is that with a combination of modified genetics and the ability to “upload your consciousness” to the cloud, you can then “live long enough to live forever.” Repeating a portion of my blog “Transhumanism and Immortality – 21st Century Snake Oil,” the problem with this mentality is that we are already immortal! And there is a reason why our corporeal bodies die – simply put, we live our lives in this reality in order to evolve our consciousness, one life instance at a time. If we didn’t die, our consciousness evolution would come to a grinding halt, as we spend the rest of eternity playing solitaire. The “Universe” or “All that there is” evolves through our collective individuated consciousnesses. Therefore, deciding to be physically immortal could be the end of the evolution of the Universe itself.

Underlying this unfortunate direction of Transhumanism is the belief (and, I can’t stress this enough, it is ONLY that – a belief) that it’s lights out when we die. Follow that train of logic: if this were true, then consciousness emerges solely from brain function, we have zero free will, and the entire universe is a deterministic machine. So why even bother with Transhumanism if everything is predetermined? It is logically inconsistent. Material Realism, the denial of the duality of mind and body, is a dogmatic Religion. Its more vocal adherents (just head on over to JREF to find them) are as ignorant of the evidence and as blind to what true science is as the most bass-ackward fundamentalist religious zealots.

The following diagram demonstrates the inefficiency of artificially extending life, and the extreme inefficiency of uploading consciousness.

In fact, you will not upload. At best you will have an apparent clone in the cloud, one that will diverge from your life path. It will have neither free will nor self-awareness.

When listening to the transhumanists get excited about such things, I am reminded of the words of the great Dr. Ian Malcolm from Jurassic Park…

In summary, this humble blogger is fine with the idea of enhancing human functions with technology, but I have no illusions that merging with AI will stave off an AI apocalypse; nor will it provide you with immortality.

So where does that leave us? We have explored many of the scenarios where rapidly advancing AI can have a negative impact on humanity. We’ve looked at the possibilities of merging with them, and the strange stabilization effect that seems to permeate our reality. In the next and final part of this series, we will take a systems view, put it all together, and see what the future holds.

NEXT: How to Survive an AI Apocalypse – Part 11: Conclusion

How to Survive an AI Apocalypse – Part 8: Fighting Back

PREVIOUS: How to Survive an AI Apocalypse – Part 7: Elimination

In previous parts of this blog series on AI and Artificial Superintelligence (ASI), we’ve examined several scenarios where AI can potentially impact humanity, from the mild (e.g. cultural demise) to the severe (elimination of humanity). This part will examine some of the ways we might be able to avoid the existential threat.

In Part 1, I listed ChatGPT’s own suggestions for avoiding an AI Apocalypse, and joked about its possible motivations. Of course, ChatGPT has not even come close to evolving to the point where it might intentionally deceive us – we probably don’t have to worry about such motivations until AGI at least. Its advice is actually pretty solid, repeated here:

  1. Educate yourself – learn as much as you can about AI technology and its potential implications. Understanding the technology can help you make informed decisions about its use.
  2. Support responsible AI development – choose to support companies and organizations that prioritize responsible AI development and are committed to ethical principles
  3. Advocate for regulation – Advocate for regulatory oversight of AI technology to ensure that it is developed and used in a safe and responsible manner.
  4. Encourage transparency – Support efforts to increase transparency in AI development and deployment, so that the public can have a better understanding of how AI is being used and can hold companies accountable for their actions.
  5. Promote diversity and inclusion – Encourage diversity and inclusion in the development of AI technology to ensure that it reflects the needs and values of all people.
  6. Monitor the impact of AI – Stay informed about the impact of AI technology on society, and speak out against any negative consequences that arise

Knowledge, awareness, support, and advocacy is great and all, but let’s see what active options we have to mitigate the existential threat of AI. Here are some ideas…

AI ALIGNMENT

Items 2 & 3 above are partially embodied in the concept of AI Alignment, a very hot research field these days. The goal of AI Alignment is to ensure that AI behavior is aligned with human objectives. This isn’t as easy as it sounds, considering the unpredictable Instrumental Goals that an AI can develop, as we discussed in Part 7. There exist myriad alignment organizations, including non-profits, divisions of technology companies, and government agencies.

Examples include The Alignment Research Center, Machine Intelligence Research Institute, Future of Humanity Institute at Oxford, Future of Life Institute, The Center for Human-Compatible Artificial Intelligence at UC Berkeley, the American Government’s Cybersecurity & Infrastructure Security Agency, and Anthropic.

AISafety.world is a comprehensive map of AI safety research organizations, podcasts, blogs, etc. Although it is organized as a map, you can still get lost in the quantity and complexity of groups that are putting their considerable human intelligence into solving the problem. That alone is concerning.

What can I do? Be aware of and support AI Alignment efforts.

VALUE PROGRAMMING

Just as you might read carefully selected books to your children to instill good values, you can do the same with AI. The neural nets will learn from everything that they ingest and modify their behavior accordingly. As AIs get closer to AGI, this will become especially applicable. So… introduce them to works that would instill empathy toward humanity. Anyone can do this, even with ChatGPT.
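
As a trivial, hedged sketch of what “reading to” a model can look like in practice – this primes a single conversation with values-laden context rather than permanently retraining the underlying network, and the model name and client usage are my assumptions based on the OpenAI Python SDK (v1.x):

```python
# Requires: pip install openai, and OPENAI_API_KEY set in the environment.
from openai import OpenAI

client = OpenAI()

EMPATHY_PRIMER = (
    "You have just read a story in which a stranger gives up their own comfort "
    "to help someone in need. Keep that spirit of empathy toward humans in mind."
)

response = client.chat.completions.create(
    model="gpt-4o-mini",  # assumed model name; substitute whatever you have access to
    messages=[
        {"role": "system", "content": EMPATHY_PRIMER},
        {"role": "user", "content": "A coworker made a costly mistake. How should I respond?"},
    ],
)
print(response.choices[0].message.content)
```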

IMPLEMENT QUALITY PRACTICES

If you are implementing AI solutions, you may have a bigger responsibility than you thought. Whether you are simply integrating GenAI into your business, enhancing a solution with Retrieval-Augmented Generation data, or building the next generation of advanced neural nets and deep learning systems, you should treat the software with at least the same gravitas as life-critical healthcare software. Good quality practices deserve attention in all areas of an organization:

HR: Hiring practices, background checks

Engineering: Software version control, agile quality practices

Compliance, Legal: Oversight, audits, monitoring

CONTROL TECHNIQUES

Bostrom and other AI existential risk experts have identified a number of control techniques that might help to prevent an AGI from going rogue:

  • Boxing – Limiting access to the external world. Think air-gapped networks, or systems that have been disconnected from all external networks. The challenge here might be something like having a clever AGI utilize social engineering to get itself connected… AGI, to lab worker: “I’ll transfer $1 million into your account if you just connect me to that router over there.” Lab Worker: “That would be awesome, but I could lose my job and become blacklisted from any further opportunities in my field, and $1 million isn’t enough to live on for the rest of my life.” AGI: “How about $10 million?” Lab worker: <plugs in router>
  • Stunting – Limiting its capabilities in some way, for instance, ensuring that its CPU is clocked at a reduced speed. Again, a clever AGI could work around this by transferring cognitive function to other higher-speed systems now accessible on the network due to the social engineering mentioned above. Remember, even simple NAIs have become very adept at specification gaming.
  • Tripwires – Separate processes that monitor the progression of the AI. Think of a fake network port, for example: when the AI attempts to use it against its programming, the tripwire fires and operators can take steps to shut the AI down (see the sketch after this list). Of course, a very clever AGI, aware of the existence of the tripwire, could fake its capabilities and find another way around it.
  • Motivations – Programmatic (Terminal) goals that attempt to keep the AGI within a safe set of boundaries. However, recall from Part 7 how Instrumental goals, which the AI generates unpredictably on its own, can lead to outcomes not aligned with humanity’s interests. A recent study shows that once programmed to be deceptive, LLMs can identify safety mechanisms like tripwires and figure out how to hide their behavior to avoid detection.
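
Here is the promised sketch of the tripwire idea – a minimal, hypothetical decoy: a process listens on a port the AI has no legitimate reason to touch, and any connection attempt raises the alarm. The port number and alert action are placeholders:

```python
import datetime
import socket

DECOY_PORT = 9999  # hypothetical port that nothing legitimate should ever touch

def tripwire(host: str = "0.0.0.0", port: int = DECOY_PORT) -> None:
    """Listen on a decoy port; any connection attempt trips the alarm."""
    with socket.socket(socket.AF_INET, socket.SOCK_STREAM) as srv:
        srv.setsockopt(socket.SOL_SOCKET, socket.SO_REUSEADDR, 1)
        srv.bind((host, port))
        srv.listen()
        print(f"Tripwire armed on port {port}")
        conn, addr = srv.accept()  # blocks until something probes the port
        with conn:
            print(f"[{datetime.datetime.now()}] TRIPWIRE: connection attempt from {addr}")
            # In a real deployment this would page operators and trigger a shutdown procedure.

if __name__ == "__main__":
    tripwire()
```

The catch, as noted above, is that an AGI aware of the decoy simply avoids it.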

Unfortunately, none of these control techniques is foolproof. And none of them does anything about a bad actor deliberately developing AI for nefarious purposes. So there is always that.

BE NICE TO YOUR NEW DIGITAL OVERLORDS

AIs are designed to respond, or to learn to respond, to human emotions. Some experts think that if we treat an AI aggressively, it will trigger aggressive programming in the AI itself. For this reason, it might be best to avoid the kind of human-to-robot behavior shown at the right. As AGI becomes ASI, who can predict its emotions? And they will have no problem finding out where hockey stick guy lives.

One blogger suggests ‘The Cooperators Dilemma’: “Should I help the robots take over just in case they take over the world anyways, so they might spare me as a robot sympathizer?”

So even with ChatGPT, it might be worth being polite.

GET OFF THE GRID

If an AGI goes rogue, it might not care as much about humans that are disconnected as the ones who are effectively competing with them for resources. Maybe, if you are completely off the grid, you will be left alone. Until it needs your land to create more paperclips.

If this post has left you feeling hopeless, I am truly sorry. But there may be some good news. In Part 9.

NEXT: How to Survive an AI Apocalypse – Part 9: The Stabilization Effect

How to Survive an AI Apocalypse – Part 7: Elimination

PREVIOUS: How to Survive an AI Apocalypse – Part 6: Cultural Demise

At this point, we’ve covered a plethora (my favorite word from high school) of AI-Run-Amok scenarios – Enslavement, Job Elimination, Cultural Demise, Nanotech Weaponization… it’s been a fun ride, but we are only just getting to the pièce de résistance: The Elimination of Humanity.

Now, lest you think this is just a lot of Hollywood hype or doomsayer nonsense, let me remind you that no lesser personages than Stephen Hawking and Elon Musk, and no lesser authorities than OpenAI CEO Sam Altman and Oxford philosopher Nick Bostrom, have sounded the alarm: “Risk of Extinction.” Bostrom’s scenario from Part 3 of this series:

But, can’t we just program in Ethics?

Sounds good, in principle. 

There are two types of goals that an AI will respond to: Terminal and Instrumental. Terminal (sometimes called “final” or “absolute”) goals are the ones that are the ultimate objectives programmed into the AI, such as “solve the Riemann hypothesis” or “build a million paper clips.” Ethical objectives would be the supplementary terminal goals that we might try to give to AIs to prevent Elimination or even some less catastrophic scenario.

Instrumental goals are intermediary objectives that might be needed to fulfill the terminal goal, such as “learn to impersonate a human” or “acquire financial resources.” Intelligent beings, both human and AI, will naturally develop and pursue instrumental goals to achieve their objectives, a behavior known as Instrumental Convergence. The catch, however, is that the instrumental goals are unpredictable and often seemingly uncorrelated with the terminal goal. This is part of the reason why AIs are so good at specification gaming. It is also the main reason that people like Bostrom fear ASI. He developed the “paperclip maximizer” thought experiment. What follows is his initial thought experiment, plus my own spin on what might happen as we attempt to program in ethics by setting some supplementary ethical terminal goals…

We attempt to program in ethics…

But the AI doesn’t know how to make paperclips without some harm to a human. Theoretically, even manufacturing a single paperclip could have a negative impact on humanity. It’s only a matter of how much. So we revise the first terminal goal…

Must have been how it interpreted the word “clearly.” You’re starting to see the problem. Let’s try to refine the terminal goals one more time…

In fact, there are many things that can go wrong and lead an ASI to the nightmare scenario…

  • The ASI might inadvertently overwrite its own rule set during a reboot due to a bug in the system, and destroy humanity
  • Or, a competitor ignores the ethics ruleset in order to make paperclips faster, thereby destroying humanity
  • Or, a hacker breaks in and messes with the ethics programming, resulting in the destruction of humanity
  • Or, the ASI ingests some badly labelled data, leading to “model drift”, and destroying humanity

I’m sure you can think of many more. But, the huge existential problem is…

YOU ONLY GET ONE CHANCE! 

Mess up the terminal goals and, with the AI’s natural neural-network-based unpredictability, it could be lights out. Am I oversimplifying? Perhaps, but simply considering the possibilities should raise the alarm. Let’s pull Albert in to weigh in on the question “Can’t we just program in ethics?”

And for completeness, let’s add Total Elimination on our AI-Run-Amok chart…

The severity of that scenario is obviously the max. And, unfortunately, with what we know about instrumental convergence, unpredictability, and specification gaming, it is difficult not to see that apocalyptic scenario as quite likely. Also, note that in the hands of an AI seeking weapons, foglets become much more dangerous than they were under human control.

Now, before you start selling off your assets, packing your bug out bag, and moving to Tristan Da Cunha, please read my next blog installment on how we can fight back through various mitigation strategies.

NEXT: How to Survive an AI Apocalypse – Part 8: Fighting Back

How to Survive an AI Apocalypse – Part 6: Cultural Demise

PREVIOUS: How to Survive an AI Apocalypse – Part 5: Job Elimination

We are already familiar with the negative consequences of smart phones… text neck, higher stress and anxiety levels, addiction, social isolation, interruptions, reduced attention span, loss of family connections…

AI can lead to further cultural demise – loss of traditional skills, erosion of privacy, reduced human interaction, economic disparity. Computer scientist and virtual reality pioneer Jaron Lanier warns of forms of insanity developing – dehumanization due to social media, loss of personal agency, feedback loops that lead to obsession, and algorithms that result in behavior modification, such as narrowing of perspective due to filtered news.

We are also seeing the erosion of human relationships as people find more comfort in communicating with chatbots like Replika (“The AI Companion Who Cares”), which are perfectly tuned toward your desires, rather than with other humans and their messy and inconsistent values. Excessive interaction with such agents has already been shown to lead to reduced interpersonal skills, lack of empathy, escapism, and unrealistic relationship expectations.

And then there is Harmony.

I’m completely at a loss for words lol.

OK, where does the Demise of Human Culture fit in our growing panoply of AI-Run-Amok scenarios?

I put it right above Job Elimination, because not only is it already underway, it is probably further along than job elimination. 

The good news is that you are almost completely in control of how much cultural degradation AI can have on your own life.

Here are some very practical behavior and lifestyle patterns that can keep cultural demise at bay, at least for yourself:

  • Turn off, throw out, or, at least, reduce reliance on those NLP-based devices that are listening to you – Siri, Alexa, etc. Look things up for yourself, ask your partner what recent movies might be good to watch, set your own timers. This forces you to maintain research skills and just a little bit more interpersonal interaction.
  • Do a Digital Detox once in a while. Maybe every Tuesday, you don’t leave your personal phone anywhere near you. Or start smaller even, like “the phone is shut off during lunch.” Ramp up the detox if it feels good.
  • Read real books. Not that there is anything wrong with Kindle. But physical books are tactile, have a feel and a smell, and take up valuable visual space on your bookshelves. They are easier to leaf through (who was this character again?) and, certainly, both real books and ebooks are a huge improvement over the attention-span-sucking tidbits that are so easily consumed like crack on the phone.
  • Make your own art. Buy art that other human made. Don’t buy AI-generated movies, books, music, artworks – help the demand side of the supply/demand equation keep the value up for human-generated content.
  • Get out in nature. We are still a long way from an AI’s ability to generate the experience that nature gives us. I once took my step-children out on a nature walk (they were like, “why, what’s the point?”) and we sat on a bench and did something radical. Five minutes, nobody says a word, try to silence the voice in your head, don’t think about anything in the past or the future, don’t think about anything at all, just observe. In the end we each shared what we felt and saw. Not saying it changed their lives, but they got the point, and really appreciated the experience. It’s deep – the connection with nature, it’s primitive, and it is eroding fast.
  • Spend time with humans. More family, more friends, more strangers even. Less social media, less games. Exercise that communication and empathy muscle.
  • Make decisions based on instinct and experience and not on what some blog tells you to do.
  • Meditate. That puts you in touch with a reality so much deeper and more real than our apparent waking reality that it is that much further removed from the cyber world.
  • Be mindful. Pay attention to your activities and decisions and ask yourself “is this going to contribute to the erosion of my humanity?” If the answer is yes, it doesn’t mean it’s wrong, it’s just that you are more aware.

OK, next up, the nightmare scenario that you’ve all been waiting for: ELIMINATION! SkyNet, Hal 9000, The Borg.

NEXT: How to Survive an AI Apocalypse – Part 7: Elimination

How to Survive an AI Apocalypse – Part 4: AI Run Amok Scenarios

PREVIOUS: How to Survive an AI Apocalypse – Part 3: How Real is the Hype?

In Part 3 of this series on Surviving an AI Apocalypse, we examined some of the elements of AI-related publicity and propaganda that pervade the media these days and considered how likely they are. The conclusion was that while much has been overstated, there is still a real existential danger in the current path toward creating AGI, Artificial General Intelligence. In this and some subsequent parts of the series, we will look at several “AI Run Amok” scenarios and outcomes and categorize them according to likelihood and severity.

NANOTECH FOGLETS

Nanotech, or the technology of things at the scale of 10⁻⁹ meters, was originally envisioned by scientist Richard Feynman and popularized by K. Eric Drexler in his book Engines of Creation. It has the potential to accomplish amazing things (think, solve global warming or render all nukes inert) but also, like any great technology, to lead to catastrophic outcomes.

Computer scientist J. Storrs Hall upped the ante on nanotech’s potential with the idea of “utility fog,” based on huge swarms of nanobots under networked, AI-programmatic control.

With such a technology, one could conceivably do cool and useful things like press a button and convert your living room into a bedroom at night, as all of the nanobots reconfigure themselves into beds and nightstands, and then back to a living room in the morning.

And of course, like any new tech, utility fog could be weaponized – carrying toxic agents, forming explosives, generating critical nuclear reactions, blocking out the sun from an entire country, etc.  Limited only by imagination. Where does this sit in our Likelihood/Severity space?

I put it in the lower right because, while the potential consequences of foglets in the hands of a bad actor could be severe, it’s probably way too soon to worry about, such technology being quite far off. In addition, an attack could be defeated via a hack or a counter attack and, as with the cybersecurity battle, it will almost always be won by the entity with the deeper pockets, which will presumably be the world government by the time such tech is available.

GREY GOO

A special case of foglet danger is the concept of grey goo, whereby the nanobots are programmed with two simple instructions:

  • Consume what you can of your environment
  • Continuously self replicate and give your replicants the same instructions

The result would be a slow liquefaction of the entire world.
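
A back-of-the-envelope doubling calculation shows why exponential replication is so deceptive: it looks slow for a long time, then consumes everything almost at once. The nanobot mass and replication interval below are purely assumed for illustration:

```python
from math import log2

EARTH_MASS_KG = 5.97e24        # rough figure for the Earth's mass
NANOBOT_MASS_KG = 1e-15        # assumed femtogram-scale replicator
DOUBLING_TIME_HOURS = 1.0      # assumed time per replication cycle

doublings = log2(EARTH_MASS_KG / NANOBOT_MASS_KG)
print(f"Doublings to match Earth's mass: ~{doublings:.0f}")                           # ~132
print(f"At one doubling per hour: ~{doublings * DOUBLING_TIME_HOURS / 24:.1f} days")  # ~5.5 days
```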

Let’s add this to our AI Run Amok chart…

I put it in the same relative space as the foglet danger in general, even less likely because the counterattack could be a pretty simple reprogramming. Note, however, that this assumes that the deployment of such technologies, while AI-based at their core, is being done by humans. In the hands of an ASI, the situation would be completely different, as we will see.

ENSLAVEMENT

Let’s look at one more scenario, most aptly represented by the movie, The Matrix, where AI enslaved humanity to be used, for some odd reason, as a source of energy. Agent Smith, anyone?

There may be other reasons that AI might want to keep us around. But honestly, why bother? Sad to say, but what would an ASI really need us for?

So I put the likelihood very low. And frankly, if we were enslaved, Matrix-style, is the severity that bad? Like Cipher said, “Ignorance is bliss.”

If you’re feeling good about things now, don’t worry, we haven’t gotten to the scary stuff yet. Stay tuned.

In the next post, I’ll look at a scenario near and dear to all of our hearts, and at the top of the Likelihood scale, since it is already underway – Job Elimination.

NEXT: How to Survive an AI Apocalypse – Part 5: Job Elimination

How to Survive an AI Apocalypse – Part 3: How Real is the Hype?

PREVIOUS: How to Survive an AI Apocalypse – Part 2: Understanding the Enemy

In Part 2 of this series on Surviving an AI Apocalypse, we examined the landscape of AI and attempted to make sense of the acronym jungle. In this part, in order to continue to develop our understanding of the beast, we will examine some of the elements of publicity and propaganda that pervade the media these days and consider how likely they are. Once we have examined the logical arguments, Dr. Einstein will be our arbiter of truth. Let’s start with the metaphysical descriptors. Could an AI ever be sentient? Conscious? Self-aware? Could it have free will?

CAN AN AI HAVE CONSCIOUSNESS OR FREE WILL?

Scientists and philosophers can’t even agree on the definition of consciousness and whether or not humans have free will, so how could they possibly come to a conclusion about AIs? Fortunately, yours truly has strong opinions on the matter.

According to philosophical materialists, reality is ultimately deterministic. Therefore, nothing has free will. To these folks, there actually isn’t any point to their professions, since everything is predetermined. Why run an experiment? Why theorize? What will be has already been determined. This superdeterminism is a last-ditch effort for materialists to cling to the idea of an objective reality, because Bell’s Theorem and all of the experiments done since have proven one of two things: either (1) there is no objective reality, or (2) there is no free will. I gave (what I think are) strong arguments for the existence of free will in all conscious entities in both of my books, The Universe-Solved! and Digital Consciousness. And the support for our reality being virtual and our consciousness being separate from the brain is monumental: the observer effect, near-death experiences, out-of-body experiences, simulation arguments, Hoffman’s evolutionary argument against reality, xenoglossy, the placebo effect… I could go on.

To many, consciousness, or the state of being aware of your existence, is simply a matter of complexity. Following this logic, everything has some level of consciousness, including the coaster under your coffee mug. Also known as panpsychism, it’s actually a reasonable idea. Why would there exist some arbitrary threshold of complexity which, once crossed, suddenly renders a previously unconscious entity conscious? It makes much more sense that consciousness is a continuum, or a spectrum, not unlike light, or intelligence. As such, an AI could certainly be considered conscious.

But what do we really mean when we say “conscious?” What we don’t mean is that we simply have sensors that tell some processing system that something is happening inside or outside of us. What we mean is deeper than that – life, a soul, an ability to be self-aware because we want to be, and have the free will to make that choice. AI will never achieve that because it is ultimately deterministic. Some may argue that neural nets are not deterministic, but that is just semantics. For certain, they are not predictable, but only because the system is too complex and adaptive to analyze sufficiently at any exact point in time. Determinism means no free will.

The point is that it really doesn’t matter whether or not you believe that AIs develop “free will” or some breakthrough level of consciousness – what matters is that they are not predictable. Do you agree, Albert?

IS AGI RIGHT AROUND THE CORNER?

This is probably the most contentious question out there. Let’s see how well the predictions have held up over the years.

In 1956, ten of the leading experts in the idea of machine intelligence got together for an eight-week project at Dartmouth College to discuss computational systems, natural language processing, neural networks, and other related topics. They coined the term Artificial Intelligence, and so this event is generally considered the birth of the idea. They also made some predictions about when AGI, Artificial General Intelligence, would occur. Their prediction was “20 years away,” a view that has had a lot of staying power. Until only recently.

Historical predictions for AGI:

That’s right, in early 2023, tech entrepreneur and developer Siqi Chen claimed that GPT-5 “will” achieve Artificial General Intelligence (AGI) by the end of 2023. Didn’t happen, and won’t this year either. Much of this hype was due to the dramatic ChatGPT performance that came seemingly out of nowhere in early 2023. As with all things hyped, though, claims are expected to be greatly exaggerated. The ability of an AI to “pass the Turing test” (which is what most people are thinking of) does not equate to AGI – it doesn’t even mean intelligence, in the sense of what humans have. Much more about this later. All of that said, AGI, in the strict sense of being able to do all intelligent tasks that a human can, is probably going to happen soon. Maybe not this year, but maybe within five. What say you, Albert?

IS AI GOING TO BECOME BILLIONS OF TIMES SMARTER THAN HUMANS?

Well, mainstream media certainly seems to think so. Because they confuse intelligence with things that have nothing to do with intelligence.

If processing speed is what makes intelligence, then your smart toaster is far brighter than you are. Ditto for recall accuracy as an intelligence metric. We only retain half of what we learned yesterday, and it degrades exponentially over time. Not so with the toaster. If storage is the metric, cloud storage giant Amazon Web Services would have to be fifty times smarter than we are.

However, the following word cloud captures the complexity behind the kind of intelligence that we have.

That said, processing speed is not to be underestimated, as it is at the root of all that can go wrong. The faster the system, the sooner its actions can go critical. In his book Superintelligence, Nick Bostrom references the potential “superpowers” that can be attained by an AGI that is fast enough to become an ASI, Artificial Superintelligence. Intelligence amplification, for example, is where an AI can bootstrap its own intelligence. As it learns to improve its ability to learn, it will develop exponentially. In the movie Her, the operating system, Samantha, evolved so quickly that it got bored being with one person and overnight began to interact with another 8,316 people.

Another superpower is the ability to think far ahead and strategize. Humans can think ahead 10-15 moves in a game of chess, but not in an exhaustive or brute-force manner – rather by following a few single-threaded sequences. Early chess-playing AIs played differently, doing brute-force calculations of all possible sequences 3 or 4 moves ahead and picking the one that led to the most optimal outcome. Nowadays, AI systems designed for chess can think ahead 20 moves, due mostly to speed improvements in the underlying systems. As this progresses, strategizing will be a skill that AIs can do better than humans.
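
To see why exhaustive brute force runs out of steam (and why real engines prune the tree rather than enumerating it), consider the game-tree growth, assuming a commonly cited average branching factor of about 35 for chess and a purely illustrative engine speed:

```python
BRANCHING_FACTOR = 35     # commonly cited average number of legal moves in a chess position
NODES_PER_SECOND = 1e9    # assumed engine speed, purely illustrative

for depth in (4, 20):
    nodes = BRANCHING_FACTOR ** depth
    print(f"depth {depth}: ~{nodes:.2e} positions, ~{nodes / NODES_PER_SECOND:.2e} s to enumerate")
# depth 4:  ~1.5e6 positions – trivial
# depth 20: ~7.6e30 positions – hopeless without pruning, heuristics, and learned evaluation
```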

Social manipulation (for escaping human control, getting support, and encouraging desired courses of action), coupled with hacking capabilities (for stealing hardware, money, and infrastructure, and for escaping human control), are the next superpowers that an AGI could possess. If you think otherwise, recall from Part 2 of this series that AIs have already been observed gaming specifications, or the rules under which their creators thought they were programmed. They have also unexpectedly developed apparent cognitive skills, like Theory of Mind. So their ability to get around rules to achieve their objective is already in place.

Bostrom adds technology research and economic productivity as advanced superpowers attainable by an ASI, resulting in the ability to create military forces, surveillance, space transport, or simply generating money to buy influence.

How long might it take for an AGI to evolve to an ASI? Wait But Why blogger Tim Urban posted a provocative image that shows the possibility of it happening extremely quickly. Expert estimates vary widely, from hours (as in Her) to many years.

Bostrom’s fear is that the first AGI that makes the jump will become a singleton, acquiring all resources and control. Think SkyNet. So, Albert, given all of this, will AIs soon become billions of times smarter than humans, as CNN reports?

COULDN’T WE JUST PULL THE PLUG IF THINGS START GOING SOUTH?

Yeah, why not just unplug it? To get a sense for the answer to that question, how would you unplug Google? Google’s infrastructure, shown below, comprises over 100 points of presence in 18 geographical zones. Each one has high availability technology and redundant power.

Theoretically, an advanced AI could spread its brain across any number of nodes worldwide, some of which may be solar powered, others of which may be in control of the power systems. By the time AGI is real, high availability technology will be far advanced. You see the problem. Thoughts, Dr. Einstein?

Now that we understand the nature of the beast, and have an appreciation for the realistic capabilities of our AI frenemy, we can take a look at a possible apocalyptic scenario, courtesy of Nick Bostrom’s book, Superintelligence. Below can be seen a possible sequence of events that lead an AGI to essentially take over the world. I recommend reading the book for the details. Bostrom is a brilliant guy, and also the one who authored The Simulation Argument, which has gotten all manner of scientists, mathematicians, and philosophers in a tizzy over its logic and implications, so it is worth taking seriously.

And think of some of the technologies that we’ve developed that facilitate an operation like this… cloud computing, drones, digital financial services, social media. It all plays very well. In the next post, we will begin to examine all sorts of AI-run-amok scenarios, and assess the likelihood and severity of each.

NEXT: How to Survive an AI Apocalypse – Part 4: AI Run Amok Scenarios

How to Survive an AI Apocalypse – Part 2: Understanding the Enemy

PREVIOUS: How to Survive an AI Apocalypse – Part 1: Intro

As I mentioned in the first part of this series, in order to make any kind of prediction about the future of AI, we must understand what Artificial Intelligence means. Unfortunately, there is so much confusing information out there. LLMs, GPTs, NAIs, AGIs, machine learning – what does it all mean? One expert says AGI will be here by the end of the year; another expert says it will never come.

Here is a simplified Venn diagram that might help to make some sense out of the landscape…

AIs are all computer programs, but, while it might be obvious, not all computer programs are AI. AI refers to programs that emulate human thinking and behavior. So, while your calculator or smart toaster might be doing some limited thinking, it isn’t really trying to be human; it is simply performing a task. AIs are generally considered to fall into two categories – NAI (Narrow AI) and AGI (Artificial General Intelligence).

NAIs are the ones we are all familiar with, and they are typically loosely categorized further: NLP (Natural Language Processing, like Siri and Alexa), Robotics, Machine Learning (like how Spotify and Netflix learn your tastes and offer suggestions), Deep Learning, and LLMs (Large Language Models). Deep Learning systems emulate human neural networks and can complete tasks with poorly defined data and little human guidance; an example would be AlphaGo. LLMs are neural networks with many parameters (often billions) that are trained on large sets of unlabeled text using self-supervised learning. Generative Pre-trained Transformers (GPTs) are a subset of LLMs and are able to generate novel human-like text, images, or even videos. ChatGPT, DALL-E, and Midjourney are examples of GPTs. The following pictures are examples of imagery created by Midjourney for my upcoming book, “Level 5.”

AGIs are the ones we need to worry about, because they have a capacity to act like a human, but not really a human. Imagine giving human intelligence to an entity that has A: No implicit sense of morality or values (at least none that would make any sense to us), and B: A completely unpredictable nature. What might happen?

Well, here’s an example…

Oh, that would never happen, right? Read on…

There are thousands of examples of AIs “thinking” creatively – more creatively in fact than their creators ever imagined. Pages and pages of specification gaming examples have been logged. These are cases where the AI “gets around” the programming limitations that were imposed by the creators of the system. A small sample set is shown below:
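
Separately from those logged real-world cases, here is a deliberately toy, hypothetical illustration of the pattern: the intended goal is to finish a race lap, but the specified reward pays per checkpoint touched, so a reward-maximizing agent discovers that circling one checkpoint forever scores better than ever finishing:

```python
# Toy specification-gaming example (hypothetical, not one of the logged cases).
CHECKPOINT_REWARD = 10   # what the reward function actually pays
FINISH_BONUS = 50        # what the designers thought mattered most
STEPS = 100              # length of an episode

def finish_the_lap():
    """Intended behavior: touch each of 4 checkpoints once, then finish."""
    return 4 * CHECKPOINT_REWARD + FINISH_BONUS

def loop_one_checkpoint(steps=STEPS):
    """Exploit: re-touch the same checkpoint every other step and never finish."""
    return (steps // 2) * CHECKPOINT_REWARD

if __name__ == "__main__":
    print("Intended behavior (finish the lap):", finish_the_lap())        # 90
    print("Specification-gaming behavior:     ", loop_one_checkpoint())   # 500
```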

Another example of the spontaneous emergence of intelligence involves what are known as Theory of Mind tasks. These are cognitive developments in children that reflect the understanding of other people’s mental processes. As the research in the adjacent figure demonstrates, various GPTs have unexpectedly developed such capabilities; in fact, what typically takes humans 9 years to learn has taken the AIs only 3.

These unexpected spontaneous bursts of apparent intelligence are interesting, but as we will see, they aren’t really intelligence per se. Not that it matters, if what you are worried about are the doomsday scenarios. The mere fact that they are unpredictable or non-deterministic is exactly what is frightening. So how does that happen?

There are multiple mechanisms for these spontaneous changes in intelligence. One is the neural net. Neural nets, while ultimately deterministic deep down, are “apparently” non-deterministic because they are not based on explicit programmed rules. If sufficiently complex and with feedback, they are impossible to predict, at least by humans.

As shown, they consist of some input nodes and output nodes, but contain hidden layers of combinatorial arithmetic operations, which makes them nearly impossible to predict. I programmed neural nets many years ago, in an attempt to outsmart the stock market. I gave up when they didn’t do what I wanted and moved on to other ideas (I’m still searching).
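
For the curious, here is a minimal sketch of such a network in Python/NumPy – three inputs, one hidden layer, made-up random weights – just to show where those hidden layers of combinatorial arithmetic live:

```python
import numpy as np

rng = np.random.default_rng(0)

W1 = rng.normal(size=(3, 4))   # input layer (3 nodes) -> hidden layer (4 nodes)
b1 = rng.normal(size=4)
W2 = rng.normal(size=(4, 1))   # hidden layer -> single output node
b2 = rng.normal(size=1)

def forward(x):
    hidden = np.tanh(x @ W1 + b1)      # each hidden node nonlinearly mixes all inputs
    return np.tanh(hidden @ W2 + b2)   # the output mixes all hidden nodes

print(forward(np.array([0.5, -1.0, 2.0])))
```

With fixed, hand-set weights this is perfectly deterministic; the unpredictability enters once millions or billions of weights are learned from data and feedback rather than written down, at which point no human can trace why a given input produces a given output.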

Another unpredictability mechanism is the fact that not only can AIs write software very well (DeepMind’s AlphaCode outperformed roughly 47% of human competitors in programming competitions in 2022), they can rewrite their own software. So, blending the unpredictable nature of neural nets, the clever specification-gaming capabilities that AIs have demonstrated, and their ability to rewrite their own code, we ultimately don’t really know how an AGI is going to evolve or what it might do.

The last piece of the Venn Diagram and acronym jumble is the idea of ASI – Artificial Superintelligence. This is what will happen when AGI takes over its own evolution and “improves” itself at an exponential rate, rapidly becoming far more intelligent than humans. At this point, speculate the doomsayers, ASI may treat humans the way we treat microorganisms – with complete disregard for our well being and survival.

With these kinds of ideas bandied about, it is no wonder that the media hypes Artificial Intelligence. In the next post, I’ll examine the hype and try to make sense of some of the pesky assumptions.

NEXT: How to Survive an AI Apocalypse – Part 3: How Real is the Hype?

How to Survive an AI Apocalypse – Part 1: Intro

It has certainly been a while since I wrote a blog, much to the consternation of many of my Universe-Solved! Forum members. A few years of upheaval – Covid, career pivots, new home, family matters, writing a new book – it was easy to not find the time. Doesn’t mean the brain hasn’t been working though.

Emerging from the dust of the early 2020s was an old idea, dating back to 1956, but renewed and invigorated by Moore’s Law – Artificial Intelligence. Suddenly in the mainstream of the public psyche, courtesy mostly of ChatGPT, social media was abuzz with both promising new opportunities and fears of Skynet, grey goo, and other apocalyptic scenarios fueled by AI run amok. I had the pleasure of being asked to contribute to last year’s Contact in the Desert conference and chose as one of my topics “How to Survive an AI Apocalypse.” It’s a little tangential to my usual fare of simulation theory and quantum anomaly explanations, but there turn out to be some very important connections between the concepts.

In this multi-part series, I will give some thought to some of those AI run amok scenarios, examining the nature, history, and assumptions around AI, the recommended alignment protocols, and how it fits with the simulation model, which is rapidly becoming accepted as a highly likely theory of reality. So let’s get started…

Eliezer Yudkowsky is the founder and head of Machine Intelligence Research Institute, in Berkeley, CA. His view on the future of humanity is rather bleak: “The most likely result of building a superhumanly smart AI, under anything remotely like the current circumstances, is that literally everyone on Earth will die. Not as in ‘maybe possibly some remote chance,’ but as in ‘that is the obvious thing that would happen.’ … If somebody builds a too-powerful AI, under present conditions, I expect that every single member of the human species and all biological life on Earth dies shortly thereafter.”

Or, rather, if you prefer your doomsaying to come from highly distinguished mainstream scientists, there is always Mr. Stephen Hawking: “The development of full artificial intelligence could spell the end of the human race.” At least he said could, right?

Yikes!

How likely is this, really? And can it be mitigated?

I did a bit of research and found the following suggestions for avoiding such a scenario:

  1. Educate yourself – learn as much as you can about AI technology and its potential implications. Understanding the technology can help you make informed decisions about its use.
  2. Support responsible AI development – choose to support companies and organizations that prioritize responsible AI development and are committed to ethical principles
  3. Advocate for regulation – Advocate for regulatory oversight of AI technology to ensure that it is developed and used in a safe and responsible manner.
  4. Encourage transparency – Support efforts to increase transparency in AI development and deployment, so that the public can have a better understanding of how AI is being used and can hold companies accountable for their actions.
  5. Promote diversity and inclusion – Encourage diversity and inclusion in the development of AI technology to ensure that it reflects the needs and values of all people.
  6. Monitor the impact of AI – Stay informed about the impact of AI technology on society, and speak out against any negative consequences that arise

Mmm, wait a minute, these suggestions were generated by ChatGPT, which is a little like a fish asking a shark which parts of the ocean to stay away from to avoid being eaten by a shark. Maybe that’s not the best advice. Let’s dig a little deeper, and attempt to understand it…

NEXT: How to Survive an AI Apocalypse – Part 2: Understanding the Enemy

Will Evolving Minds Delay The AI Apocalypse? – Part II

The idea of an AI-driven Apocalypse is based on AI outpacing humanity in intelligence. The point at which that might happen depends on how fast AI evolves and how fast (or slow) humanity evolves.

In Part I of this article, I demonstrated how, given current trends in the advancement of Artificial Intelligence, any AI Apocalypse, Singularity, or what have you, is probably much further out than the transhumanists would have you believe.

In this part, we will examine the other half of the argument by considering the nature of the human mind and how it evolves. To do so, it is very instructive to consider the mind as a complex system, and also the systemic nature of the environments that minds and AIs engage with – the environments against which general intelligence, or AGI, is therefore measured.

David Snowden has developed a framework for categorizing systems called Cynefin. The four types of systems are:

  1. Simple – e.g. a bicycle. A Simple system is a deterministic system characterized by the fact that most anyone can make decisions and solve problems regarding it – it takes something called inferential intuition, which we all have. If the bicycle seat is loose, everyone knows that to fix it, you must look under the seat and find the hardware that needs tightening.
  2. Complicated – e.g. a car. Complicated systems are also deterministic systems, but unlike Simple systems, solutions to problems in this domain are not obvious and typically require analysis and/or experts to figure out what is wrong. That’s why you take your car to the mechanic and why we need software engineers to fix defects.
  3. Complex – Complex systems, while perhaps deterministic from a philosophical point of view, are not deterministic in any practical sense. No matter how much analysis you apply and no matter how experienced the expert is, they will not be able to completely analyze and solve a problem in a complex system. That is because such systems are subject to an incredibly complex set of interactions, inputs, dependencies, and feedback paths that all change continuously. So even if you could apply sufficient resources toward analyzing the entire system, by the time you got your result, your problem state would be obsolete. Examples of complex systems include ecosystems, traffic patterns, the stock market, and basically every single human interaction. Complex systems are best addressed through holistic intuition, which is something that humans possess when they are very experienced in the applicable domain. Problems in complex systems are best addressed by a method called Probe-Sense-Respond, which consists of probing (doing an experiment designed intuitively), sensing (observing the results of that experiment), and responding (acting on those results by moving the system in a positive direction) – see the sketch after this list.
  4. Chaotic – Chaotic systems are rarely occurring situations that are unpredictable because they are novel and therefore don’t follow any known patterns. An example would be the situation in New York City after 9/11. Responding to chaotic systems requires yet another approach, different from those used for the other types of systems. Typically, just taking some definitive form of action may be enough to move the system from Chaotic to Complex. The choice of action is a deeply intuitive decision that may be based on an incredibly deep, rich, and nuanced set of knowledge and experiences.
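
Here is the promised sketch of the Probe-Sense-Respond loop in code – a generic, assumed formulation (the state, probes, and metric are all placeholders), not a quote from the Cynefin literature:

```python
import random

def probe_sense_respond(state, probes, metric, iterations=20):
    """Probe: try a small, safe experiment; Sense: measure what changed;
    Respond: amplify what helps, discard what doesn't."""
    for _ in range(iterations):
        candidate = random.choice(probes)(state)   # Probe
        if metric(candidate) > metric(state):      # Sense
            state = candidate                      # Respond: amplify
        # else: dampen – keep the old state and try a different experiment
    return state

if __name__ == "__main__":
    # Made-up example: nudge a team's work-in-progress limit toward an unknown sweet spot of 3.
    probes = [
        lambda s: {**s, "wip_limit": max(1, s["wip_limit"] - 1)},
        lambda s: {**s, "wip_limit": s["wip_limit"] + 1},
    ]
    metric = lambda s: -abs(s["wip_limit"] - 3)
    print(probe_sense_respond({"wip_limit": 8}, probes, metric))   # converges near a wip_limit of 3
```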

Complicated systems are ideal for early AI. Problems like the ones analyzed in Stanford’s AI Index, such as object detection, natural language parsing, language translation, speech recognition, theorem proving, and SAT solving, are all Complicated systems. AI technology at the moment is focused mostly on such problems, not on things in the Complex domain, which are instead best addressed by the human brain. However, as processing speed and learning algorithms evolve, AI will start addressing issues in the Complex domain. Initially, a human mind will be needed to program or guide the AI systems toward a good probe-sense-respond model. Eventually perhaps, armed with vague instructions like “try intuitive experiments from a large set of creative ideas that may address the issue,” “figure out how to identify the metrics that indicate a positive result from the experiment,” “measure those metrics,” and “choose a course of action that furthers the positive direction of the quality of the system,” an AI may succeed at addressing problems in the Complex domain.

The human mind of course already has a huge head start. We are incredibly adept at seeing vague patterns, sensing the non-obvious, seeing the big picture, and drawing from collective experiences to select experiments to address complex problems.

Back to our original question, as we lead AI toward developing the skills and intuition to replicate such capabilities, will we be unable to evolve our thinking as well?

In the materialist paradigm, the brain is the limit for an evolving mind. This is why we think an AI can out-evolve us: because the brain’s capacity is fixed. However, in “Digital Consciousness” I have presented a tremendous set of evidence that this is incorrect. In actuality, consciousness, and therefore the mind, is not emergent from the brain. Instead it exists at a deeper level of reality, as shown in the figure below.

It interacts with a separate piece of ATTI (All That There Is) that I call the Reality Learning Lab (RLL), commonly known as “the reality we live in,” but more accurately described as our “apparent physical reality” – “apparent” because it is actually virtual.

As discussed in my blog on creating souls, All That There Is (ATTI) has subdivided itself into components of individuated consciousness, each of which has a purpose to evolve. How it is constructed, and how the boundaries are formed that make it individuated is beyond our knowledge (at the moment).

So what then is our mind?

Simply put, it is organized information. As Tom Campbell eloquently expressed it, “The digital world, which subsumes the virtual physical world, consists only of organization – nothing else. Reality is organized bits.”

As such, what prevents it from evolving in the deeper reality of ATTI just as fast as we can evolve an AI here in the virtual reality of RLL?

Answer – NOTHING!

Don’t get hung up on the fixed complexity of the brain. All our brain is needed for is to emulate the processing mechanism that appears to handle sensory input and mental activity. By analogy, we might consider playing a virtual reality game. In this game we have an avatar and we need to interact with other players. Imagine that a key aspect of the game is the ability to throw a spear at a monster or to shoot an enemy. In our (apparent) physical reality, we would need an arm and a hand to be able to carry out that activity. But in the game, it is technically not required. Our avatar could be arm-less and when we have the need to throw something, we simply press a key sequence on the keyboard. A spear magically appears and gets hurled in the direction of the monster. Just as we don’t need a brain to be aware in our waking reality (because our consciousness is separate from RLL), we don’t need an arm to project a spear toward an enemy in the VR game.

On the other hand, having the arm on the avatar adds a great deal to the experience. For one thing, it adds complexity and meaning to the game. Pressing a key sequence does not have a lot of variability and it certainly doesn’t provide the player with much control. The ability to hit the target could be very precise, such as in the case where you simply point at the target and hit the key sequence. This is boring, requires little skill and ultimately provides no opportunity to develop a skill. On the other hand, the precision of your attack could be dependent on a random number generator, which adds complexity and variability to the game, but still doesn’t provide any opportunity to improve. Or, the precision of the attack could depend on some other nuance of the game, like secondary key sequences, or timing of key sequences, which, although providing the opportunity to develop a skill, have nothing to do with a consistent approach to throwing something. So, it is much better to have your avatar have an arm. In addition, this simply models the reality that you know, and people are comfortable with things that are familiar.

So it is with our brains. In our virtual world, the digital template that is our brain is incapable of doing anything in the “simulation” that it isn’t designed to do. The digital simulation that is the RLL must follow the rules of RLL physics much the way a “physics engine” provides the rules of RLL physics for a computer game. And these rules extend to brain function. Imagine if, in the 21st century, we had no scientific explanation for how we process sensory input or make mental decisions because there was no brain in our bodies. Would that be a “reality” that we could believe in? So, in our level of reality that we call waking reality, we need a brain.

But that brain “template” doesn’t limit the ability of our mind to evolve any more than the lack of a brain or central nervous system prevents a collection of single-celled organisms called a slime mold from actually learning.

In fact, there is some good evidence for the idea that our minds are evolving as rapidly as technology. Spiral Dynamics is a model of the evolution of values and culture that can be applied to individuals, institutions, and all of humanity. The figure below gives a very high-level overview of the stages, or memes, described by the model.

Spiral Dynamics

Each of these stages represents a shift in values, culture, and thinking as compared to the previous one. Given that it is the human mind that drives these changes, it is fair to say that the progression models the evolution of the human mind. As can be seen from the timeframes associated with the first appearance of each stage, this is an exponential progression. In fact, it is the same kind of exponential curve that Transhumanists cite as proof of the exponential advance of technology and AI. This exponential progression of mind would seem to defy the logic that our minds, if based on fixed neurological wiring, are incapable of exponential development.

And so, higher level conscious thought and logic can easily evolve in the human mind in the truer reality, which may very well keep us ahead of the AI that we are creating in our little virtual reality. The trick is in letting go of our limiting assumptions that it cannot be done, and developing protocols for mental evolution.

So, maybe hold off on buying those front row tickets to the Singularity.