How to Survive an AI Apocalypse – Part 1: Intro

It has certainly been a while since I wrote a blog, much to the consternation of many of my Universe-Solved! Forum members. After a few years of upheaval – Covid, career pivots, a new home, family matters, writing a new book – it was easy not to find the time. That doesn’t mean the brain hasn’t been working, though.

Emerging from the dust of the early 2020s was an old idea, dating back to 1956, but renewed and invigorated by Moore’s Law – Artificial Intelligence. Suddenly in the mainstream of the public psyche, courtesy mostly of ChatGPT, social media was abuzz with both promising new opportunities and fears of Skynet, grey goo, and other apocalyptic scenarios fueled by AI run amok. I had the pleasure of being asked to contribute to last year’s Contact in the Desert conference and chose as one of my topics “How to Survive an AI Apocalypse.” It’s a little tangential to my usual fare of simulation theory and quantum anomaly explanations, but there turn out to be some very important connections between the concepts.

In this multi-part series, I will give some thought to those AI-run-amok scenarios, examining the nature, history, and assumptions around AI, the recommended alignment protocols, and how it all fits with the simulation model, which is rapidly gaining acceptance as a plausible theory of reality. So let’s get started…

Eliezer Yudkowsky is the founder and head of the Machine Intelligence Research Institute in Berkeley, CA. His view on the future of humanity is rather bleak: “The most likely result of building a superhumanly smart AI, under anything remotely like the current circumstances, is that literally everyone on Earth will die. Not as in ‘maybe possibly some remote chance,’ but as in ‘that is the obvious thing that would happen.’ … If somebody builds a too-powerful AI, under present conditions, I expect that every single member of the human species and all biological life on Earth dies shortly thereafter.”

Or, rather, if you prefer your doomsaying to come from highly distinguished mainstream scientists, there is always Stephen Hawking: “The development of full artificial intelligence could spell the end of the human race.” At least he said “could,” right?

Yikes!

How likely is this, really? And can it be mitigated?

I did a bit of research and found the following suggestions for avoiding such a scenario:

  1. Educate yourself – Learn as much as you can about AI technology and its potential implications. Understanding the technology can help you make informed decisions about its use.
  2. Support responsible AI development – Choose to support companies and organizations that prioritize responsible AI development and are committed to ethical principles.
  3. Advocate for regulation – Advocate for regulatory oversight of AI technology to ensure that it is developed and used in a safe and responsible manner.
  4. Encourage transparency – Support efforts to increase transparency in AI development and deployment, so that the public can have a better understanding of how AI is being used and can hold companies accountable for their actions.
  5. Promote diversity and inclusion – Encourage diversity and inclusion in the development of AI technology to ensure that it reflects the needs and values of all people.
  6. Monitor the impact of AI – Stay informed about the impact of AI technology on society, and speak out against any negative consequences that arise.

Mmm, wait a minute – these suggestions were generated by ChatGPT, which is a little like a fish asking a shark which parts of the ocean to stay away from to avoid being eaten by a shark. Maybe that’s not the best advice. Let’s dig a little deeper and attempt to understand the technology ourselves…

NEXT: How to Survive an AI Apocalypse – Part 2: Understanding the Enemy
