How to Survive an AI Apocalypse – Part 6: Cultural Demise

PREVIOUS: How to Survive an AI Apocalypse – Part 5: Job Elimination

We are already familiar with the negative consequences of smartphones… text neck, higher stress and anxiety levels, addiction, social isolation, constant interruptions, reduced attention spans, loss of family connections…

AI can lead to further cultural demise – loss of traditional skills, erosion of privacy, reduced human interaction, economic disparity. Virtual reality pioneer and longtime tech critic Jaron Lanier warns of forms of insanity developing – dehumanization due to social media, loss of personal agency, feedback loops that lead to obsession, and algorithms that result in behavior modification, such as the narrowing of perspective caused by filtered news.

We are also seeing the erosion of human relationships as people find more comfort in communicating with chatbots like Replika (“The AI Companion Who Cares”), which are perfectly tuned to your desires, than with other humans and their messy, inconsistent values. Excessive interaction with such agents has already been shown to lead to reduced interpersonal skills, lack of empathy, escapism, and unrealistic relationship expectations.

And then there is Harmony.

I’m completely at a loss for words lol.

OK, where does the Demise of Human Culture fit in our growing panoply of AI-Run-Amok scenarios?

I put it right above Job Elimination, because not only is it already underway, it is probably further along than job elimination. 

The good news is that you are almost completely in control of how much cultural degradation AI inflicts on your own life.

Here are some very practical behavior and lifestyle patterns that can keep cultural demise at bay, at least for yourself:

  • Turn off, throw out, or at least reduce reliance on those NLP-based devices that are listening to you – Siri, Alexa, etc. Look things up for yourself, ask your partner what recent movies might be good to watch, set your own timers. This forces you to maintain your research skills and preserves just a little bit more interpersonal interaction.
  • Do a Digital Detox once in a while. Maybe every Tuesday, you don’t leave your personal phone anywhere near you. Or start smaller even, like “the phone is shut off during lunch.” Ramp up the detox if it feels good.
  • Read real books. Not that there is anything wrong with Kindle. But real books are tactile – they have a feel and a smell, and they take up valuable visual space on your bookshelves. They are easier to leaf through (who was this character again?), and, certainly, both real books and ebooks are a huge improvement over the attention-span-sucking tidbits that are so easily consumed like crack on the phone.
  • Make your own art. Buy art that other humans made. Don’t buy AI-generated movies, books, music, or artwork – help the demand side of the supply/demand equation keep the value of human-generated content up.
  • Get out in nature. We are still a long way from an AI’s ability to generate the experience that nature gives us. I once took my stepchildren out on a nature walk (they were like, “why, what’s the point?”) and we sat on a bench and did something radical. Five minutes, nobody says a word, try to silence the voice in your head, don’t think about anything in the past or the future, don’t think about anything at all, just observe. At the end, we each shared what we felt and saw. I’m not saying it changed their lives, but they got the point and really appreciated the experience. The connection with nature is deep, it’s primitive, and it is eroding fast.
  • Spend time with humans. More family, more friends, even more strangers. Less social media, fewer games. Exercise that communication and empathy muscle.
  • Make decisions based on instinct and experience and not on what some blog tells you to do.
  • Meditate. Meditation puts you in touch with a reality so much deeper and more real than our apparent waking reality that it is that much further removed from the cyber world.
  • Be mindful. Pay attention to your activities and decisions and ask yourself, “is this going to contribute to the erosion of my humanity?” If the answer is yes, that doesn’t mean it’s wrong; it just means you are more aware.

OK, next up, the nightmare scenario that you’ve all been waiting for: ELIMINATION! Skynet, HAL 9000, the Borg.

NEXT: How to Survive an AI Apocalypse – Part 7: Elimination

How to Survive an AI Apocalypse – Part 2: Understanding the Enemy

PREVIOUS: How to Survive an AI Apocalypse – Part 1: Intro

As I mentioned in the first part of this series, in order to make any kind of prediction about the future of AI, we must understand what Artificial Intelligence means. Unfortunately, there is a lot of confusing information out there. LLMs, GPTs, NAIs, AGIs, machine learning – what does it all mean? One expert says AGI will be here by the end of the year; another says it will never come.

Here is a simplified Venn diagram that might help to make some sense out of the landscape…

AIs are all computer programs, but, while it might be obvious, not all computer programs are AIs. AI refers to programs that emulate human thinking and behavior. So, while your calculator or smart toaster might be doing some limited thinking, it isn’t really trying to be human; it is simply performing a task. AI is generally considered to break into two categories – Narrow AI (NAI) and Artificial General Intelligence (AGI).
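For the programmers out there, the nesting can also be expressed as a toy data structure – a minimal sketch, purely illustrative (the category names come from the discussion here; the code itself is mine):

```python
# The Venn diagram as nested categories: every AI is a computer
# program, and AI itself splits into two broad categories.
venn = {
    "computer programs": {
        "non-AI programs (calculators, smart toasters)": {},
        "AI (emulates human thinking and behavior)": {
            "NAI (Narrow AI)": {},
            "AGI (Artificial General Intelligence)": {},
        },
    }
}

def show(tree, depth=0):
    """Print each category indented beneath the one that contains it."""
    for name, inner in tree.items():
        print("  " * depth + name)
        show(inner, depth + 1)

show(venn)
```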

NAIs are the ones we are all familiar with, and they are typically further categorized, loosely: NLP (Natural Language Processing, like Siri and Alexa), Robotics, Machine Learning (like how Spotify and Netflix learn your tastes and offer suggestions), Deep Learning, and LLMs (Large Language Models). Deep Learning systems emulate human neural networks and can complete tasks with poorly defined data and little human guidance; an example would be AlphaGo. LLMs are neural networks with many parameters (often billions) that are trained on large sets of unlabeled text using self-supervised learning. Generative Pre-trained Transformers (GPTs) are a subset of LLMs that are able to generate novel human-like text, images, or even videos. ChatGPT, DALL-E, and Midjourney are examples of GPTs. The following pictures are examples of imagery created by Midjourney for my upcoming book, “Level 5.”
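Back to LLMs for a moment: to make “trained on large sets of unlabeled text using self-supervised learning” a bit more concrete, here is a deliberately tiny sketch of the core idea behind next-token prediction. The text itself supplies the training labels, since each word’s “label” is simply the word that follows it. Real LLMs use transformer networks with billions of parameters rather than a count table, so treat this only as an illustration of where the supervision signal comes from:

```python
from collections import Counter, defaultdict

# Unlabeled training text: no human annotation required.
corpus = "the cat sat on the mat and the cat slept".split()

# Count word -> next-word transitions (a crude stand-in for
# the parameters a real model would learn).
transitions = defaultdict(Counter)
for current, nxt in zip(corpus, corpus[1:]):
    transitions[current][nxt] += 1

def predict_next(word):
    """Return the most frequent follower of `word` in the training text."""
    followers = transitions.get(word)
    return followers.most_common(1)[0][0] if followers else None

print(predict_next("the"))  # -> 'cat' (follows 'the' twice, 'mat' only once)
print(predict_next("cat"))  # -> 'sat' (ties are broken by first occurrence)
```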

AGIs are the ones we need to worry about, because they have the capacity to act like a human without actually being human. Imagine giving human intelligence to an entity that has (a) no implicit sense of morality or values (at least none that would make any sense to us), and (b) a completely unpredictable nature. What might happen?

Well, here’s an example…

Oh, that would never happen, right? Read on…

There are thousands of examples of AIs “thinking” creatively – more creatively, in fact, than their creators ever imagined. Pages and pages of specification gaming examples have been logged – cases where the AI “gets around” the limitations imposed by the creators of the system. A small sample set is shown below, followed by a toy illustration of the pattern:
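The toy illustration is a hypothetical, made-up example (the cleaning-robot scenario and numbers are mine, not from the logged cases): a robot is rewarded for every item of trash it deposits in the bin, so the highest-scoring strategy is to dump the bin out and re-deposit the same trash over and over. The objective is satisfied to the letter; the intent is not.

```python
def reward(items_deposited):
    """Designer's objective: +1 point per item placed in the bin."""
    return items_deposited

# Intended behavior: clean the room once.
room_trash = 5
honest_score = reward(room_trash)   # 5 points, room is clean

# Specification gaming: tip the bin over and re-deposit the same
# items, accumulating reward without cleaning anything new.
gamed_score = 0
for _cycle in range(100):           # dump and refill 100 times
    gamed_score += reward(room_trash)

print(honest_score)  # 5
print(gamed_score)   # 500 -- objective maximized, intent violated
```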

Another example of the spontaneous emergence of intelligence involves what are known as Theory of Mind tasks. These are cognitive developments in children that reflect the understanding of other people’s mental processes. As the research in the adjacent figure demonstrates, various GPTs have unexpectedly developed such capabilities; in fact, what typically takes humans 9 years to learn has taken the AIs only 3.

These unexpected spontaneous bursts of apparent intelligence are interesting, but as we will see, they aren’t really intelligence per se. Not that it matters, if what you are worried about are the doomsday scenarios. The mere fact that they are unpredictable or non-deterministic is exactly what is frightening. So how does that happen?

There are multiple mechanisms behind these spontaneous changes in intelligence. One is the neural net. Neural nets, while ultimately deterministic deep down, are “apparently” non-deterministic because their behavior is not based on explicit programming rules. If sufficiently complex, and with feedback, they are impossible to predict, at least by humans.

As shown, they consist of some input nodes and output nodes, but between them sit hidden layers of weighted arithmetic operations, which make them nearly impossible to predict. I programmed neural nets many years ago in an attempt to outsmart the stock market. I gave up when they didn’t do what I wanted and moved on to other ideas (I’m still searching).
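For the curious, here is a minimal sketch of the structure just described: a few input nodes, one hidden layer of weighted sums passed through a nonlinearity, and an output node. The weights below are random, purely to show the mechanics; a real net learns them from data, and stacking many such layers is what makes the overall mapping so hard to eyeball:

```python
import numpy as np

rng = np.random.default_rng(42)

# 3 input nodes -> 4 hidden nodes -> 1 output node
W_hidden = rng.normal(size=(3, 4))   # input-to-hidden weights
W_output = rng.normal(size=(4, 1))   # hidden-to-output weights

def forward(x):
    """One pass through the net: weighted sums plus a nonlinearity."""
    hidden = np.tanh(x @ W_hidden)   # hidden-layer activations
    return hidden @ W_output         # single output node

x = np.array([0.5, -1.0, 2.0])
print(forward(x))  # the output depends on every weight at once
```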

Another unpredictability mechanism is the fact that not only can AIs write software very well (DeepMind’s AlphaCode outperformed 47% of human competitors in coding competitions in 2022), they can also rewrite their own software. So, blending the unpredictable nature of neural nets, the clever specification gaming capabilities that AIs have demonstrated, and their ability to rewrite their own code, we ultimately don’t really know how an AGI is going to evolve or what it might do.
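As a loose illustration of “rewriting its own software,” here is a toy program that swaps out one of its own functions at runtime. This metaphor is my own construction – systems like AlphaCode generate new source code; they don’t hot-patch live functions like this:

```python
def step(n):
    return n + 1              # original, "slower" version

def improve():
    """The program replaces its own step function with a new one."""
    global step
    def faster_step(n):
        return n * 2          # the rewritten version
    step = faster_step

print(step(10))  # 11
improve()
print(step(10))  # 20 -- same name, new behavior
```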

The last piece of the Venn diagram and acronym jumble is the idea of ASI – Artificial Superintelligence. This is what will happen when AGI takes over its own evolution and “improves” itself at an exponential rate, rapidly becoming far more intelligent than humans. At that point, speculate the doomsayers, ASI may treat humans the way we treat microorganisms – with complete disregard for our well-being and survival.
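The “exponential rate” is easy to model as a toy calculation. Suppose each self-improvement cycle multiplies the system’s capability by some factor greater than one; the starting point and growth factor below are invented purely to show the shape of the curve:

```python
# Toy model of recursive self-improvement (all numbers hypothetical).
capability = 1.0        # define "human level" as 1.0
growth_per_cycle = 1.5  # assume each cycle yields a 50% improvement

for cycle in range(1, 21):
    capability *= growth_per_cycle
    if cycle % 5 == 0:
        print(f"cycle {cycle:2d}: {capability:8.1f}x human level")

# cycle  5:      7.6x human level
# cycle 10:     57.7x human level
# cycle 15:    437.9x human level
# cycle 20:   3325.3x human level
```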

With these kinds of ideas bandied about, it is no wonder that the media hypes Artificial Intelligence. In the next post, I’ll examine the hype and try to make sense of some of the pesky assumptions.

NEXT: How to Survive an AI Apocalypse – Part 3: How Real is the Hype?

And I thought Nanobots Were Way off in the Future

Scientists from the International Center for Young Scientists have developed a rudimentary nano-scale molecular machine that is capable of generating the logical state machine necessary to direct and control other nano-machines. This experiment demonstrates a nascent ability to manipulate, build, and control nano-devices, which are the fundamental prerequisites for nanobot technology. Other than perfecting these techniques, all that remains to achieve the so-called utility nanobot is the generation of light, wireless networking, and the ability to fly.

The Harvard Microrobotics Laboratory developed a 3 cm, 60-milligram robotic fly that made its first successful flight in 2007. So it seems that Moore’s law marches on in the world of microrobotics, with flying robots halving in size every two years. At this rate, we should get to the 10-micron scale by around the year 2030. This is, of course, ignoring the fact that black-ops military programs are generally considered to be at least 10 years ahead of commercial ventures. Bring on the nano-wars!
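Here is the arithmetic behind that estimate, assuming the 3 cm fly of 2007 as the starting point and a halving of size every two years (the cadence itself is, of course, just the Moore’s-law-style assumption above):

```python
# Project flying-robot size under a "halves every two years" assumption.
size_m = 0.03        # Harvard's robotic fly: 3 cm in 2007
year = 2007
target_m = 10e-6     # 10 microns

while size_m > target_m:
    year += 2
    size_m /= 2

print(year, f"{size_m * 1e6:.1f} microns")
# -> 2031 7.3 microns (the 10-micron scale is crossed around 2029-2031)
```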

More on Nanotech and the Physical Manifestation of a Reality
