Will Evolving Minds Delay The AI Apocalypse? – Part II

The idea of an AI-driven Apocalypse is based on AI outpacing humanity in intelligence. The point at which that might happen depends on how fast AI evolves and how fast (or slow) humanity evolves.

In Part I of this article, I demonstrated how, given current trends in the advancement of Artificial Intelligence, any AI Apocalypse, Singularity, or what have you, is probably much further out than the transhumanists would have you believe.

In this part, we will examine the other half of the argument by considering the nature of the human mind and how it evolves. To do so, it is very instructive to consider the mind as a complex system, and also the systemic nature of the environments that minds and AIs engage with, and against which their general intelligence is therefore measured.

David Snowden has developed a framework for categorizing systems called Cynefin. The four types of systems are:

  1. Simple – e.g. a bicycle. A Simple system is a deterministic system characterized by the fact that almost anyone can make decisions and solve problems regarding it – all it takes is something called inferential intuition, which we all have. If the bicycle seat is loose, everyone knows that to fix it, you must look under the seat and find the hardware that needs tightening.
  2. Complicated – e.g. a car. Complicated systems are also deterministic systems, but unlike Simple systems, solutions to problems in this domain are not obvious and typically require analysis and/or experts to figure out what is wrong. That’s why you take your car to the mechanic and why we need software engineers to fix defects.
  3. Complex – Complex systems, while perhaps deterministic from a philosophical point of view, are not deterministic in any practical sense. No matter how much analysis you apply and no matter how experienced the expert is, they will not be able to completely analyze and solve a problem in a complex system. That is because such systems are subject to an incredibly complex set of interactions, inputs, dependencies, and feedback paths that all change continuously. So even if you could apply sufficient resources toward analyzing the entire system, by the time you got your result, your problem state would be obsolete. Examples of complex systems include ecosystems, traffic patterns, the stock market, and basically every single human interaction. Complex systems are best addressed through holistic intuition, which is something that humans possess when they are very experienced in the applicable domain. Problems in complex systems are best addressed by a method called Probe-Sense-Respond, which consists of probing (doing an experiment designed intuitively), sensing (observing the results of that experiment), and responding (acting on those results by moving the system in a positive direction).
  4. Chaotic – Chaotic systems are rarely occurring situations that are unpredictable because they are novel and therefore don’t follow any known patterns. An example would be the situation in New York City after 9/11. Responding to chaotic systems requires yet another method. Typically, just taking some definitive form of action may be enough to move the system from Chaotic to Complex. The choice of action is a deeply intuitive decision that may be based on an incredibly deep, rich, and nuanced set of knowledge and experiences.

Complicated systems are ideal for early AI. Problems like the ones analyzed in Stanford’s AI Index, such as object detection, natural language parsing, language translation, speech recognition, theorem proving, and SAT solving, are all Complicated systems. AI technology at the moment is focused mostly on such problems, not on things in the Complex domain, which are instead best addressed by the human brain. However, as processing speeds increase and learning algorithms evolve, AI will start addressing issues in the Complex domain. Initially, a human mind will be needed to program or guide the AI systems toward a good sense-and-respond model. Eventually perhaps, armed with vague instructions like “try intuitive experiments from a large set of creative ideas that may address the issue,” “figure out how to identify the metrics that indicate a positive result from the experiment,” “measure those metrics,” and “choose a course of action that furthers the positive direction of the quality of the system,” an AI may succeed at addressing problems in the Complex domain.
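
To make that loop concrete, here is a minimal sketch of Probe-Sense-Respond in Python. To be clear, this is my own illustration, not anything from Snowden or from current AI systems: `propose_probes`, `health`, and the toy usage are invented stand-ins for the intuitive, observational, and evaluative steps described above.

```python
import random

def probe_sense_respond(state, propose_probes, health, steps=50):
    """Sketch of the Probe-Sense-Respond loop for a Complex system.

    propose_probes(state) -> list of candidate experiments (functions on state)
    health(state)         -> scalar measure of system quality; higher is better
    """
    for _ in range(steps):
        probe = random.choice(propose_probes(state))  # Probe: pick an intuitive experiment
        trial = probe(state)                          # ...and run it on a trial state
        if health(trial) > health(state):             # Sense: observe the metrics
            state = trial                             # Respond: keep changes that help
    return state

# Toy usage: nudge a set of parameters toward a target without any model of the system.
if __name__ == "__main__":
    target = [3.0, -1.0, 2.5]

    def propose_probes(state):
        def nudge(i):
            return lambda s: s[:i] + [s[i] + random.uniform(-0.5, 0.5)] + s[i + 1:]
        return [nudge(i) for i in range(len(state))]

    def health(state):
        return -sum((a - b) ** 2 for a, b in zip(state, target))

    print(probe_sense_respond([0.0, 0.0, 0.0], propose_probes, health))
```

In effect this is a random hill-climb, which is admittedly a cartoon of holistic intuition; the gap between the two is exactly what the paragraph above says AI still has to close.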

The human mind of course already has a huge head start. We are incredibly adept at seeing vague patterns, sensing the non-obvious, seeing the big picture, and drawing from collective experiences to select experiments to address complex problems.

Back to our original question: as we lead AI toward developing the skills and intuition to replicate such capabilities, are we really unable to evolve our own thinking as well?

In the materialist paradigm, the brain is the limit for an evolving mind. This is why we think AI can out-evolve us: because brain capacity is fixed. However, in “Digital Consciousness” I have presented a tremendous set of evidence that this is incorrect. In actuality, consciousness, and therefore the mind, is not emergent from the brain. Instead, it exists at a deeper level of reality, as shown in the figure below.

It interacts with a separate piece of All That There Is (ATTI) that I call the Reality Learning Lab (RLL), commonly known as “the reality we live in,” but more accurately described as our “apparent physical reality” – “apparent” because it is actually Virtual.

As discussed in my blog on creating souls, ATTI has subdivided itself into components of individuated consciousness, each of which has the purpose of evolving. How it is constructed, and how the boundaries that make it individuated are formed, is beyond our knowledge (at the moment).

So what then is our mind?

Simply put, it is organized information. As Tom Campbell eloquently expressed it, “The digital world, which subsumes the virtual physical world, consists only of organization – nothing else. Reality is organized bits.”

As such, what prevents it from evolving in the deeper reality of ATTI just as fast as we can evolve an AI here in the virtual reality of RLL?

Answer – NOTHING!

Don’t get hung up on the fixed complexity of the brain. All our brain needs to do is emulate the processing mechanism that appears to handle sensory input and mental activity. By analogy, consider playing a virtual reality game. In this game we have an avatar, and we need to interact with other players. Imagine that a key aspect of the game is the ability to throw a spear at a monster or to shoot an enemy. In our (apparent) physical reality, we would need an arm and a hand to carry out that activity. But in the game, it is technically not required. Our avatar could be armless, and when we have the need to throw something, we simply press a key sequence on the keyboard. A spear magically appears and gets hurled in the direction of the monster. Just as we don’t need an arm to project a spear toward an enemy in the VR game, we don’t need a brain to be aware in our waking reality (because our consciousness is separate from RLL).

On the other hand, having the arm on the avatar adds a great deal to the experience. For one thing, it adds complexity and meaning to the game. Pressing a key sequence does not have a lot of variability, and it certainly doesn’t provide the player with much control. The attack could be perfectly precise, as in the case where you simply point at the target and hit the key sequence; this is boring, requires little skill, and ultimately provides no opportunity to develop one. Alternatively, the precision of the attack could depend on a random number generator, which adds complexity and variability to the game but still doesn’t provide any opportunity to improve. Or the precision could depend on some other nuance of the game, like secondary key sequences or the timing of key sequences, which, although providing the opportunity to develop a skill, has nothing to do with a consistent approach to throwing something. So it is much better for your avatar to have an arm. In addition, an arm simply models the reality that you know, and people are comfortable with things that are familiar.
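
As an aside, the three targeting models just described are easy to caricature in code. This is a toy sketch of my own; the function names and the 50% hit chance are invented for illustration:

```python
import random

# Three ways a game engine might resolve a spear throw; only the third rewards practice.

def hit_point_and_click(on_target: bool) -> bool:
    return on_target                                    # deterministic: no skill to develop

def hit_with_rng(on_target: bool) -> bool:
    return on_target and random.random() < 0.5          # variable, but no way to improve

def hit_with_timing(press_time: float, window_center: float, tolerance: float = 0.05) -> bool:
    return abs(press_time - window_center) < tolerance  # a skill, but unrelated to throwing form
```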

So it is with our brains. In our virtual world, the digital template that is our brain is incapable of doing anything in the “simulation” that it isn’t designed to do. The digital simulation that is the RLL must follow the rules of RLL physics, much the way a “physics engine” provides the rules of physics for a computer game. And these rules extend to brain function. Imagine if, in the 21st century, we had no scientific explanation for how we process sensory input or make mental decisions because there was no brain in our bodies. Would that be a “reality” that we could believe in? So, in the level of reality that we call waking reality, we need a brain.

But that brain “template” doesn’t limit our mind’s ability to evolve any more than the lack of a brain or central nervous system prevents a collection of single-celled organisms called a slime mold from learning.

In fact, there is some good evidence for the idea that our minds are evolving as rapidly as technology. Spiral Dynamics is a model of the evolution of values and culture that can be applied to individuals, institutions, and all of humanity. The figure below gives a very high-level overview of the stages, or memes, described by the model.

Spiral Dynamics

Each of these stages represents a shift in values, culture, and thinking, as compared to the previous one. Given that it is the human mind that drives these changes, it is fair to say that the progression models the evolution of the human mind. As can be seen from the timeframes associated with the first appearance of each stage, this is an exponential progression. In fact, it is the same kind of progression that transhumanists use to argue for the exponential advancement of technology and AI. This exponential progression of mind would seem to defy the logic that our minds, if based on fixed neurological wiring, are incapable of exponential development.
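
To see why the progression counts as exponential, put rough dates on the stages. The figures below are rounded, commonly cited first-appearance estimates; they are my assumption for illustration, not numbers taken from the figure above:

```python
# Rough first-appearance estimates for the Spiral Dynamics stages, in years
# before present. These rounded dates are my assumption, for illustration only.
stages = [
    ("Beige (survival)",      100_000),
    ("Purple (tribal)",        50_000),
    ("Red (egocentric)",       10_000),
    ("Blue (order)",            5_000),
    ("Orange (achievement)",      300),
    ("Green (communitarian)",     150),
]
for (name, age), (next_name, next_age) in zip(stages, stages[1:]):
    print(f"{name:22} -> {next_name:22} after ~{age - next_age:,} years")
# The gaps shrink from tens of thousands of years to a few hundred:
# an accelerating progression, just like the technology curves.
```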

And so, higher level conscious thought and logic can easily evolve in the human mind in the truer reality, which may very well keep us ahead of the AI that we are creating in our little virtual reality. The trick is in letting go of our limiting assumptions that it cannot be done, and developing protocols for mental evolution.

So, maybe hold off on buying those front row tickets to the Singularity.

Will Evolving Minds Delay The AI Apocalypse? – Part I

Stephen Hawking once warned that “the development of full artificial intelligence could spell the end of the human race.” He went on to explain that AI will “take off on its own and redesign itself at an ever-increasing rate,” while “humans, who are limited by slow biological evolution, couldn’t compete and would be superseded.” He is certainly not alone in his thinking, as Elon Musk, for example, cautions that “With artificial intelligence we are summoning the demon.”

In fact, this is a common theme not only in Hollywood, but also among two prominent groups of philosophers and futurists. One point of view is that Artificial General Intelligence (AGI) will become superintelligent and beyond the control of humans, resulting in all sorts of extinction scenarios (think SkyNet or Grey Goo). The (slightly) more optimistic point of view, held by the transhumanists, is that humanity will merge with advanced AI and form superhumans. So, while biological dumb humanity may go the way of the dodo bird, the new form of human-machine hybrid will continue to advance and rule the universe. By the way, this is supposed to happen around 2045, according to Ray Kurzweil in his 2005 book “The Singularity is Near.”

There are actually plenty of logical and philosophical arguments against these ideas, but this blog is going to focus on something different – the nature of the human mind.

The standard theory is that humans cannot evolve their minds particularly quickly, due to the assumption that we are limited by the wiring in our brains. AI, on the other hand, has no such limitations and, via recursive self-improvement, will evolve at a runaway exponential rate, making it inevitable that it will overtake humans in intelligence at some point.

But does this even make sense? Let’s examine both assumptions.

The first assumption is that AI advancements will continue at an exponential pace. This is short-sighted, IMHO. Most exponential processes run into negative feedback effects that eventually dampen the acceleration. For example, exponential population growth occurs in bacterial colonies until the environment reaches its carrying capacity, and then growth levels off. We simply don’t know what the “carrying capacity” of an AI is; analogously, it has to run in some environment, which may run out of memory, power, or other resources at some point. Moore’s Law, the idea that transistor density doubles every two years, has been applied to many other technology advances, such as CPU speed and networking bit rates, and is the cornerstone of the logic behind the Singularity. However, difficulties in heat dissipation have now slowed the rate of advances in CPU speed, and Moore’s Law no longer applies there. Transistor density is also hitting its limit, as transistor junctions are now only a few atoms thick. Paul Allen argues, in his article “The Singularity Isn’t Near,” that the kinds of learning required to move AI ahead do not occur at exponential rates, but rather in an irregular and unpredictable manner. As things get more complex, progress tends to slow, an effect he calls the Complexity Brake.
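
The bacterial-colony example is just the logistic curve: growth that looks exponential until the carrying capacity starts to bite. Here is a quick sketch; the growth rate and capacity are arbitrary illustration values:

```python
# Logistic growth: dN/dt = r * N * (1 - N/K). Looks exponential while N << K,
# then levels off at the carrying capacity K. r and K are arbitrary values here.
r, K = 0.5, 1_000_000
N = 1.0
for t in range(61):
    if t % 10 == 0:
        print(f"t={t:2d}  N={N:12,.0f}")
    N += r * N * (1 - N / K)  # one Euler step per time unit
```

Swap “population” for “AI capability” and “nutrients” for memory, power, or training data, and the shape of the argument is the same; the open question is where K sits.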

Let’s look at one example. Deep Blue beat Garry Kasparov in a game in 1996, the first time a machine beat a world Chess champion. Google’s AlphaGo beat a grandmaster at Go for the first time in 2016. In those 20 years, there are ten 2-year doubling cycles of Moore’s Law, which would imply that, if AI were advancing exponentially, the “intelligence” needed to beat a Go master is about 1,000 times greater than the intelligence needed to beat a Chess master. Obviously this is ridiculous. While Go is theoretically a more complex game than Chess because it has many more possible moves, an argument could be made that the intellect and mastery required to become world champion at each game are roughly the same. So, while the advances in processing speed and algorithmic development between 1996 and 2016 were substantial (Deep Blue used a brute-force algorithm, while AlphaGo did more pattern recognition), they don’t really show much of an advance in “intelligence.”
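
For the record, here is the arithmetic that paragraph relies on:

```python
years = 2016 - 1996       # Deep Blue (1996) to AlphaGo (2016)
doublings = years / 2     # one Moore's Law doubling every 2 years
print(2 ** doublings)     # 1024.0, i.e. the "about 1,000 times" in the text
```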

It would also be insightful to examine some real estimates of AI trends. For some well-researched data, consider Stanford University’s AI Index, an “open, not-for-profit project to track activity and progress in AI.” In their 2017 report, they identify metrics for the progress made in several areas of Artificial Intelligence, such as object detection, natural language parsing, language translation, speech recognition, theorem proving, and SAT solving. For each category with at least 8 years of data, I normalized the AI performance, calculated the improvements over time, and averaged the results (note: I was careful to invert the data where needed – for example, for a pattern recognition algorithm to improve from 90% accuracy to 95% is not a 5% improvement; it is actually a 100% improvement in the ability to reject false positives). The chart below shows that AI is not advancing nearly as quickly as Moore’s Law.
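
The parenthetical about inverting the data is worth making concrete. This is my reconstruction of the normalization described, not code from the AI Index: improvements are measured on the error rate, not on the raw accuracy.

```python
def improvement(acc_old: float, acc_new: float) -> float:
    """Relative improvement measured on the error rate, not the raw accuracy.

    Going from 90% to 95% accuracy halves the errors (10% -> 5%),
    which is a 100% improvement in rejecting false positives, not 5%.
    """
    err_old, err_new = 1.0 - acc_old, 1.0 - acc_new
    return err_old / err_new - 1.0

print(f"{improvement(0.90, 0.95):.0%}")  # prints: 100%
```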

Figure 1 – Advancing Artificial Intelligence

In fact, the doubling period is about 6 years instead of 2, which suggests that we need 3 times as long to hit the Singularity as Kurzweil predicted. Since the 2045 projection for the Singularity was made in 2005, this would say that we wouldn’t really see it until 2125. That’s assuming that we keep pace with the current rate of growth of AI and don’t even hit Paul Allen’s Complexity Brake. So, chances are it is much further off than that. (As an aside, according to some futurists, Ray does not have a particularly great success rate for his predictions, even ones that are only 10 years out.)
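
The projection works like this, as a back-of-the-envelope sketch of the claim above, using the 6-year doubling period from Figure 1:

```python
made, predicted = 2005, 2045
doublings = (predicted - made) / 2         # 20 doublings at Kurzweil's 2-year pace
measured_period = 6                        # years per doubling, per Figure 1
print(made + doublings * measured_period)  # 2125.0
```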

But a lot can happen in 120 years. Unexpected, discontinuous jumps in technology can accelerate the process. Social, economic, and political factors can severely slow it down. Recall how, in just 10 years in the 1960s, we figured out how to land a man on the moon. Given the rate at which we were advancing our space technology, and applying Moore’s Law (which was in effect at that time), it would not have been unreasonable to expect a manned mission to Mars by 1980. In fact, Wernher von Braun, the leader of the American rocket team, predicted after the moon landing that we would be on Mars in the early 1980s. But in the wake of the Vietnam debacle, public support for additional investment in NASA waned, and the entire space program took a drastic turn. Such factors are probably even more impactful to the future of AI than the limitations of Moore’s Law.

The second assumption we need to examine is that the capacity of the human mind is limited by the complexity of the human brain, and is therefore relatively fixed. We will do that in Part II of this article.

Transhumanism and Immortality – 21st Century Snake Oil

Before I start my rant, I recognize that the Transhumanism movement is chock full of cool ideas, many of which make complete sense, even though they are perhaps obvious and inevitable.  The application of science and technology to the betterment of the human body ranges from current practices like prosthetics and Lasik to genetic modification and curing diseases through nanotech.  It is happening and there’s nothing anyone can do to stop it, so enjoy the ride as you uplift your biology to posthumanism.

However, part of the Transhumanist dogma is the idea that we can “live long enough to live forever.”  Live long enough, that is, to take advantage of future technologies like genetic manipulation, which could end the aging process, and YOU TOO can be immortal!

The problem with this mentality is that we are already immortal!  And there is a reason why our corporeal bodies die.  Simply put, we live our lives in this reality in order to evolve our consciousness, one life instance at a time.  If we didn’t die, our consciousness evolution would come to a grinding halt, as we spent the rest of eternity playing solitaire and standing in line at the buffet.  The “Universe” or “All That There Is” appears to evolve through our collective individuated consciousnesses.  Therefore, deciding to be physically immortal could be the end of the evolution of the Universe itself.  Underlying this unfortunate and misguided direction of Transhumanism is the belief (and, I can’t stress this enough, it is ONLY that – a belief) that it is lights out when we die.  Following the train of logic: if this were true, then consciousness emerges only from brain function, we have zero free will, the entire universe is a deterministic machine, and even investigative science doesn’t make sense any more.  So why even bother with Transhumanism if everything is predetermined?  It is logically inconsistent.  Materialism, the denial of the duality of mind and body, is a dogmatic Religion.  Its more vocal adherents (just head on over to the JREF Forum to find these knuckleheads) are as ignorant of the evidence and as blind to what true science is as the most bass-ackward fundamentalist religious zealots.

OK, to be fair, no one can be 100% certain of anything.  But there is FAR more evidence for consciousness-driven reality than for deterministic materialism.  This blog contains a lot of it, as does my first book, “The Universe-Solved!”, with much more in my upcoming book.

The spokesman for transhumanistic immortality is the self-professed “Transcendent Man”, Ray Kurzweil.  Really, Ray?  Did you seriously NOT fight the producers of this movie about you to change the title to something a little less self-aggrandizing, like “Modern Messiah”? #LRonHubbard

So I came across this article about the 77 supplements that Ray takes every day.  In the accompanying video clip, he claims that they are already reversing his aging process: “I’m 65. On many biological aging tests I come out a lot younger. I expect to be in my 40s 15 years from now.”

He has been on this regimen for years.  So let’s see how well those supplements are doing.  For an objective tool, let’s pick one of Ray’s own favorite technologies – Artificial Intelligence: the website how-old.net has an AI bot that automatically estimates your age from an uploaded photo.  I took a screen shot from the video clip (Ray is 65 in the clip) and uploaded it:

Ray Kurzweil Age

85!  Uh oh.  Hmmm, maybe the bot overestimates everyone’s age. I’m 10 years younger than Ray.  Let’s see how I fare, using a shot taken the same year at a ski resort – you know, one of those sports Ray says to avoid (Ray also claims that his kids will probably be immortal as long as they don’t take up extreme sports):

JimHowOld

I don’t know if it is the supplements that make Ray look 20 years older than he is, or the extreme skiing that makes me look 13 years younger than I am.  But I’m thinking maybe I’m onto something. [Note: I do realize that the choice of pictures could result in different outcomes.  I just thought it was ironic that the first two that I tried had these results]

Yes, I’m fairly confident that these supplements have some value in improving the function of various organs and benefiting a person’s overall health and well-being.  I’m also fairly certain that much of the traditional medical community would disagree and, as always, point to the lack of rigorous scientific studies supporting these supposed benefits.  On the whole, I suspect that supplements might extend one’s lifetime somewhat.  But I doubt that they will reverse aging.  The human body is far too complex to hope that adding a few organic compounds would be sufficient to modify and synchronize all of the complex cellular and systemic metabolic chemical reactions toward a reversal of the aging process.  Kurzweil is obviously a very bright man who has left a significant entrepreneurial legacy in the high-tech world.  However, I think he and the rest of the materialist transhumanists are in way over their heads on the topic of immortality and our place and purpose in the Universe.

My suggestion, Ray… skip the supplements, skip the self-promotion, skip the Google plugs, drive your goddamn car, and don’t be afraid to be active.  Stick with high tech, leave the evolution of the universe to its own devices, and enjoy the rest of this life.

The Singularity Cometh? Or not?

There is much talk these days about the coming Singularity.  We are about 37 years away, according to Ray Kurzweil.  For some, the prospect is exhilarating – enhanced mental capacity, the ability to experience fantasy simulations, immortality.  For others, the specter of the Singularity is frightening – AIs run amok, all Terminator-like.  Then there are those who question the entire idea.  A lively debate on our forum triggered this post, as we contrasted the positions of transhumanists (aka cybernetic totalists) and singularity skeptics.

For example, Jaron Lanier’s “One Half of a Manifesto,” published in Wired and edge.org, suggests that our inability to develop advances in software will, at least for now, prevent the Singularity from happening at the Moore’s Law pace.  One great quote from his demi-manifesto: “Just as some newborn race of superintelligent robots are about to consume all humanity, our dear old species will likely be saved by a Windows crash. The poor robots will linger pathetically, begging us to reboot them, even though they’ll know it would do no good.”  Kurzweil countered with a couple of specific examples of successful software advances, such as speech recognition (which is probably due more to algorithm development than software techniques).

I must admit, I am also disheartened by the slow pace of software advances.  Kurzweil is not the only guy on the planet to have spent his career living and breathing software and complex computational systems.  I’ve written my share of gnarly assembly code, neural nets, and trading systems.  But it seems to me that it takes almost as long to open a Word document, boot up, or render a 3D object on today’s blazingly fast PCs as it did 20 years ago on a machine running at less than 1% of today’s clock rate.  Kurzweil claims that we have simply forgotten: “Jaron has forgotten just how unresponsive, unwieldy, and limited they were.”

So, I wondered, who is right?  Are there objective tests out there?  I found an interesting article in PC World that compared the boot-up time of a 1981 PC to that of a 2001 PC.  Interestingly, the 2001 PC was over 3 times slower to boot (51 seconds) than its 20-year predecessor (16 seconds).  My 2007 Thinkpad – over 50 seconds.  Yes, I know that Vista is much more sophisticated than MS-DOS and therefore consumes much more disk and memory and takes that much more time to load.  But are those 3D spinning doodads really helping me work better?

Then I found a benchmark comparison of the performance of 6 different Word versions over the years.  Summing 5 typical operations, the fastest version was Word 95, at 3 seconds.  Word 2007 clocked in at 12 seconds (in this test, they all ran on the same machine).

In summary, software has become bloated.  Developers don’t think about performance as much as they used to, because memory and CPU speed are cheap.  Instead, the trend in software development is layers of abstraction and frameworks on top of frameworks.  Developers have become increasingly specialized (“I don’t do Tiles, I only do Struts”) and very few get the big picture.

What does this have to do with the Singularity?  Simply this – With some notable exceptions, software development has not even come close to following Moore’s Law in terms of performance or reliability.  Yet, the Singularity predictions depend on it.  So don’t sell your humanity stock anytime soon.

 
