Is LIDA, the Software Bot, Really Conscious?

Researchers from the Cognitive Computing Research Group (CCRG) at the University of Memphis are developing a software bot known as LIDA (Learning Intelligent Distribution Agent), which they believe exhibits cognition or conscious processes.  That belief rests on the idea that LIDA’s software architecture mirrors what some believe to be the process of consciousness, known as Global Workspace Theory (GWT).  For example, LIDA follows a repetitive looping process: take in sensory input, write it to memory, kick off a process that scans this data store for recognizable events or artifacts, and, if something is recognized, broadcast it to the global workspace of the system, much as the GWT model describes.  Timings are even tuned to more or less match human reaction times and processing delays.
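The looping process described above can be sketched in a few lines of code. To be clear, this is a toy illustration, not the CCRG's actual LIDA implementation; every class and function name here is invented.

```python
import time

# Toy sketch of a GWT-style cognitive cycle.  Every name here is invented
# for illustration -- this is not the CCRG's actual LIDA code.

class GlobalWorkspace:
    """Holds the 'conscious broadcast' that all registered modules receive."""
    def __init__(self):
        self.listeners = []

    def broadcast(self, percept):
        for listener in self.listeners:
            listener(percept)

def cognitive_cycle(sense, memory, known_patterns, workspace):
    percept = sense()                       # 1. take in sensory input
    memory.append(percept)                  # 2. write it to memory
    matches = [p for p in memory[-10:]      # 3. scan the recent store
               if p in known_patterns]
    if matches:
        workspace.broadcast(matches[-1])    # 4. broadcast a recognized event
    time.sleep(0.1)                         # crude ~100 ms human-scale timing

# Usage: run three cycles on canned "sensor" data and watch what gets
# broadcast to the workspace.
workspace = GlobalWorkspace()
broadcasts = []
workspace.listeners.append(broadcasts.append)
inputs = iter(["noise", "face", "noise"])
memory = []
for _ in range(3):
    cognitive_cycle(lambda: next(inputs), memory, {"face"}, workspace)
print(broadcasts)  # -> ['face', 'face']: the recognized percept lingers
                   # in the recent store and is re-broadcast
```

Which is rather the point of the post: thirty lines of Python can "follow the GWT model" too.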

I’m sorry guys, but just because you have designed a system to model the latest theory of how sensory processing works in the brain does not automatically make it conscious.  I could write an Excel macro with forced delays and process flows that resemble GWT.  Would that make my spreadsheet conscious?  I don’t THINK so.  Years ago I wrote a trading program that utilized the brain model du jour, known as neural networks.  Too bad it didn’t learn how to trade successfully, or I would be golfing tomorrow instead of going to work.  The fact is, it was entirely deterministic, as is LIDA, and there is no more reason to suspect that it was conscious than an assembly line at an automobile factory.

Then again, the standard scientific view (at least that held by most neuroscientists and biologists) is that our brain processing is also deterministic, meaning that, given the exact same set of circumstances on two different occasions (same state of memories in the brain, same set of external stimuli), the resulting thought process would also be exactly the same.  As such, so they would say, consciousness is nothing more than an artifact of the complexity of our brain.  An artifact?  I’m an ARTIFACT?

Following this reasoning to its logical conclusion, one would have to grant that every living thing, including bacteria, has consciousness. In that view of the world, it simply doesn’t make sense to assert that there might be some threshold of nervous system complexity, above which an entity is conscious and below which it is not.  It is just a matter of degree, and you can only argue about aspects of consciousness in a purely probabilistic sense; e.g. “most cats probably do not ponder their own existence.”  Taking this a step further, if consciousness is simply a by-product of neural complexity, then a computer equivalent to our brains in complexity must also be conscious.  Indeed, this is the position of many technologists who ponder artificial intelligence, and of futurists such as Ray Kurzweil.  And if that is the case, then by logical extension the simplest of electronic circuits is also conscious, in proportion to its complexity, just as a bacterium would be conscious in some small proportion to a human.  So even an electronic circuit known as a flip-flop (or bi-stable multivibrator), which consists of a few transistors and stores a single bit of information, is conscious.  I wonder what it feels like to be a flip-flop?
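For the curious, here is the entirety of a flip-flop's inner life, modeled as two cross-coupled NOR gates (a standard SR latch; the simulation below is a sketch of the logic, not of any particular chip):

```python
# Toy model of an SR (set-reset) latch built from two cross-coupled
# NOR gates -- the "few transistors storing a single bit" from above.

def nor(a, b):
    return int(not (a or b))

def sr_latch(s, r, q, q_bar):
    """Settle the cross-coupled NOR pair until the outputs stabilize."""
    for _ in range(4):  # a few iterations are enough to converge
        q, q_bar = nor(r, q_bar), nor(s, q)
    return q, q_bar

q, q_bar = 0, 1                        # start in the "0" state
q, q_bar = sr_latch(1, 0, q, q_bar)    # pulse Set
assert (q, q_bar) == (1, 0)            # the bit is now stored...
q, q_bar = sr_latch(0, 0, q, q_bar)    # ...and held with S = R = 0
assert (q, q_bar) == (1, 0)
q, q_bar = sr_latch(0, 1, q, q_bar)    # pulse Reset
print(q, q_bar)  # -> 0 1
```

If that loop is conscious in any proportion at all, the word has lost its meaning.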

Evidence abounds that there is more to consciousness than a complex system.  For one particular and very well researched data point, check out Pim van Lommel’s book “Consciousness Beyond Life.”  Or my book “The Universe – Solved!”

My guess is that consciousness consists of the combination of a soul and a processing component, like a brain, that allows that soul to experience the world.  This view is very consistent with that of many philosophers, mystics, and shamans throughout history and around the world (a confluence of consistent yet independent thought that is in itself very striking).  If true, a soul may someday decide to occupy a machine of sufficient complexity and design to experience what it is like to be the “soul in a machine”.  When that happens, we can truly say that the bot is conscious.  But it does not make sense to consider consciousness a purely deterministic emergent property.


WikiLeaks, Denial of Service Attacks, and Nanobot Clouds

The recent firestorm surrounding WikiLeaks reminds me of one of Neal Stephenson’s visions of the future, “The Diamond Age,” written back in 1995.  The web was only in its infancy, but Stephenson had already envisioned massive clouds of networked nanobots, some under the control of the government, some under the control of other entities.  Such nanobot swarms, also known as Utility Fogs, could be made to do pretty much anything: form a sphere of protection, gather information, inspect people and report back to a central server, or be commanded to attack each other.  One swarm under the control of one organization may be at war with another swarm under the control of another.  That is our future.  Nanoterrorism.

A distributed denial of service (DDoS) attack is a network attack on a particular server or internet node.  It is often carried out by having thousands of computers saturate the target machine with packet requests, making it impossible for the machine to respond to normal HTTP requests, effectively bringing it to its knees, inaccessible on the internet.  The attacks are often coordinated by a central source that takes advantage of networks of already compromised computers (aka zombie computers, usually unbeknownst to their owners) created via malware infections.  On command, these botnets initiate their attack with clever techniques called Smurf attacks, Ping floods, SYN floods, and other scary-sounding events.  An entire underground industry has built up around botnets, some of which can number in the millions of machines.  Botnets can be leased by anyone who knows how to access them and has a few hundred dollars.  As a result, an indignant group can launch an attack on, say, the WikiLeaks site.  And, in response, a WikiLeaks support group can launch a counterattack on its enemies, like MasterCard, Visa, and PayPal, for their plans to terminate service for WikiLeaks.  That is our present.  Cyberterrorism.
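The arithmetic of why saturation works is simple enough to simulate harmlessly. Here is a back-of-envelope model (all the numbers are invented): a server answers a fixed number of requests per tick, and everything beyond that is dropped, legitimate or not.

```python
import random

# Back-of-envelope model of why a flood works: a server that can handle
# `capacity` requests per tick never gets to anything beyond that.
# All numbers here are invented for illustration.

def legit_served(legit, bots, capacity):
    """How many legitimate requests get through in one tick."""
    arrivals = ["legit"] * legit + ["bot"] * bots
    random.shuffle(arrivals)            # requests arrive interleaved
    return arrivals[:capacity].count("legit")

random.seed(42)
capacity = 1000                         # requests the server answers per tick
print("no attack :", legit_served(200, 0, capacity), "of 200 served")
print("botnet on :", legit_served(200, 50_000, capacity), "of 200 served")
```

With a 50,000-strong flood, the expected number of legitimate requests served drops from all 200 to roughly capacity × 200 / 50,200, i.e. a handful. No cleverness required, just volume.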

Doesn’t it sound a lot like the nanoterrorism envisioned by Stephenson?  Except it is still grounded in the hardware.  As I see it, the equation of the future is:

Nanoterrorism = Cyberterrorism + Microrobotics + Moore’s Law + 20 years.

Can’t wait!


Jim and Craig Venter Argue over Who is more Synthetic: Synthia or Us?

So Craig Venter created synthetic life.  How cool is that?  I mean, really, this has been sort of a biologist’s holy grail for as long as I can remember.  Of course, Dr. Venter’s detractors are quick to point out that Synthia, the name given to this synthetic organism, was not really built from scratch, but rather assembled from sub-living components and injected into a cell where it could replicate.  Either way, it is a huge step in the direction of man-made life forms.  If I were to meet Dr. Venter, the conversation might go something like this:

Jim: So, Dr. Venter, help me understand how man-made your little creation really is.  I’ve read some articles that state that while your achievement is most impressive, the cytoplasm that the genome was transplanted to was not man made.

Craig: True dat, Jim.  But we all need an environment to live in, and a cell is no different.  The organism was certainly man made, even if its environment already existed.

Jim: But wait a minute.  Aren’t we all man-made?  Wasn’t that the message in those sex education classes I took in high school?

Craig: No, the difference is that this is effectively a new species, created synthetically.

Jim: So, how different is that from a clone?  Are they also created synthetically?

Craig: Sort of, but a clone isn’t a new species.

Jim: How about genetically modified organisms then?  New species created synthetically?

Craig: Yes, but they were a modification made to an existing living organism, not a synthetically created one.

Jim: What about that robot that cleans my floor?  Isn’t that a synthetically created organism?

Craig: Well, maybe, in some sense, but can it replicate itself?

Jim: Ah, but that is just a matter of programming.  Factory robots can build cars, why couldn’t they be programmed to build other factory robots?

Craig: That wouldn’t be biological replication, like cell division.

Jim: You mean, just because the robots are made of silicon instead of carbon?  Seems kind of arbitrary to me.

Craig: OK, you’re kind of getting on my nerves, robot-boy.  The point is that this is the first synthetically created biological organism.

Jim: Um, that’s really cool and all, but we can build all kinds of junk with nanotech, including synthetic meat, and little self-replicating machines.

Craig: Neither of which are alive.

Jim: Define alive.

Craig: Well, generally life is anything that exhibits growth, metabolism, motion, reproduction, and homeostasis.

Jim: So, a drone bee isn’t alive because it can’t reproduce?

Craig: Of course, there are exceptions.

Jim: What about fire, crystals, or the earth itself?  All of those exhibit your life-defining properties.  Are they alive?

Craig: Dude, we’re getting way off topic here.  Let’s get back to synthetic organisms.

Jim: OK, let’s take a different tack.  Physicist Paul Davies said that Google is smarter than any human on the planet.  Is Google alive?  What about computer networks that can reconfigure themselves intelligently?

Craig: Those items aren’t really alive because they have to be programmed.

Jim: Yeah, and what’s that little code in Synthia’s DNA?

Craig: Uhhh…

Jim: And how do you know that you aren’t synthetic?  Is it at all possible that your world and all of your perceptions could be completely under programmed control?

Craig: I suppose it could be possible.  But I highly doubt it.

Jim: Doubt based on what? All of your preconceived notions about reality?

Craig: OK, let’s say we are under programmed control.  So what?

Jim: Well, that implies a creator.  Which in turn implies that our bodies are a creation.  Which makes us just as synthetic as Synthia.  The only difference is that you created Synthia, while we might have been created by some highly advanced geek in another reality.

Craig: Been watching a few Wachowski Brothers movies, Jim?

Jim: Guilty as charged, Craig.


Noise in Gravity Wave Detector may be first experimental evidence of a Programmed Reality

GEO600 is a large gravitational wave detector located near Hannover, Germany.  Designed to be extremely sensitive to fluctuations in gravity, its purpose is to detect gravitational waves from distant cosmic events.  Recently, however, it has been plagued by inexplicable noise or graininess in its measurement results (see article in New Scientist).  Craig Hogan, director of Fermilab’s Center for Particle Astrophysics, thinks that the instrument has reached the limits of spacetime resolution and that this might be proof that we live in a hologram.  Drawing on Leonard Susskind and Gerard ‘t Hooft’s theory that our 3D reality may be a projection of processes encoded on the 2D surface of the boundary of the universe, he points out that, as in a common hologram, the graininess of our projection may occur at much larger scales than the Planck length (10⁻³⁵ meters), such as 10⁻¹⁶ meters.
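One hedged, order-of-magnitude way to see where a graininess so much coarser than the Planck length could come from (my own back-of-envelope reconstruction of the holographic argument, not Hogan's actual calculation): if the information in a 3D volume, roughly (L/d)³ voxels of size d, cannot exceed the Planck-sized pixels on its 2D boundary, roughly (L/l_p)², then d must be about (l_p² · L)^(1/3).

```python
# Hedged back-of-envelope: equate the voxel count of a volume, (L/d)^3,
# with the Planck-pixel count of its boundary, (L/l_p)^2, and solve for
# the 3D voxel size d.  Numbers are order-of-magnitude only.

l_p = 1.6e-35   # Planck length, meters
L = 8.8e26      # rough diameter of the observable universe, meters

d = (l_p**2 * L) ** (1 / 3)
print(f"projected voxel size: {d:.1e} m")  # ~6e-15 m, vastly coarser
                                           # than the Planck length itself
```

That lands within a couple of orders of magnitude of the 10⁻¹⁶ meters quoted above, which is close enough for an argument about why the projection's pixels would be detectable at all.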

Crazy?  Is it any stranger than living in 10 spatial dimensions, living in a space of parallel realities, invisible dark matter all around us, reality that doesn’t exist unless observed, or any of a number of other mind-bending theories that most physicists believe?  In fact, as fans of this website are well aware, such experimental results are no surprise.  Just take a look at the limits of resolution in my Powers of 10 simulation in the Programmed Reality level: Powers of 10.  I arbitrarily picked 10⁻²¹ meters, but it could really be any scale where it happens.

If our universe is programmed, however, it is probably done in such a way as to be unobservable for the most part.  Tantalizing clues like GEO600 noise give us all something to speculate about.  But don’t be surprised if the effect goes away when the programmers apply a patch to improve the reality resolution for another few years.

Thanks to my photogenic cat, Scully, for providing an example of grainy reality…

The Singularity Cometh? Or not?

There is much talk these days about the coming Singularity.  We are about 37 years away, according to Ray Kurzweil.  For some, the prospect is exhilarating: enhanced mental capacity, the ability to experience fantasy simulations, immortality.  For others, the specter of the Singularity is frightening: AIs run amok, all Terminator-like.  Then there are those who question the entire idea.  A lively debate on our forum triggered this post as we contrasted the positions of transhumanists (aka cybernetic totalists) and singularity skeptics.

For example, Jaron Lanier’s “One Half of a Manifesto” published in Wired and edge.org, suggests that our inability to develop advances in software will, at least for now, prevent the Singularity from happening according to the Moore’s Law pace.  One great quote from his demi-manifesto: “Just as some newborn race of superintelligent robots are about to consume all humanity, our dear old species will likely be saved by a Windows crash. The poor robots will linger pathetically, begging us to reboot them, even though they’ll know it would do no good.”  Kurzweil countered with a couple specific examples of successful software advances, such as speech recognition (which is probably due more to algorithm development than software techniques).

I must admit, I am also disheartened by the slow pace of software advances.  Kurzweil is not the only guy on the planet to have spent his career living and breathing software and complex computational systems.  I’ve written my share of gnarly assembly code, neural nets, and trading systems.  But it seems to me that it takes almost as long to open a Word document, boot up, or render a 3D object on today’s blazingly fast PCs as it did 20 years ago on a machine running at less than 1% of today’s clock rate.  Kurzweil claims that we have simply forgotten: “Jaron has forgotten just how unresponsive, unwieldy, and limited they were.”

So, I wondered, who is right?  Are there objective tests out there?  I found an interesting article in PC World that compared the boot-up time of a 1981 PC to that of a 2001 PC.  Interestingly, the 2001 machine was over 3 times slower (51 seconds to boot) than its 20-year predecessor (16 seconds).  My 2007 Thinkpad: over 50 seconds.  Yes, I know that Vista is much more sophisticated than MS-DOS and therefore consumes much more disk and memory and takes that much more time to load.  But really, are those 3D spinning doodads really helping me work better?

Then I found a benchmark comparison of the performance of 6 different Word versions over the years.  Summing 5 typical operations, the fastest version was Word 95 at 3 seconds.  Word 2007 clocked in at 12 seconds (in this test, all versions ran on the same machine).
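Anyone can rerun this kind of comparison themselves. Here is a minimal sketch of the method using Python's timeit; the workload below is a made-up stand-in for "5 typical operations," not the actual PC World or Word test suite.

```python
import timeit

# Minimal sketch of how a software-version benchmark is run.  The workload
# below is a stand-in for "typical operations", not the actual test suite.

def typical_operation():
    # pretend this is "open document", "spell check", and so on
    doc = "word " * 10_000
    return doc.upper().count("WORD")

# Best-of-5 wall time for 100 repetitions -- taking the minimum of several
# runs is the usual way to reduce scheduling noise when comparing versions.
best = min(timeit.repeat(typical_operation, number=100, repeat=5))
print(f"100 runs: {best:.3f} s")
```

Run the same harness against the same workload on two versions of a program and the bloat, if any, shows up as a number rather than an impression.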

In summary, software has become bloated.  Developers don’t think about performance as much as they used to because memory and CPU cycles are cheap.  Instead, the trend in software development is layers of abstraction and frameworks on top of frameworks.  Developers have become increasingly specialized (“I don’t do ‘Tiles’, I only do ‘Struts’”) and very few get the big picture.

What does this have to do with the Singularity?  Simply this: with some notable exceptions, software development has not even come close to following Moore’s Law in terms of performance or reliability.  Yet the Singularity predictions depend on it.  So don’t sell your humanity stock anytime soon.

 


Would it really be that bad to find life in our Solar System?

Nick Bostrom wrote an interesting article for the MIT Technology Review about how he hopes that the search for life on Mars finds nothing. In it, he reasons that inasmuch as we haven’t come across any signs of intelligent life in the universe yet, advanced life must be rare. But since conditions for life aren’t particularly stringent, there must be a “great filter” that prevents life from evolving beyond a certain point. If we are indeed alone, that probably means that we have made it through the filter. But if life is found nearby, like in our solar system, then the filter is probably ahead of us, or at least ahead of the evolutionary stage of the life that we find. And the more advanced the life form that we find, the more likely that we have yet to hit the filter, which implies ultimate doom for us.
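The inference can be caricatured in a few lines of Bayes' rule. Suppose the filter is either behind us or ahead of us, with even prior odds, and that finding independent simple life on Mars is far more likely if the hard step is ahead (because then simple life should be common). The priors and likelihoods below are invented purely to show the direction of the update; they are not Bostrom's numbers.

```python
# Toy Bayes-rule caricature of the great-filter update.  The priors and
# likelihoods are invented to show the direction of the shift, nothing more.

prior = {"filter_behind_us": 0.5, "filter_ahead_of_us": 0.5}

# P(we find independent simple life on Mars | hypothesis): if the hard
# step is behind us, simple life should be rare; if it's ahead, common.
likelihood = {"filter_behind_us": 0.01, "filter_ahead_of_us": 0.5}

evidence = sum(prior[h] * likelihood[h] for h in prior)
posterior = {h: prior[h] * likelihood[h] / evidence for h in prior}

print(posterior)  # nearly all the weight shifts onto "the filter is ahead"
```

Hence Bostrom's hope for a dead Mars: the discovery itself would be the bad news.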

But I wonder about some of the assumptions in this argument. He argues that intelligent ETs must not exist because they most certainly should have colonized the galaxy via von Neumann probes but apparently have not done so because we do not observe them. It seems to me, however, that it is certainly plausible that a sufficiently advanced civilization can be effectively cloaked from a far less advanced one. Mastery of some of those other 6 or 7 spatial dimensions that string theory predicts comes to mind. Or invisibility via some form of electromagnetic cloaking. And those are only early 21st century ideas. Imagine the possibilities of being invisible in a couple hundred years.

Then there is the programmed reality model. If the programmers placed multiple species in the galaxy for “players” to inhabit, it would certainly not be hard to keep some from interacting with each other, e.g. until the lesser civilization proves its ability to play nicely. Think about how some virtual reality games allow the players to walk through walls. It is a simple matter to maintain multiple domains of existence in a single programmed construct!  More support for the programmed reality model?…

(what do you think about the possibilities of life elsewhere? take our polls!)
