The Power of Intuition in the Age of Uncertainty

Have you ever considered why it is that you decide some of the things that you do?

Like how to divide your time across the multiple projects that you have at work, when to discipline your kids, what to do on vacation, whom to marry, which college to attend, which car to buy?

The ridiculously slow way to figure these things out is to do an exhaustive analysis of all of the options, potential outcomes, and probabilities.  This can be extremely difficult when the parameters of the analysis are constantly changing, as is often the case.  Such analysis makes use of your conscious mind.

The other option is to use your subconscious mind and make a quick intuitive decision.

We who have been educated in the West, and especially those of us who received our training in engineering or the sciences, are conditioned to believe that “analysis” represents rigorous logical scientific thinking and “intuition” represents new age claptrap or occasional maternal wisdom.  Analysis good, intuition silly.

This view is quite inaccurate.

According to Gary Klein, ex-Marine, psychologist, and author of the book “The Power of Intuition: How to Use Your Gut Feelings to Make Better Decisions at Work,” 90% of the critical decisions that we make are made by intuition in any case.  Intuition can actually be a far more accurate and certainly faster way to make an important decision.  Here’s why…

Consider the mind to be composed of two parts – conscious and subconscious.  Admittedly, this division may be somewhat arbitrary, but it is also realistic.

The conscious mind is that part of the mind that deals with your current awareness (sensations, perceptions, memories, feelings, fantasies, etc.).  Research shows that the information processing rate of the conscious mind is actually very low.  Tor Nørretranders, author of “The User Illusion”, estimates the rate at only 16 bits per second.  Dr. Timothy Wilson from the University of Virginia estimates the conscious mind’s processing capacity to be a little higher, at 40 bits per second.  As for the number of items that the conscious mind can retain at one time, estimates vary from 4 to 7, with the lower number reported in a 2008 study by the National Academy of Sciences.

Contrast that with the subconscious mind, which is responsible for all sorts of things: autonomic functions, subliminal perceptions (all of that data streaming into your five sensory interfaces that you barely notice), implicit thought, implicit learning, automatic skills, association, implicit memory, and automatic processing.  Much of this can be combined into what we consider “intuition.”  Estimates for the information processing capacity and storage capacity of the subconscious mind vary widely, but they are all orders of magnitude larger than their conscious counterparts.  Dr. Bruce Lipton, in “The Biology of Belief,” notes that the processing rate is at least 20 Mbits/sec and maybe as high as 400 Gbits/sec.  Estimates for storage capacity run as high as 2.5 petabytes, or 2,500,000,000,000,000 bytes.
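To put those two sets of figures side by side, here is a quick back-of-the-envelope comparison.  The numbers are simply the estimates cited above (Nørretranders, Wilson, Lipton); they are illustrative, not measured constants.

```python
# Rough comparison of the processing-rate estimates quoted above.
CONSCIOUS_BPS_LOW = 16             # bits/sec (Nørretranders)
CONSCIOUS_BPS_HIGH = 40            # bits/sec (Wilson)
SUBCONSCIOUS_BPS_LOW = 20e6        # 20 Mbits/sec (Lipton, lower bound)
SUBCONSCIOUS_BPS_HIGH = 400e9      # 400 Gbits/sec (Lipton, upper bound)

# Even the most generous conscious estimate against the most
# conservative subconscious estimate gives half a million to one.
min_ratio = SUBCONSCIOUS_BPS_LOW / CONSCIOUS_BPS_HIGH
max_ratio = SUBCONSCIOUS_BPS_HIGH / CONSCIOUS_BPS_LOW

print(f"subconscious/conscious ratio: {min_ratio:,.0f}x to {max_ratio:,.0f}x")
# → at least 500,000x, possibly 25,000,000,000x
```

Whatever the true numbers turn out to be, the gap is not a matter of percentages but of many orders of magnitude.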

Isn’t it interesting that the rigorous analysis that we are so proud of is effectively done on a processing system that is excruciatingly slow and has little memory capacity?

Intuition, by contrast, is effectively done on a processing system that is blazingly fast and contains an unimaginable amount of data.  (As an aside, there is actually significant evidence that the subconscious mind connects with powerful data and processing elements outside of the brain, which only serves to underscore the message of this post.)

Kind of gives you a little more respect for intuition, doesn’t it?

In fact, that’s what intuition is – the same analysis that you might consider doing consciously, but performed instead with access to far more data: your entire wealth of experience, and the entire body of knowledge to which you have ever been exposed.

Sounds great, right?  It might be a skill that could be very useful to hone, if possible.

But the importance of intuition only grows exponentially as time goes on.  Here’s why…

Eddie Obeng is a Professor at the School of Entrepreneurship and Innovation, Henley Business School, in the UK.  He gave a TED talk which nicely captured the essence of our times, in terms of information overload.  The following chart from that talk demonstrates what we all know and feel is happening to us:

[Image: chart of information rate vs. time, from Eddie Obeng’s TED talk]

The horizontal axis is time, with “now” being all the way to the right.  The vertical axis depicts information rate.

The green curve represents the rate at which we humans can absorb information, aka “learn.”  It doesn’t change much over time, because our biology stays pretty much the same.

The red curve represents the rate at which information is coming at us.

Clearly, there was a time in the past when we had the luxury of taking the time necessary to absorb all of the information needed to understand the task or project at hand.  If you are over 40, you probably remember working in such an environment.  At some point, however, the incoming data rate exceeded our capacity to absorb it: TV news with two or three rolling tickers, tabloids, zillions of websites to scan, Facebook posts, tweets, texts, blogs, social networks, information repositories, big data, and so on.  For some of us, it happened a while ago; for others, more recently.  I’m sure there are still some folks living simpler lives on farms in rural areas who haven’t passed the threshold yet.  But they aren’t reading this blog.  As for the rest of us…

It is easy to see that as time goes on, the ratio of unprocessed incoming information to human learning capacity grows exponentially.  What this means is that there is increasingly more uncertainty in our world, because we just don’t have the ability to absorb the information needed to be “certain”, like we used to.  Some call it “The Age of Uncertainty.”  Some refer to the need to be “comfortable with ambiguity.”

This is a true paradigm shift.  A “megatrend.”   It demands entirely new ways of doing business, of structuring companies, of planning, of living.  In my “day job”, I help companies come to terms with these changes by implementing agile and lean processes, structures, and frameworks in order for them to be more adaptable to the constantly changing environment.  But this affects all of us, not just companies.  How do we cope?

One part of the answer is to embrace intuition.  We no longer have time to use the limited conscious-mind apparatus to do rigorous analysis to solve our problems.  As time goes on, that method becomes less and less effective.  But perhaps we can make better use of that powerful subconscious-mind apparatus by paying more attention to our intuition.  It seems to be what some of our most successful scientists, entrepreneurs, and financial wizards are doing:

George Soros said: “My [trading] decisions are really made using a combination of theory and instinct. If you like, you may call it intuition.”

Albert Einstein said: “The intellect has little to do on the road to discovery. There comes a leap in consciousness, call it intuition or what you will, and the solution comes to you, and you don’t know how or why.”  He also said: “The only real valuable thing is intuition.”

Steve Jobs said: “Don’t let the noise of others’ opinions drown out your own inner voice. And most important, have the courage to follow your heart and intuition.”

So how do the rest of us start paying more attention to our intuition?  Here are some ideas:

  • Have positive intent and an open mind
  • Go with the first thing that comes to mind
  • Notice impressions, connections, coincidences (a journal or buddy may help)
  • Put yourself in situations where you gain more experience about the desired subject(s)
  • 2-column exercises
  • Meditate / develop point-focus
  • Visualize success
  • Follow your path

I am doing much of this and finding it very valuable.

Things We Can’t Feel – The Mystery Deepens

In my last blog “Things We Can’t See”, we explored the many different ways that our eyes, brains, and/or technology can fool us into seeing something that isn’t there or not seeing something that is.

So apparently, our sense of sight is not necessarily the most reliable sense in terms of identifying what is and isn’t in our objective reality.  We would probably suspect that our sense of touch is fairly foolproof; that is, if an object is “there”, we can “feel” it, right?

Not so fast.

First of all, we have a lot of the same problems with the brain as we did with the sense of sight.  The brain processes all of that sensory data from our nerve endings.  How do we know what the brain really does with that information?  Research shows that sometimes your brain can think that you are touching something that you aren’t or vice versa.  People who have lost limbs still have sensations in their missing extremities.  Hypnosis has been shown to have a significant effect in terms of pain control, which seems to indicate the mind’s capacity to override one’s tactile senses.  And virtual reality experiments have demonstrated the ability for the mind to be fooled into feeling something that isn’t there.

In addition, technology can be made to wreak havoc with our sense of touch, although the most dramatic of such effects are decades in the future.  Let me explain…

Computer Scientist J. Storrs Hall developed the concept of a “Utility Fog.”  Imagine a “nanoscopic” object called a Foglet, which is an intelligent nanobot, capable of communicating with its peers and having arms that can hook together to form larger structures.  Trillions of these Foglets could conceivably fill a room and not be at all noticeable as long as they were in “invisible mode.”  In fact, not only might they be programmed to appear transparent to the sight, but they may be imperceptible to the touch.  This is not hard to imagine, if you allow that they could have sensors that detect your presence.  For example, if you punch your fist into a swarm of nanobots programmed to be imperceptible, they would sense your motion and move aside as you swung your fist through the air.  But at any point, they could conspire to form a structure – an impenetrable wall, for example.  And then your fist would be well aware of their existence.  In this way, technology may be able to have a dramatic effect on our complete ability to determine what is really “there.”


But even now, long before nanobot swarms are possible, the mystery really begins, as we have to dive deeply into what is meant by “feeling” something.

Feeling is the result of a part of our body coming in contact with another object.  That contact is “felt” by the interaction between the molecules of the body and the molecules of the object.

Even solid objects are mostly empty space.  If subatomic particles, such as neutrons, were solid little billiard balls of mass, then 99.999999999999% of normal matter would still be empty space.  That is, of course, unless those particles themselves are not really solid matter, in which case even more of space is truly empty – more on which in a bit.
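That string of nines is easy to check with a back-of-the-envelope calculation.  Using a hydrogen atom as the example, with rough textbook values for the atomic and nuclear radii:

```python
# Back-of-the-envelope check of the "mostly empty space" claim,
# using a hydrogen atom.  Radii are rough textbook values.
BOHR_RADIUS_M = 5.29e-11      # approximate radius of a hydrogen atom
PROTON_RADIUS_M = 0.84e-15    # approximate charge radius of the proton

# The fraction of the atom's volume occupied by the nucleus scales
# as the cube of the radius ratio.
occupied_fraction = (PROTON_RADIUS_M / BOHR_RADIUS_M) ** 3
empty_percent = 100 * (1 - occupied_fraction)

print(f"occupied fraction: {occupied_fraction:.1e}")
print(f"empty: {empty_percent:.12f}%")
```

The occupied fraction comes out around 10⁻¹⁴ to 10⁻¹⁵, which is where the dozen-or-so nines in the figure above come from.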

So why don’t solid objects like your fist slide right through other solid objects like bricks?  Because of the repulsive effect that the electromagnetic force from the electrons in the fist apply against the electromagnetic force from the electrons in the brick.

But what about that neutron?  What is it made of?  Is it solid?  Is it made of the same stuff as all other subatomic particles?

The leading theories of matter do not favor the idea that subatomic particles are like little billiard balls of differing masses.  For example, string theorists speculate that all particles are made of the same stuff; namely, vibrating bits of string.  Except that they each vibrate at different frequencies.  Problem is, string theory is purely theoretical and really falls more in the mathematical domain than the scientific domain, inasmuch as there is no supporting evidence for the theory.  If it does turn out to be true, even the neutron is mostly empty space because the string is supposedly one-dimensional, with a theoretical cross section of a Planck length.

Here’s where it gets really interesting…

Neutrinos are an extremely common yet extremely elusive particle of matter.  About 100 trillion neutrinos generated in the sun pass through our bodies every second.  Yet they barely interact at all with ordinary matter.  Neutrino capture experiments consist of configurations such as a huge underground tank containing 100,000 gallons of tetrachloroethylene buried nearly a mile below the surface of the earth.  100 billion neutrinos strike every square centimeter of the tank per second.  Yet any particular molecule of tetrachloroethylene is likely to interact with a neutrino only once every 10^36 seconds (billions of billions of times the age of the universe).
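That parenthetical is easy to verify from the figures in the text:

```python
# How does 10^36 seconds compare to the age of the universe?
SECONDS_PER_YEAR = 3.156e7                        # ~365.25 days
AGE_OF_UNIVERSE_S = 13.8e9 * SECONDS_PER_YEAR     # ~4.4e17 seconds
INTERACTION_PERIOD_S = 1e36                       # per the estimate above

ratio = INTERACTION_PERIOD_S / AGE_OF_UNIVERSE_S
print(f"{ratio:.1e} universe-ages between interactions")
# roughly 2e18 – billions of billions of universe lifetimes
```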

The argument usually given for the neutrino’s elusiveness is that they are nearly massless (and therefore not easily captured by a nucleus) and charge-less (and therefore not subject to the electromagnetic force).  Then again, photons are massless and charge-less and are easily captured, as anyone who has spent too much time in the sun can attest.  So there has to be some other reason that we can’t detect neutrinos.  Unfortunately, given the current understanding of particle physics, no good answer is forthcoming.

And then there is dark matter.  This concept is the current favorite explanation for some anomalies in the orbital speeds of galaxies.  The gravity of the visible matter can’t explain the anomalies, so dark matter is inferred.  If it really exists, it represents about 83% of the mass in the universe, but it doesn’t interact with any of the known forces except gravity.  This means that dark matter is all around us; we just can’t see it or feel it.

So it seems that modern physics allows for all sorts of types of matter that we can’t see or feel.  When you get down to it, the reason for this is that we don’t understand what matter is at all.  According to the standard model of physics, particles should have no mass, unless there is a special quantum field that pervades the universe and gives rise to mass upon interacting with those particles.  Unfortunately, for that to have any credibility, the signature particle, the Higgs boson, would have to exist.  Thus far, it seems to be eluding even the most powerful of particle colliders.  One alternative theory of matter has it being an emergent property of particle fluctuations in the quantum vacuum.

For a variety of reasons, some of which are outlined in “The Universe – Solved!” and many others which have come to light since I wrote that book, I suspect that ultimately matter is simply a property of an entity that is described purely by data and a set of rules, driven by a complex computational mechanism.  Our attempt to discover the nature of matter is synonymous with our attempt to discover those rules and associated fundamental constants (data).

In terms of other things that we can’t perceive, new age enthusiasts might call out ghosts, spirits, auras, and all sorts of other mysterious invisible and tenuous entities.


Given that we know that things exist that we can’t perceive, one has to wonder whether macroscopic objects, or even macroscopic entities driven by energies similar to those of humans, might be made from stuff that we can only tenuously detect, not unlike neutrinos or dark matter.  Scientists speculate about multiple dimensions and parallel universes via Hilbert space and other such constructs.  If such things exist (and wouldn’t it be inconsistent to speculate about, or work out the math for, such things if it weren’t possible for them to exist?), the rules that govern our interaction with them, across the dimensions, are clearly not at all understood.  That doesn’t mean that they aren’t possible.

In fact, the scientific world is filled with trends leading toward the implication of an information-based reality.

In which almost anything is possible.

Things We Can’t See

When you think about it, there is a great deal out there that we can’t see.

Our eyes only respond to a very narrow range of electromagnetic radiation.  The following diagram demonstrates just how narrow our range of vision is compared to the overall electromagnetic spectrum.

[Image: the electromagnetic spectrum, with the visible band highlighted]

So we can’t see anything that generates or reflects wavelengths equal to or longer than infrared, as the following image demonstrates.  Even the Hubble Space Telescope can’t see the distant infrared galaxy that the Spitzer Space Telescope can see with its infrared sensors.

(http://9-4fordham.wikispaces.com/Electro+Magnetic+Spectrum+and+light)

[Image: a distant galaxy in visible (Hubble) and infrared (Spitzer) views]

And we can’t see anything that generates or reflects wavelengths equal to or shorter than ultraviolet, as NASA imagery demonstrates.  Only instruments with special sensors that can detect ultraviolet or x-rays can see some of the objects in the sky.

Of course, we can’t see things that are smaller in size than about 40 microns, which includes germs and molecules.

We can’t see things that are camouflaged by technology, such as the Mercedes in the following picture.

[Image: a Mercedes camouflaged to blend into its background]

Sometimes, it isn’t our eyes that can’t sense something that is right in front of us, but rather, our brain.  We actually stare at our noses all day long but don’t notice because our brains effectively subtract it out from our perception, given that we don’t really need it.  Our brains also fill in the imagery that is missing from the blind spot that we all have due to the optic nerve in our retinas.

In addition to these limitations of static perception, there are significant limitations to how we perceive motion.  It actually does not take much in terms of speed to render something invisible to our perception.

Clearly, we can’t see something zip by as fast as a bullet, which might typically move at speeds of 700 mph or more.  And yet, a plane moving at 700 mph is easy to see from a distance.  Our limitations of motion perception are a function of the speed of the object and the size of the image that it casts upon the retina; e.g. for a given speed, the further away something is, the larger it has to be to register in our conscious perception.  This is because our perception of reality refreshes no more than 13-15 times per second, or roughly every 77 ms.  So, if something moves so fast that it passes through our frame of perception in less than 77 ms or so, or it is so small that it doesn’t make a significant impression within that time period, we simply won’t be aware of its existence.
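A quick calculation with the figures above shows why the bullet is invisible while the distant plane is not:

```python
# How far does a bullet travel during one ~77 ms "frame" of
# conscious perception?  (13 Hz refresh → 1/13 s ≈ 77 ms per frame.)
MPH_TO_MPS = 0.44704
bullet_speed_mps = 700 * MPH_TO_MPS    # ~313 m/s
frame_s = 1 / 13                       # ~0.077 s

distance_per_frame_m = bullet_speed_mps * frame_s
print(f"{distance_per_frame_m:.0f} m per perceptual frame")
# The bullet crosses roughly 24 m between refreshes; unless it casts
# a large image on the retina for that whole span, it never registers.
```

The plane covers the same ground, but because it is far away, its image stays on roughly the same patch of retina from frame to frame, so it registers easily.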

It makes one wonder what kinds of things may be in our presence but moving too quickly to be observed.  Some researchers have captured objects on high-speed cameras for which there appears to be no natural explanation – for example, a strange object captured on official NBC video at an NFL football game in 2011.  Whether these objects have mundane explanations or are hints of something a little more exotic, one thing is for certain: our eyes cannot capture them.  They are effectively invisible to us, yet exist in our reality.

In my next blog we will dive down the rabbit hole and explore the real possibilities that things exist around us that we can’t even touch.

Yesterday’s Sci-Fi is Tomorrow’s Technology

It is the end of 2011 and it has been an exciting year for science and technology.  Announcements about artificial life, earthlike worlds, faster-than-light particles, clones, teleportation, memory implants, and tractor beams have captured our imagination.  Most of these things would have been unthinkable just 30 years ago.

So, what better way to close out the year than to take stock of yesterday’s science fiction in light of today’s reality and tomorrow’s technology?  Here is my take:

[Image: yesterday’s sci-fi concepts vs. today’s technology]

Smart Phones as Transformative Devices

I live in Southern California, where, at any point in time, about 1 out of every 2 people is staring at a phone.  As a long-time iPhone owner, I have to admit that I also fall into that category.  Smart phones are simply so enticing – camera, stock ticker, weather forecast, stored music, videos, and photos, GPS, maps, email, texting, Twitter, Facebook, games, radio rebroadcasts, internet, newspapers, webcams, and so much more.  What’s not to love?

The internet is often hailed as a transformative invention, which it certainly was.  But it kind of pales in comparison to that Droid in your pocket.  After all, the smart phone puts the internet at your fingertips, which, by itself, is transformative in how people interact.  Instead of having to call your buddy after you get home to look up the factoid that you argued about at the bar, now you can settle it immediately.  And since the web app is just one of the thousands of apps that can be stored on the phone, it stands to reason that the transformative nature of the smart phone can be much more than the web.

For one thing, there is the impact on existing products and services.  Who needs a dedicated GPS anymore, when you have an iPhone?  Who needs terrestrial radio stations in your car when you can stream Pandora channels tailored to your interests?  Pagers? – a thing of the past.  With all of the market data available at your fingertips and mobile trading easily accessible, do we need the financial section of the newspaper anymore?  Or stockbrokers?  While consulting at a large toy manufacturer recently, I observed that people use smart phones to comparison shop on the fly.  You’re standing in front of a camera at an electronics superstore, and in seconds you can determine whether a competitor sells it cheaper.  Macy’s doesn’t have your size of that perfect shirt you found in the store?  Check online and find out who does.  I’m less inclined to stay at home to watch a game when I know I can keep track of my team at any time.  I don’t need to carry a pen when I can take notes on my phone.  Shazam has saved me tons of time figuring out what that song was that I just heard on the radio.

But it’s not all good.

How many deaths are attributed to texting and driving?  Reuters estimates over 2000 per year and growing.  Celebrity plastic surgeon Dr. Frank Ryan drove off the Pacific Coast Highway while texting about his dog last year.

Still, these are all relatively small impacts on our society.  The real transformation is in terms of socialization.  At a glance, you can determine which of your friends are near where you are dining or drinking, potentially enabling slightly more socialization.  But, to come back to my original point, what about all of those people staring at their phones all day?  If you are at a restaurant with your family or friends but are obsessed with tweeting, you aren’t really getting much out of the social outing.  When was the last time you made eye contact with someone walking down the street?  It’s kind of difficult if one or both people are staring at the device in their hand.  Maybe you just walked past the person who could become the love of your life.  You’ll never know it.  Maybe you just passed a former colleague who knows of the perfect new job for you.  Opportunity missed.  I even think that people are losing the ability to think.  Some of the best daydreaming, the best brainstorms, occur when you are out and about and simply thinking.  That doesn’t happen much anymore.  Standing at the curb waiting for the walk sign?  Might as well check email.  Waiting for an elevator?  Might as well see what’s going on on Facebook.  Sitting at a stoplight?  Might as well see if anyone responded to my last tweet.

We are doing less reading, more microblogging.  Less thinking, more context switching.  One has to assume that this will impact ideas, innovation, creativity.

Don’t get me wrong.  The last thing I am is a Luddite.  I embrace technology, I love technology.  For $10 I can download a Groovebox app for my iPad, the equivalent of which used to cost $600 and take up rack space.  I can’t wait to “goggle in” in “Snow Crash” parlance, and experience other realities.  But I also can’t help but wonder what we have lost whenever I watch two people crossing a street collide mid-intersection because they are both texting.

Oops, got a text message, gotta run…


Things We Can Never Comprehend

Have you ever wondered what we don’t know?  Or, to put it another way, how many mysteries of the universe are still to be discovered?

To take this thought a step further, have you ever considered that there may be things that we CAN’T understand, no matter how hard we try?

This idea may be shocking to some, especially to those scientists who believe that we are nearing the “Grand Unified Theory”, or “Theory of Everything” that will provide a simple and elegant solution to all forces, particles, and concepts in science.  Throughout history, the brightest of minds have been predicting the end of scientific inquiry.  In 1871, James Clerk Maxwell lamented the sentiment of the day which he represented by the statement “in a few years, all great physical constants will have been approximately estimated, and that the only occupation which will be left to men of science will be to carry these measurements to another place of decimals.”

Yet, why does it always seem like the closer we get to the answers, the more monkey wrenches get thrown in the way?  In today’s world, these include strange particles that don’t fit the model.  And dark matter.  And unusual gravitational aberrations in distant galaxies.

Perhaps we need a dose of humility.  Perhaps the universe, or multiverse, or whatever term is being used these days to denote “everything that is out there” is just too far beyond our intellectual capacity.  Before you call me out on this heretical thought, consider…

The UK’s Astronomer Royal Sir Martin Rees points out that “a chimpanzee can’t understand quantum mechanics.”  As Michael Brooks notes in his recent article “The limits of knowledge: Things we’ll never understand”, even though Richard Feynman claimed that nobody really understands quantum mechanics, the comprehension of something like quantum mechanics is simply beyond the capacity of certain species of animals, no matter how hard they might try.  Combine this realization with anthropologists’ estimate that the most recent common ancestor of humans and chimps (the CHLCA) lived about 6 million years ago, and we can draw a startling conclusion:

There are certainly things about our universe and reality that are completely beyond our ability to comprehend!

My reasoning is as follows.  Chimps are at least as intelligent as the CHLCA; otherwise evolution would have been working in reverse.  As an upper bound, let’s say that the CHLCA and chimps are equivalent.  The CHLCA was certainly not able to comprehend QM (nor relativity, nor even Newtonian physics), but over the roughly 6 million years it took to evolve into humans, our species became able to comprehend these things.  6 million years represents about 0.04% of the entire age of the universe (according to what we think we know).  That means that for 99.96% of the total time that the universe and life have been evolving up to the current point, the most advanced creature on earth was incapable of understanding even the most rudimentary concepts about the workings of reality and the universe.  And yet, are we to suppose that in the last 0.04% of that time, a species has evolved that can understand everything?  I’m sure you see how unlikely that is.
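The percentage is simple to check, using the ~6-million-year CHLCA estimate cited above and the conventional ~13.8-billion-year age of the universe:

```python
# Fraction of cosmic history it took for QM-comprehending minds to
# appear, from the CHLCA figure cited in the text.
CHLCA_YEARS = 6e6
AGE_OF_UNIVERSE_YEARS = 13.8e9

fraction_percent = 100 * CHLCA_YEARS / AGE_OF_UNIVERSE_YEARS
print(f"{fraction_percent:.3f}% of the universe's age")
```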

What if our universe was intelligently designed?  The same argument would probably hold.  For some entity to be capable of creating a universe that continues to baffle us no matter how much we think we understand, that entity must be far beyond our intelligence, and therefore has utilized, in the design, concepts that we can’t hope to understand.

Our only chance of being supremely capable of understanding our world would lie in the programmed reality model.  If the creator of our simulation were us, or even an entity just a little more advanced than us, it could lead us along a path of exploration and knowledge discovery that always seems to be just slightly beyond our grasp.  Doesn’t that idea feel familiar?


Is LIDA, the Software Bot, Really Conscious?

Researchers from the Cognitive Computing Research Group (CCRG) at the University of Memphis are developing a software bot known as LIDA (Learning Intelligent Distribution Agent) with what they believe to be cognition or conscious processes.  That belief rests on the idea that LIDA is modeled on a software architecture that mirrors what some believe to be the process of consciousness, called GWT, or Global Workspace Theory.  For example, LIDA follows a repetitive looping process that consists of taking in sensory input, writing it to memory, kicking off a process that scans this data store for recognizable events or artifacts, and, if something is recognized, it is broadcast to the global workspace of the system in a similar manner to the GWT model.  Timings are even tuned to more or less match human reaction times and processing delays.
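To make that looping process concrete, here is a toy sketch of a GWT-style cognitive cycle.  To be clear, this is not LIDA’s actual code: the class, names, and timings here are all hypothetical illustrations of the sense → store → scan → broadcast loop described above.

```python
# A toy sketch of a GWT-style cognitive cycle (hypothetical; NOT
# LIDA's real architecture or code).
from collections import deque

class ToyGlobalWorkspace:
    def __init__(self, known_patterns):
        self.memory = deque(maxlen=100)   # short-term sensory store
        self.known = set(known_patterns)  # "recognizable" artifacts
        self.broadcasts = []              # the global workspace log

    def cycle(self, stimulus):
        self.memory.append(stimulus)            # 1. write sensory input
        recognized = [m for m in self.memory    # 2. scan the store
                      if m in self.known]
        for item in recognized:                 # 3. broadcast any hits
            self.broadcasts.append(item)
        self.memory = deque(                    # 4. clear consumed items
            (m for m in self.memory if m not in self.known), maxlen=100)
        # (a real system would pause ~80 ms here, LIDA-style, to match
        #  human reaction times)

ws = ToyGlobalWorkspace(known_patterns={"face", "word"})
for stim in ["noise", "face", "static", "word"]:
    ws.cycle(stim)
print(ws.broadcasts)   # → ['face', 'word']
```

Note that every step in this loop is fully deterministic, which is exactly the issue raised below.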

I’m sorry guys, but just because you have designed a system to model the latest theory of how sensory processing works in the brain does not automatically make it conscious.  I could write an Excel macro with forced delays and process flows that resemble GWT.  Would that make my spreadsheet conscious?  I don’t THINK so.  Years ago I wrote a trading program that utilized the brain model du jour, known as neural networks.  Too bad it didn’t learn how to trade successfully, or I would be golfing tomorrow instead of going to work.  The fact is, it was entirely deterministic, as is LIDA, and there is no more reason to suspect that it was conscious than an assembly line at an automobile factory.

Then again, the standard scientific view (at least that held by most neuroscientists and biologists) is that our brain processing is also deterministic, meaning that, given the exact set of circumstances two different times (same state of memories in the brain, same set of external stimuli), the resulting thought process would also be exactly the same.  As such, so they would say, consciousness is nothing more than an artifact of the complexity of our brain.  An artifact?  I’m an ARTIFACT?

Following this reasoning from a logical standpoint, one would have to conclude that every living thing, including bacteria, has consciousness.  In that view of the world, it simply doesn’t make sense to assert that there might be some threshold of nervous system complexity, above which an entity is conscious and below which it is not.  It is just a matter of degree, and you can only argue about aspects of consciousness in a purely probabilistic sense; e.g. “most cats probably do not ponder their own existence.”  Taking this thought process a step further, one has to conclude that if consciousness is simply a by-product of neural complexity, then a computer that is equivalent to our brains in complexity must also be conscious.  Indeed, this is the position of many technologists who ponder artificial intelligence, and of futurists such as Ray Kurzweil.  And if this is the case, by logical extension, the simplest of electronic circuits is also conscious, in proportion to the degree to which bacteria are conscious relative to humans.  So, even an electronic circuit known as a flip-flop (or bi-stable multivibrator), which consists of a few transistors and stores a single bit of information, is conscious.  I wonder what it feels like to be a flip-flop?

Evidence abounds that there is more to consciousness than a complex system.  For one particularly well-researched data point, check out Pim van Lommel’s book “Consciousness Beyond Life.”  Or my book “The Universe – Solved!”

My guess is that consciousness consists of the combination of a soul and a processing component, like a brain, that allows that soul to experience the world.  This view is very consistent with that of many philosophers, mystics, and shamans throughout history and around the world (a confluence of consistent yet independent thought that is in itself very striking).  If true, a soul may someday decide to occupy a machine of sufficient complexity and design in order to experience what it is like to be the “soul in a machine.”  When that happens, we can truly say that the bot is conscious.  But it does not make sense to consider consciousness a purely deterministic emergent property.


Cold Fusion Heats Up

People generally associate the idea of cold fusion with electrochemists Stanley Pons and Martin Fleischmann.  However, experiments similar to the ones that led to their momentous announcement and equally momentous downfall were reported as far back as the 1920s.  Austrian scientists Friedrich Paneth and Kurt Peters reported the fusion of hydrogen into helium via a palladium mesh.  Around the same time, Swedish scientist J. Tandberg announced the same results from an electrolysis experiment using hydrogen and palladium.

Apparently, everyone had forgotten about those experiments when, in 1989, Stanley Pons and Martin Fleischmann from the University of Utah astonished the world with their announcement of a cold fusion experimental result.  Prior to this, it was considered impossible to generate a nuclear fusion reaction at anything less than the temperatures found at the core of the sun.  Standard nuclear reaction equations required temperatures in the millions of degrees to generate the energy needed to fuse light atomic nuclei into heavier elements, releasing more energy in the process than went into the reaction.  Pons and Fleischmann, however, claimed to have generated nuclear reactions at room temperature via an electrolysis process with heavy water (deuterium oxide) and palladium that produced excess energy, similar to the experiments of the 1920s.

When subsequent experiments initially failed to reproduce their results, they were ridiculed by the scientific community, to the point of being driven to leave their jobs and their country and continue their research in France.  But since then, despite the cultish skeptic community’s declaration that no one was able to repeat their experiment, nearly 15,000 similar experiments have been conducted, most of which have replicated cold fusion, including some done by scientists from Oak Ridge National Laboratory and the Russian Academy of Sciences.

According to a 50-page report on the recent state of cold fusion by Steven Krivit and Nadine Winocur, the effect has been reproduced at a rate of 83%.  “Experimenters in Japan, Romania, the United States, and Russia have reported a reproducibility rate of 100 percent.” (Plotkin, Marc J. “Cold Fusion Heating Up — Pending Review by U.S. Department of Energy.” Pure Energy Systems News Service, 27 March 2004.)  In 2005, tabletop cold fusion was reported at UCLA utilizing crystals and deuterium, and it was confirmed by Rensselaer Polytechnic Institute in 2006.  In 2007, a conference at MIT concluded that, with 3,000+ published studies from around the world, “the question of whether Cold Fusion is real is not the issue.  Now the question is whether or not it can be made commercially viable, and for that, some serious funding is needed.” (Wired, Aug. 22, 2007)  Still, the mainstream scientific community covers its ears, shuts its eyes, and shakes its head.

So now we have the latest demonstration of cold fusion, courtesy of Italian scientists Andrea Rossi and Sergio Focardi from the University of Bologna, who announced last month that they had developed a cold fusion device capable of producing 12,400 W of heat with an input of just 400 W.
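To put those claimed numbers in perspective, here is the energy gain they imply, spelled out as a quick sanity check (this is just arithmetic on the figures quoted above, not an endorsement of the claim):

```python
# Energy gain implied by the Rossi/Focardi announcement:
# a claimed 12,400 W of heat out for 400 W of electrical input.
heat_out_w = 12_400   # claimed thermal output, watts
power_in_w = 400      # claimed electrical input, watts

gain = heat_out_w / power_in_w
print(gain)  # 31.0 -- the device would have to output ~31x the power put in
```

Any conventional electric heater tops out at a gain of 1.0, which is exactly why a verified result like this would be momentous.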

The scientific basis for a cold fusion reaction will be discovered.  The only question is when.


WikiLeaks, Denial of Service Attacks, and Nanobot Clouds

The recent firestorm surrounding WikiLeaks reminds me of one of Neal Stephenson’s visions of the future, “The Diamond Age,” written back in 1995.  The web was only in its infancy, but Stephenson had already envisioned massive clouds of networked nanobots, some under control of the government, some under control of other entities.  Such nanobot swarms, also known as Utility Fogs, could be made to do pretty much anything: form a sphere of protection, gather information, inspect people and report back to a central server, or attack each other.  One swarm under the control of one organization may be at war with another swarm under the control of another.  That is our future.  Nanoterrorism.

A distributed denial of service attack (DDoS) is a network attack on a particular server or internet node.  It is often carried out by having thousands of computers saturate the target machine with packet requests, making it impossible for the machine to respond to normal HTTP requests and effectively bringing it to its knees, inaccessible on the internet.  The attacks are often coordinated by a central source that takes advantage of networks of already compromised computers (aka zombie computers, usually unbeknownst to their owners) infected via malware.  On command, these botnets initiate their attack with clever techniques called Smurf attacks, Ping floods, SYN floods, and other scary-sounding events.  An entire underground industry has built up around botnets, some of which number in the millions of machines.  Botnets can be leased by anyone who knows how to access them and has a few hundred dollars.  As a result, an indignant group can launch an attack on, say, the WikiLeaks site.  And in response, a WikiLeaks support group can launch a counterattack on its perceived enemies, such as MasterCard, Visa, and PayPal, for their decisions to terminate service for WikiLeaks.  That is our present.  Cyberterrorism.
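The core mechanism described above, honest traffic crowded out of a fixed-capacity server, can be captured in a toy model.  This is not attack code, just back-of-the-envelope arithmetic; all the request rates and the botnet size are illustrative assumptions of mine, not measurements:

```python
# Toy model of why a DDoS works: a server has a fixed request-handling
# capacity, and a flood of bogus requests crowds out legitimate traffic.

def fraction_served(capacity_rps, legit_rps, attack_rps):
    """Fraction of legitimate requests a server can answer, assuming it
    cannot distinguish attack traffic from real traffic and serves
    requests in proportion to their share of the total load."""
    total = legit_rps + attack_rps
    if total <= capacity_rps:
        return 1.0
    return capacity_rps / total

# Normal day: 5,000 req/sec of real traffic against a 10,000 req/sec server.
print(fraction_served(10_000, 5_000, 0))          # 1.0 -- everyone is served

# A hypothetical botnet of 100,000 zombies sending 10 req/sec each.
print(fraction_served(10_000, 5_000, 1_000_000))  # ~0.00995 -- effectively down
```

With less than 1% of real requests getting through, the site is unusable, even though the server itself is running flawlessly, which is what makes the attack so hard to defend against.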

Doesn’t it sound a lot like the nanoterrorism envisioned by Stephenson?  Except it is still grounded in the hardware.  As I see it, the equation of the future is:

Nanoterrorism = Cyberterrorism + Microrobotics + Moore’s Law + 20 years.

Can’t wait!


Why Worry about ET, Stephen Hawking?

Famous astrophysicist Stephen Hawking made the news recently when he called for us to stop attempting to contact ET.  No offense to Dr. Hawking and the other scientists who share this point of view, but I find the whole argument about dangerous ETs, to use a Vulcan phrase, “highly illogical.”

First of all, there is the whole issue of our ability to contact ET.  As I showed in my post “Could Gliesians be Watching Baywatch”, it is virtually impossible to communicate with any extraterrestrial civilization beyond our solar system without significant power and antenna gain.  The world’s most powerful radio astronomy dish, at Arecibo, has a gain of 60 dB, which means that it could barely detect a 100 kilowatt non-directional signal generated from a planet 20 light years away, such as Gliese 581g, and only if it were pointed right at it.  More to the point, what are the odds that such a civilization would be at the right level of technology to be communicating with us, using a technique that overlaps what we know?
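The scale of the problem is easy to see from a rough link-budget calculation.  The assumptions below are mine, not from the post: an isotropic 100 kW transmitter 20 light years away, observed at the 1.42 GHz hydrogen line through a 60 dB dish; the standard free-space path loss formula does the rest:

```python
import math

# Back-of-the-envelope interstellar link budget (illustrative assumptions).
C = 2.998e8            # speed of light, m/s
LIGHT_YEAR = 9.461e15  # meters per light year

d = 20 * LIGHT_YEAR    # distance to the hypothetical transmitter, m
f = 1.42e9             # observing frequency (hydrogen line), Hz
p_tx_dbm = 10 * math.log10(100_000 * 1000)  # 100 kW expressed in dBm
g_rx_db = 60           # receive antenna gain, Arecibo-class

# Free-space path loss in dB: 20*log10(4*pi*d/lambda)
wavelength = C / f
fspl_db = 20 * math.log10(4 * math.pi * d / wavelength)

p_rx_dbm = p_tx_dbm - fspl_db + g_rx_db
print(f"path loss: {fspl_db:.0f} dB, received power: {p_rx_dbm:.0f} dBm")
# → path loss: 381 dB, received power: -241 dBm
```

A received power around -241 dBm is astonishingly faint, detectable, if at all, only with narrowband searches and long integration times, which is why “barely detect” is the operative phrase above.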

Using the famous Drake equation, N=R*·fp·ne·fl·fi·fc·L, with the following best estimates for its parameters: R* = 10/year, fp = .5, ne = 2, fl = .5, fi = .001 (highly speculative), fc = .01, and L = 50 (the duration in years of a civilization’s radio-transmitting period), we get .0025 overlapping radio-wave civilizations per galaxy.  But if you then factor in the (im)probabilities of reaching those star systems (I used a megawatt of power into an Arecibo-sized radio telescope), the likelihood of another “advanced technology” civilization even developing radio waves, the odds that we happen to be pointing our radio telescope arrays at each other at the same time, and the odds that we are using the same frequency, we get a probability of 1.25E-22.  For those who don’t like scientific notation, how about .000000000000000000000125.  (Details will be in a forthcoming paper that I will post on this site.  I’ll replace this text with the link once it is up)
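The Drake equation arithmetic above is just a product of factors, so it is easy to check.  This sketch uses exactly the parameter estimates from the paragraph (the further improbability factors behind the 1.25E-22 figure are not spelled out here, so they are left out):

```python
# Drake equation: N = R* * fp * ne * fl * fi * fc * L,
# using the parameter estimates quoted in the post.
params = {
    "R_star": 10,   # star formation rate, stars/year
    "fp": 0.5,      # fraction of stars with planets
    "ne": 2,        # habitable planets per star that has planets
    "fl": 0.5,      # fraction of habitable planets that develop life
    "fi": 0.001,    # fraction of those that develop intelligence
    "fc": 0.01,     # fraction of those that develop radio technology
    "L": 50,        # years a civilization spends transmitting
}

N = 1.0
for value in params.values():
    N *= value
print(f"{N:.4f}")  # 0.0025 overlapping radio-wave civilizations per galaxy
```

Note how the result is dominated by the speculative small factors (fi, fc, L); double any one of them and N doubles too, which is why Drake-equation estimates vary so wildly.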

So why is Stephen Hawking worried about us sending a message that gets intercepted by ET?  Didn’t anyone do the math?

But there is a second science/sci-fi meme that I also find highly illogical: that malevolent ETs may want to mine our dear old Earth for some sort of mineral.  Really?  Are we to believe that ET has figured out how to transcend relativity, exceed the speed of light, and power a ship across the galaxy using technology far beyond our understanding, yet still lacks the ability to master the elements?  We have been transmuting elements for 70 years; even gold was artificially created by bombarding mercury atoms with neutrons as far back as 1941.  Gold could be created in an accelerator or nuclear reactor at any time, although making it practical from an economic standpoint may take a few more years.  However, if gold, or any particular element, were important enough to justify flying across the galaxy and subjugating another civilization, then economics should not be an issue.  Simple nuclear technology can create gold far more easily than it can power a spaceship through space at near light speed.

Even if our space-traveling friends needed something on Earth that couldn’t possibly be obtained through technology, would they really be so imperialistic as to invade and steal our resources?  Over the course of human evolution, as technology and knowledge have developed, so have our ethical sensibilities and social behavior.  Of course, there is still “Jersey Shore” and “Jackass,” but by and large our ethical values have advanced along with our technology, and there is no reason to think these wouldn’t also go hand in hand in any other civilization.

So while I get that science fiction needs a compelling rationale for ET invasion because it makes for a good story, I fail to understand the fear some scientists have that extraterrestrials will actually get all Genghis Khan on us.