Things We Can’t Feel – The Mystery Deepens

In my last blog “Things We Can’t See”, we explored the many different ways that our eyes, brains, and/or technology can fool us into seeing something that isn’t there or not seeing something that is.

So apparently, our sense of sight is not necessarily the most reliable sense in terms of identifying what is and isn’t in our objective reality.  We would probably suspect that our sense of touch is fairly foolproof; that is, if an object is “there”, we can “feel” it, right?

Not so fast.

First of all, we have a lot of the same problems with the brain as we did with the sense of sight.  The brain processes all of that sensory data from our nerve endings.  How do we know what the brain really does with that information?  Research shows that sometimes your brain can think that you are touching something that you aren’t or vice versa.  People who have lost limbs still have sensations in their missing extremities.  Hypnosis has been shown to have a significant effect in terms of pain control, which seems to indicate the mind’s capacity to override one’s tactile senses.  And virtual reality experiments have demonstrated the ability for the mind to be fooled into feeling something that isn’t there.

In addition, technology can be made to create havoc with our sense of touch, although the most dramatic of such effects are probably decades away.  Let me explain…

Computer scientist J. Storrs Hall developed the concept of a “Utility Fog.”  Imagine a “nanoscopic” object called a Foglet: an intelligent nanobot, capable of communicating with its peers, with arms that can hook together to form larger structures.  Trillions of these Foglets could conceivably fill a room and not be at all noticeable as long as they were in “invisible mode.”  In fact, not only might they be programmed to appear transparent to the eye, they might also be imperceptible to the touch.  This is not hard to imagine, if you allow that they could have sensors that detect your presence.  For example, if you punch your fist into a swarm of nanobots programmed to be imperceptible, they would sense your motion and move aside as you swung your fist through the air.  But at any point, they could conspire to form a structure – an impenetrable wall, for example.  And then your fist would be well aware of their existence.  In this way, technology may dramatically undermine our ability to determine what is really “there.”


But even now, long before nanobot swarms are possible, the mystery really begins, as we have to dive deeply into what is meant by “feeling” something.

Feeling is the result of a part of our body coming in contact with another object.  That contact is “felt” by the interaction between the molecules of the body and the molecules of the object.

Even solid objects are mostly empty space.  If subatomic particles, such as neutrons, were solid little billiard balls of mass, then 99.999999999999% of normal matter would still be empty space.  That is, of course, unless those particles themselves are not really solid matter, in which case even more of space is truly empty; more on that in a bit.
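Just to show where a number like that comes from, here is a quick back-of-the-envelope sketch in Python.  The atomic and nuclear radii are assumed, order-of-magnitude textbook values, not figures from this post:

```python
# Rough estimate of how "empty" an atom is.
# Assumed order-of-magnitude values (not taken from this post):
atomic_radius_m = 1e-10   # typical atomic radius
nuclear_radius_m = 1e-15  # typical nuclear radius

# Volume scales with the cube of the radius, so the fraction of an atom's
# volume occupied by the nucleus is (r_nucleus / r_atom)^3.
occupied_fraction = (nuclear_radius_m / atomic_radius_m) ** 3
empty_fraction = 1 - occupied_fraction

print(f"occupied fraction of the atom's volume: {occupied_fraction:.0e}")  # ~1e-15
print(f"empty fraction: {empty_fraction:.13%}")                            # ~99.9999999999999%
```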

So why don’t solid objects like your fist slide right through other solid objects like bricks?  Because of the repulsive effect that the electromagnetic force from the electrons in the fist applies against the electromagnetic force from the electrons in the brick.

But what about that neutron?  What is it made of?  Is it solid?  Is it made of the same stuff as all other subatomic particles?

The leading theories of matter do not favor the idea that subatomic particles are like little billiard balls of differing masses.  For example, string theorists speculate that all particles are made of the same stuff; namely, vibrating bits of string.  Except that they each vibrate at different frequencies.  Problem is, string theory is purely theoretical and really falls more in the mathematical domain than the scientific domain, inasmuch as there is no supporting evidence for the theory.  If it does turn out to be true, even the neutron is mostly empty space because the string is supposedly one-dimensional, with a theoretical cross section of a Planck length.

Here’s where it gets really interesting…

Neutrinos are extremely common yet extremely elusive particles of matter.  About 100 trillion neutrinos generated in the sun pass through our bodies every second.  Yet they barely interact at all with ordinary matter.  Neutrino capture experiments consist of configurations such as a huge underground tank containing 100,000 gallons of tetrachloroethylene buried nearly a mile below the surface of the earth.  100 billion neutrinos strike every square centimeter of the tank per second.  Yet, any particular molecule of tetrachloroethylene is likely to interact with a neutrino only once every 10^36 seconds (billions of billions of times the age of the universe).

The argument usually given for the neutrino’s elusiveness is that it is massless (and therefore not easily captured by a nucleus) and chargeless (and therefore not subject to the electromagnetic force).  Then again, photons are massless and chargeless and are easily captured, as anyone who has spent too much time in the sun can attest.  So there has to be some other reason that we can’t detect neutrinos.  Unfortunately, given the current understanding of particle physics, no good answer is forthcoming.

And then there is dark matter.  This concept is the current favorite explanation for certain anomalies in the orbital speeds of galaxies.  Gravity from the visible matter alone can’t explain the anomalies, so dark matter is inferred.  If it really exists, it represents about 83% of the mass in the universe, but it doesn’t interact with any of the known forces except gravity.  This means that dark matter is all around us; we just can’t see it or feel it.

So it seems that modern physics allows for all sorts of types of matter that we can’t see or feel.  When you get down to it, the reason for this is that we don’t understand what matter is at all.  According to the standard model of physics, particles should have no mass, unless there is a special quantum field that pervades the universe and gives rise to mass upon interacting with those particles.  Unfortunately, for that to have any credibility, the signature particle, the Higgs boson, would have to exist.  Thus far, it seems to be eluding even the most powerful of particle colliders.  One alternative theory of matter has it being an emergent property of particle fluctuations in the quantum vacuum.

For a variety of reasons, some of which are outlined in “The Universe – Solved!” and many others which have come to light since I wrote that book, I suspect that ultimately matter is simply a property of an entity that is described purely by data and a set of rules, driven by a complex computational mechanism.  Our attempt to discover the nature of matter is synonymous with our attempt to discover those rules and associated fundamental constants (data).

In terms of other things that we can’t perceive, new age enthusiasts might call out ghosts, spirits, auras, and all sorts of other mysterious invisible and tenuous entities.


Given that we know that things exist that we can’t perceive, one has to wonder if it might be possible for macroscopic objects, or even macroscopic entities that are driven by similar energies as humans, to be made from stuff that we can only tenuously detect, not unlike neutrinos or dark matter.  Scientists speculate about multiple dimensions and parallel universes via Hilbert Space and other such constructs.  If such things exist (and wouldn’t it be hypocritical of anyone to speculate or work out the math for such things if it weren’t possible for them to exist?), the rules that govern our interaction with them, across the dimensions, are clearly not at all understood.  That doesn’t mean that they aren’t possible.

In fact, the scientific world is filled with trends leading toward the implication of an information-based reality.

In which almost anything is possible.

Things We Can’t See

When you think about it, there is a great deal out there that we can’t see.

Our eyes only respond to a very narrow range of electromagnetic radiation.  The following diagram demonstrates just how narrow our range of vision is compared to the overall electromagnetic spectrum.

[Diagram: the electromagnetic spectrum]

So we can’t see anything that generates or reflects wavelengths equal to or longer than infrared, as the following image demonstrates.  Even the Hubble Space Telescope can’t see the distant infrared galaxy that the Spitzer Space Telescope can see with its infrared sensors.

(http://9-4fordham.wikispaces.com/Electro+Magnetic+Spectrum+and+light)

[Image: a distant galaxy seen in visible light by Hubble and in infrared by Spitzer]

And we can’t see anything that generates or reflects wavelengths equal to or shorter than ultraviolet, as NASA’s imagery demonstrates.  Only instruments with special sensors that can detect ultraviolet or X-rays can see some of the objects in the sky.

Of course, we can’t see things that are smaller in size than about 40 microns, which includes germs and molecules.


We can’t see things that are camouflaged by technology, such as the Mercedes in the following picture.

[Image: the camouflaged “invisible” Mercedes]

Sometimes, it isn’t our eyes that can’t sense something that is right in front of us, but rather, our brain.  We actually stare at our noses all day long but don’t notice, because our brains effectively subtract them out of our perception, given that we don’t really need to see them.  Our brains also fill in the imagery that is missing from the blind spot that we all have where the optic nerve passes through the retina.

In addition to these limitations of static perception, there are significant limitations to how we perceive motion.  It actually does not take much in terms of speed to render something invisible to our perception.

Clearly, we can’t see something zip by as fast as a bullet, which might typically move at speeds of 700 mph or more.  And yet, a plane moving at 700 mph is easy to see from a distance.  Our limitations of motion perception are a function of the speed of the object and the size of the image that it casts upon the retina; e.g. for a given speed, the further away something is, the larger it has to be to register in our conscious perception.  This is because our perception of reality refreshes no more than 13-15 times per second, which works out to roughly every 77 ms at the slow end.  So, if something moves so fast that it passes through our frame of perception in less than 77 ms or so, or it is so small that it doesn’t make a significant impression on our conscious perception within that time period, we simply won’t be aware of its existence.
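To make the speed-versus-distance trade-off concrete, here is a rough sketch (my own illustration, not a vision-science model).  It assumes a roughly 120-degree horizontal field of view and reuses the ~77 ms refresh figure quoted above, and it only captures the crossing-time half of the argument, not the retinal image-size half:

```python
import math

# A rough sketch of the speed/distance trade-off described above.
# Assumptions (not from the post): a ~120-degree horizontal field of view;
# the ~77 ms perceptual refresh figure is the one quoted in the text.
FOV_DEGREES = 120
REFRESH_S = 0.077
MPH_TO_MPS = 0.44704

def time_in_view_s(speed_mps, distance_m):
    """Time an object takes to cross the visual field at a given distance."""
    field_width_m = 2 * distance_m * math.tan(math.radians(FOV_DEGREES / 2))
    return field_width_m / speed_mps

speed = 700 * MPH_TO_MPS  # ~313 m/s, for both the bullet and the plane

for label, distance_m in [("bullet passing 2 m away", 2), ("plane 5 km away", 5000)]:
    t = time_in_view_s(speed, distance_m)
    verdict = "perceptible" if t > REFRESH_S else "likely invisible"
    print(f"{label}: crosses the field of view in {t:.3f} s -> {verdict}")
```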

It makes one wonder what kinds of things may be in our presence, but moving too quickly to be observed.  Some researchers have captured objects on high-speed cameras for which there appears to be no natural explanation.  For example, there is a strange object captured on official NBC video at an NFL football game in 2011.  Whether these objects have mundane explanations or might be hints of something a little more exotic, one thing is for certain: our eyes cannot capture them.  They are effectively invisible to us, yet exist in our reality.

In my next blog we will dive down the rabbit hole and explore the real possibilities that things exist around us that we can’t even touch.

FTL Neutrinos are not Dead Yet!

So, today superluminal neutrinos are out.  Another experiment called ICARUS, from the same laboratory whence the OPERA results came, recently announced its finding that neutrinos do not travel faster than light.

It is a little surprising how eager scientists were to get that experimental anomaly behind them.  Almost as if the whole idea so threatened the foundation of their world that they couldn’t wait to jump on the anti-FTL-neutrino bandwagon.  For a complete non sequitur, I am reminded of the haste with which Oswald was fingered as JFK’s assassin.  No trial needed.  Let’s just get this behind us.

A blog on the Discover Magazine site referred to this CERN announcement as “the nail in the coffin” of superluminal neutrinos.  Nature magazine reported that Adam Falkowski, a physicist from the University of Paris-South, said, “The OPERA case is now conclusively closed.”

Really?

Since when are two conflicting results an indication that one of them is conclusive?  It seems to me that until the reason for OPERA’s superluminal results is determined, the case is still open.

In software engineering, there is such a thing as a non-reproducible defect.  A record of the defect is opened, and if the defect is not reproducible, it just sits there.  Over time, if the defect is no longer observed, it becomes less and less relevant and its priority decreases.  Eventually, one assumes that it was due to “user error” or something similar, and it loses status as a bona fide defect.

The same should hold for anomalous FTL events.  If they are reproducible, we have new physics.  If not, it is still an anomaly to be investigated and root-cause analyzed.

In fact, interestingly enough, the arXiv article shows that the average neutrino in the NEW experiment still arrived 0.3 ns earlier than light speed would predict, and more neutrinos were reported arriving faster than light than slower.  Admittedly, this is well within the experimental error bars, but it does seem to indicate that neutrinos travel at c, the speed of light, which would mean that they should not have any mass.  Yet other experiments indicate that they do indeed have mass.

And then there was the result of the MINOS experiment in 2007, which also indicated faster-than-light neutrinos, although not at as statistically significant a level as OPERA’s.

So, we are still left with many neutrino anomalies:

– Two experiments that indicate faster than light speeds.
– Conflicting experiments regarding the possibility of neutrino mass.
– Mysterious transformations of one type of neutrino to another mid-flight.
– And the very nature of their tenuous interaction with “normal” matter, not unlike dark matter.

Theories abound regarding the possibilities of neutrinos or dark matter existing in, or traveling through, higher dimensions.

How can anyone be so confident that there is a nail in the coffin of any scientific anomaly?


The Observer Effect and Entanglement are Practically Requirements of Programmed Reality

Programmed Reality has been an incredibly successful concept in terms of explaining the paradoxes and anomalies of Quantum Mechanics, including non-Reality, non-Locality, the Observer Effect, Entanglement, and even the Retrocausality of John Wheeler’s Delayed Choice Quantum Eraser experiment.

I came up with those explanations by thinking about how Programmed Reality could explain such curiosities.

But I thought it might be interesting to view the problem in the reverse manner.  If one were to design a universe-simulating Program, what kinds of curiosities might result from an efficient design?  (Note: I fully realize that any entity advanced enough to simulate the universe probably has a computational engine that is far more advanced than we can even imagine; most definitely not of the von Neumann variety.  Yet, we can only work with what we know, right?)

So, if I were to create such a thing, for instance, I would probably model data in the following manner:

For any space unobserved by a conscious entity, there is no sense in creating the reality for that space in advance.  It would unnecessarily consume too many resources.

For example, consider the cup of coffee on your desk.  Is it really necessary to model every single subatomic particle in the cup of coffee in order to interact with it in the way that we do?  Of course not.  The total amount of information contained in that cup of coffee necessary to stimulate our senses in the way that it does (generate the smell that it does; taste the way it does; feel the way it does as we drink it; swish around in the cup the way that it does; have the little nuances, like tiny bubbles, that make it look real; have the properties of cooling at the right rate to make sense, etc.) might be 10 MB or so.  Yet, the total potential information content in a cup of coffee is something like 100,000,000,000 MB, so a compression ratio of perhaps 10 billion could be applied to an ordinary object.
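Restating that arithmetic explicitly, using just the two figures quoted above:

```python
# The compression argument above, using the two figures from the text.
perceived_model_mb = 10            # a "good enough" sensory model of the coffee
full_detail_mb = 100_000_000_000   # the post's figure for particle-level detail

ratio = full_detail_mb / perceived_model_mb
print(f"compression ratio: ~{ratio:,.0f} to 1")  # ~10,000,000,000 to 1 (ten billion)
```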

But once you decide to isolate an atom in that cup of coffee and observe it, the Program would then have to establish a definitive position for that atom, effectively resulting in the collapse of the wave function, or decoherence.  Moreover, the complete behavior of the atom, at that point, might be forever under control of the program.  After all, why delete the model once it has been observed, in the (probably fairly likely) event that it will be observed again at some point in the future?  Thus, the atom would have to be described by a finite state machine.  Its behavior would be decided by randomly picking values of the parameters that drive that behavior, such as atomic decay.  In other words, we have created a little mini finite state machine.

So, the process of “zooming in” on reality in the Program would have to result in exactly the type of behavior observed by quantum physicists.  In other words, in order to be efficient, resource-wise, the Program decoheres only the space and matter that it needs to.

Let’s say we zoom in on two particles at the same time; two that are in close proximity to each other.  Both would have to be decohered by the Program.  The decoherence would result in the creation of two mini finite state machines.  Using the same random number seed for both will cause the state machines to forever behave in an identical manner.

No matter how far apart you take the particles.  i.e…

Entanglement!

So, Observer Effect and Entanglement might both be necessary consequences of an efficient Programmed Reality algorithm.
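Here is a toy sketch of that shared-seed idea.  It is only an illustration of the analogy, not a model of real quantum statistics, and the class name and seed value are invented for the example:

```python
import random

# A toy illustration of the shared-seed idea above. This is NOT a simulation of
# real quantum mechanics; it only shows how two state machines seeded identically
# stay perfectly correlated no matter how far apart they are "carried."

class ParticleStateMachine:
    """A hypothetical mini finite state machine standing in for a decohered particle."""

    def __init__(self, seed):
        self.rng = random.Random(seed)  # the particle's private pseudo-random stream

    def measure_spin(self):
        # Each "measurement" advances the state machine by one step.
        return self.rng.choice(["up", "down"])

shared_seed = 42  # hypothetical seed the Program assigns at decoherence time
particle_a = ParticleStateMachine(shared_seed)  # stays in the lab
particle_b = ParticleStateMachine(shared_seed)  # taken to the other side of the galaxy

for _ in range(5):
    a, b = particle_a.measure_spin(), particle_b.measure_spin()
    print(a, b, "<- always identical, regardless of separation")
```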


Pathological Skepticism

“All great truths began as blasphemies” – George Bernard Shaw

  • In the 1800s, the scientific community viewed reports of rocks falling from the sky as “pseudoscience” and those who reported them as “crackpots,” only because it didn’t fit in with the prevailing view of the universe. Today, of course, we recognize that these rocks could be meteorites, and such reports are now properly investigated.
  • In 1827, Georg Ohm’s initial publication of what became “Ohm’s Law” met with ridicule and dismissal, and was called “a web of naked fantasies.” The German Minister of Education proclaimed that “a professor who preached such heresies was unworthy to teach science.” Twenty years passed before scientists began to recognize its importance.
  • Louis Pasteur’s theory of germs was called “ridiculous fiction” by Pierre Pachet, Professor of Physiology at Toulouse, in 1872.
  • Spanish researcher Marcelino de Sautuola discovered cave art in Altamira cave (northern Spain), which he recognized as stone age, and published a paper about it in 1880.  His integrity was violently attacked by the archaeological community, and he died disillusioned and broken.  Yet he was vindicated 10 years after his death.
  • Lord Haldane, the Minister of War in Britain, said that “the aeroplane will never fly” in 1907.  Ironically, this was four years after the Wright Brothers made their first successful flight at Kitty Hawk, North Carolina.  After Kitty Hawk, the Wrights flew in open fields next to a busy rail line in Dayton, OH for almost an entire year. US authorities refused to come to the demos, while Scientific American published stories about “The Lying Brothers.”
  • In 1964, physicist George Zweig proposed the existence of quarks.  As a result of this theory, he was rejected for a position at a major university and considered a “charlatan.”  Today, of course, quarks are an accepted part of the Standard Model.

Note that these aren’t just passive disagreements.  The skeptics use active and angry language, with words like “charlatan,” “ridiculous,” “lying,” “crackpot,” and “pseudoscience.”

This is partly due to a natural psychological effect, known as “fear of the unknown” or “fear of change.”  Psychologists who have studied human behavior have more academic-sounding names for it, such as the “Mere Exposure Effect,” the “Familiarity Principle,” or Neophobia (something that might have served Agent Smith well).  Ultimately, this may be an artifact of evolution.  Hunter-gatherers did not pass on their genes if they had a habit of eating weird berries, venturing too close to the saber-toothed cats, or engaging in other unconventional activities.  But we are no longer hunter-gatherers.  For the most part, we shouldn’t fear the unknown.  We should feel empowered to challenge assumptions.  The scientific method can weed out the bad ideas naturally.

But, have you also noticed how the agitation ratchets up the more you enter the realm of the “expert?”

“The expert knows more and more about less and less until he knows everything about nothing.” – Mahatma Gandhi

This is because the expert may have a lot to lose if they stray too far from the status quo.  Their research funding, tenure, jobs, and reputations are all at stake.  This is unfortunate, because it feeds this unhealthy behavior.

So I thought I would do my part to remind experts and non-experts alike that breakthroughs only occur when we challenge conventional thinking, and we shouldn’t be afraid of them.

The world is full of scared “experts,” but nobody will ever hear of them.  They will, however, hear about the brave ones who weren’t afraid to challenge the status quo.  People like Copernicus, Einstein, Georg Ohm, Steve Jobs, and Elon Musk.

And it isn’t like we are so enlightened today that such pathological skepticism no longer occurs.

Remember Stanley Pons and Martin Fleischmann?  Respected electrochemists, ridiculed out of their jobs and their country by skeptics.  Even “experts” violently contradicted each other:

  • “It’s pathological science,” said physicist Douglas Morrison, formerly of CERN. “The results are impossible.”
  • “There’s very strong evidence that low-energy nuclear reactions do occur,” said George Miley (who received the Edward Teller Medal for research in hot fusion). “Numerous experiments have shown definitive results – as do my own.”

Some long-held assumptions are being overturned as we speak.  Like LENR (Low Energy Nuclear Reactions), the new, less provocative name for cold fusion.

And maybe the speed of light as an ultimate speed limit.

These are exciting times for science and technology.  Let’s stay open minded enough to keep them moving.

Yesterday’s Sci-Fi is Tomorrow’s Technology

It is the end of 2011 and it has been an exciting year for science and technology.  Announcements about artificial life, earthlike worlds, faster-than-light particles, clones, teleportation, memory implants, and tractor beams have captured our imagination.  Most of these things would have been unthinkable just 30 years ago.

So, what better way to close out the year than to take stock of yesterday’s science fiction in light of today’s reality and tomorrow’s technology.  Here is my take:

[Table: yesterday’s sci-fi concepts compared with today’s reality and tomorrow’s technology]

Abiotic Oil or Panspermia – Take Your Pick

Astronomers from the University of Hong Kong investigated infrared emissions from deep space, and everywhere they looked, they found signatures of complex organic matter.

You read that right.  Complex organic molecules; the kind that are the building blocks of life!

How they are created in the stellar infernos is a complete mystery.  The chemical structure of these molecules is similar to that of coal or oil, which, according to mainstream science, come from ancient biological material.

So, there seem to be only two explanations, each of which has astounding implications.

One possibility is that the molecules responsible for these spectral signatures are truly organic, in the biological “earth life” sense of the word.  I don’t think I have to point out the significance of that possibility.  It would certainly give new credence to the panspermia theory, suggesting that we are but distant relatives or descendants of life forms that permeate the universe.  ETs are our brothers.

The other possibility is that these molecules are organic but not of biological origin.  Instead, they are somehow created within the star itself.  Given that they resemble the organic molecules in coal and oil, and that the earth was created from the same protoplanetary disk that formed our sun, this would seem to indicate that if such molecules can be generated non-biologically in stars, then oil and coal are probably also not created from biological organic material.

In other words, this discovery seems to lend a lot of support to the abiotic oil theory.

That or we have evidence that we are not alone.

Either way, a significant find.

Buried in the news.

Things We Can Never Comprehend

Have you ever wondered what we don’t know?  Or, to put it another way, how many mysteries of the universe are still to be discovered?

To take this thought a step further, have you ever considered that there may be things that we CAN’T understand, no matter how hard we try?

This idea may be shocking to some, especially to those scientists who believe that we are nearing the “Grand Unified Theory,” or “Theory of Everything,” that will provide a simple and elegant solution to all forces, particles, and concepts in science.  Throughout history, the brightest of minds have been predicting the end of scientific inquiry.  In 1871, James Clerk Maxwell lamented the prevailing sentiment of the day, which he summarized as: “in a few years, all great physical constants will have been approximately estimated, and that the only occupation which will be left to men of science will be to carry these measurements to another place of decimals.”

Yet, why does it always seem like the closer we get to the answers, the more monkey wrenches get thrown in the way?  In today’s world, these include strange particles that don’t fit the model.  And dark matter.  And unusual gravitational aberrations in distant galaxies.

Perhaps we need a dose of humility.  Perhaps the universe, or multiverse, or whatever term is being used these days to denote “everything that is out there” is just too far beyond our intellectual capacity.  Before you call me out on this heretical thought, consider…

The UK’s Astronomer Royal, Sir Martin Rees, points out that “a chimpanzee can’t understand quantum mechanics.”  As Michael Brooks points out in his recent article “The limits of knowledge: Things we’ll never understand,” no matter how hard it might try, the comprehension of something like quantum mechanics is simply beyond the capacity of certain species of animals (notwithstanding Richard Feynman’s famous claim that nobody really understands quantum mechanics).  Faced with this realization, and with the fact that anthropologists estimate that the most recent common ancestor of humans and chimps (aka the CHLCA) lived roughly 6 to 8 million years ago, we can draw a startling conclusion:

There are certainly things about our universe and reality that are completely beyond our ability to comprehend!

My reasoning is as follows. Chimps are certainly at least as intelligent as the CHLCA; otherwise evolution would be working in reverse.  As an upper bound, let’s say that the CHLCA and chimps are equivalent in intelligence.  The CHLCA was certainly not able to comprehend QM (nor relativity, nor even Newtonian physics), but upon evolving into humans over roughly 8 million years, our new species was able to comprehend these things.  8 million years represents about 0.06% of the entire age of the universe (according to what we think we know).  That means that for 99.94% of the total time that the universe and life were evolving up to the current point in time, the most advanced creature on earth was incapable of understanding even the most rudimentary concepts about the workings of reality and the universe.  And yet, are we to suppose that in the last 0.06% of that time, a species has evolved that can understand everything?  I’m sure you see how unlikely that is.
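The arithmetic behind those percentages, assuming the standard ~13.8-billion-year age of the universe and the 8-million-year figure above:

```python
# The timeline arithmetic behind the argument above.
age_of_universe_years = 13.8e9   # standard estimate
time_since_chlca_years = 8e6     # upper end of the 6-8 million year range

fraction = time_since_chlca_years / age_of_universe_years
print(f"time since the CHLCA: {fraction:.2%} of the universe's age")  # ~0.06%
print(f"time before any QM-capable species: {1 - fraction:.2%}")      # ~99.94%
```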

What if our universe was intelligently designed?  The same argument would probably hold.  For some entity to be capable of creating a universe that continues to baffle us no matter how much we think we understand, that entity must be far beyond our intelligence, and therefore has utilized, in the design, concepts that we can’t hope to understand.

Our only chance of being supremely capable of understanding our world would lie in the programmed reality model.  If the creator of our simulation was us, or even an entity a little more advanced than us, it could lead us along a path of exploration and knowledge discovery that always seems to be just slightly beyond our grasp.  Doesn’t that idea feel familiar?


Is LIDA, the Software Bot, Really Conscious?

Researchers from the Cognitive Computing Research Group (CCRG) at the University of Memphis are developing a software bot known as LIDA (Learning Intelligent Distribution Agent) with what they believe to be cognition or conscious processes.  That belief rests on the idea that LIDA is modeled on a software architecture that mirrors what some believe to be the process of consciousness, called GWT, or Global Workspace Theory.  For example, LIDA follows a repetitive looping process that consists of taking in sensory input, writing it to memory, kicking off a process that scans this data store for recognizable events or artifacts, and, if something is recognized, broadcasting it to the global workspace of the system, in a manner similar to the GWT model.  Timings are even tuned to more or less match human reaction times and processing delays.
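To make that loop concrete, here is a minimal sketch of a Global-Workspace-style cycle in Python.  It is my own illustration of the architecture as described, not LIDA’s actual code, and every name in it is invented for the example:

```python
import time

# A minimal sketch of a Global-Workspace-style cognitive cycle, loosely following
# the description above. This is an illustration only, NOT LIDA's actual code;
# all of these names are invented for the example.

def gwt_cycle(sense, memory, recognizers, broadcast, cycle_ms=100):
    """One perceive -> store -> recognize -> broadcast cycle."""
    percept = sense()                 # 1. take in sensory input
    memory.append(percept)            # 2. write it to memory
    for recognize in recognizers:     # 3. scan the data store for recognizable events
        hit = recognize(memory)
        if hit is not None:
            broadcast(hit)            # 4. recognized content goes to the global workspace
    time.sleep(cycle_ms / 1000.0)     # crude stand-in for human-scale timing

# Trivial stand-ins, just to make the loop runnable:
memory = []
sense = lambda: {"t": time.time(), "signal": "ping"}
recognizers = [lambda mem: mem[-1] if mem and mem[-1]["signal"] == "ping" else None]
broadcast = lambda content: print("broadcast to workspace:", content)

for _ in range(3):
    gwt_cycle(sense, memory, recognizers, broadcast)
```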

I’m sorry guys, but just because you have designed a system to model the latest theory of how sensory processing works in the brain does not automatically make it conscious.  I could write an Excel macro with forced delays and process flows that resemble GWT.  Would that make my spreadsheet conscious?  I don’t THINK so.  Years ago I wrote a trading program that utilized the brain model du jour, known as neural networks.  Too bad it didn’t learn how to trade successfully, or I would be golfing tomorrow instead of going to work.  The fact is, it was entirely deterministic, as is LIDA, and there is no more reason to suspect that it was conscious than an assembly line at an automobile factory.

Then again, the standard scientific view (at least that held by most neuroscientists and biologists) is that our brain processing is also deterministic, meaning that, given the exact set of circumstances two different times (same state of memories in the brain, same set of external stimuli), the resulting thought process would also be exactly the same.  As such, so they would say, consciousness is nothing more than an artifact of the complexity of our brain.  An artifact?  I’m an ARTIFACT?

Following this reasoning from a logical standpoint, one would have to conclude that every living thing, including bacteria, has consciousness. In that view of the world, it simply doesn’t make sense to assert that there might be some threshold of nervous system complexity, above which an entity is conscious and below which it is not.  It is just a matter of degree, and you can only argue about aspects of consciousness in a purely probabilistic sense; e.g. “most cats probably do not ponder their own existence.”  Taking this thought process a step further, one has to conclude that if consciousness is simply a by-product of neural complexity, then a computer that is equivalent to our brains in complexity must also be conscious.  Indeed, this is the position of many technologists who ponder artificial intelligence, and of futurists such as Ray Kurzweil.  And if this is the case, by logical extension, the simplest of electronic circuits is also conscious, in proportion to the degree to which bacteria are conscious relative to humans.  So, even an electronic circuit known as a flip-flop (or bi-stable multivibrator), which consists of a few transistors and stores a single bit of information, is conscious.  I wonder what it feels like to be a flip-flop?

Evidence abounds that there is more to consciousness than a complex system.  For one particular and very well researched data point, check out Pim van Lommel’s book “Consciousness Beyond Life.”  Or my book “The Universe – Solved!”

My guess is that consciousness consists of the combination of a soul and a processing component, like a brain, that allows that soul to experience the world.  This view is very consistent with that of many philosophers, mystics, and shamans throughout history and throughout the world (which confluence of consistent yet independent thought is in itself very striking).  If true, a soul may someday make a decision to occupy a machine of sufficient complexity and design to experience what it is like to be the “soul in a machine”.  When that happens, we can truly say that the bot is conscious.  But it does not make sense to consider consciousness a purely deterministic emergent property.


Cold Fusion Heats Up

People generally associate the idea of cold fusion with electrochemists Stanley Pons and Martin Fleischmann.  However, experiments similar to the ones that led to their momentous announcement and equally momentous downfall were reported as far back as the 1920s.  Austrian scientists Friedrich Paneth and Kurt Peters reported the fusion of hydrogen into helium via a palladium mesh.  Around the same time, Swedish scientist J. Tandberg announced the same results from an electrolysis experiment using hydrogen and palladium.

Apparently, everyone forgot about those experiments when, in 1989, Stanley Pons and Martin Fleischmann from the University of Utah astonished the world with their announcement of a cold fusion experimental result.  Prior to this it was considered impossible to generate a nuclear fusion reaction at anything less than the temperatures found at the core of the sun.  Standard nuclear reaction equations required temperatures in the millions of degrees to generate the energy needed to fuse light atomic nuclei together into heavier elements, in the process releasing more energy than went into the reaction.  Pons and Fleischmann, however, claimed to generate nuclear reactions at room temperature via an electrolysis reaction with heavy water (deuterium oxide) and palladium that produced excess energy, similar to the experiments of the 1920s.

When subsequent experiments initially failed to reproduce their results, they were ridiculed by the scientific community, even to the point of being driven to leave their jobs and their country and continue their research in France.  But since then, despite the fact that the cultish skeptic community declared that no one was able to repeat their experiment, nearly 15,000 similar experiments have been conducted, most of which have replicated cold fusion, including those done by scientists from Oak Ridge National Laboratory and the Russian Academy of Science.

According to a 50-page report on the recent state of cold fusion by Steven Krivit and Nadine Winocur, the effect has been reproduced at a rate of 83%.  “Experimenters in Japan, Romania, the United States, and Russia have reported a reproducibility rate of 100 percent.” (Plotkin, Marc J. “Cold Fusion Heating Up — Pending Review by U.S. Department of Energy.” Pure Energy Systems News Service, 27 March 2004.)  In 2005, tabletop cold fusion was reported at UCLA utilizing crystals and deuterium, and it was confirmed by Rensselaer Polytechnic Institute in 2006.  In 2007, a conference at MIT concluded that with 3,000+ published studies from around the world, “the question of whether Cold Fusion is real is not the issue.  Now the question is whether or not it can be made commercially viable, and for that, some serious funding is needed.” (Wired; Aug. 22, 2007)  Still, the mainstream scientific community covers its ears, shuts its eyes, and shakes its head.

So now we have the latest demonstration of cold fusion, courtesy of Italian scientists Andrea Rossi and Sergio Focardi from the University of Bologna, who announced last month that they developed a cold fusion device capable of producing 12,400 W of heat power with an input of just 400 W.

The scientific basis for a cold fusion reaction will be discovered.  The only question is when.
