Things We Can Never Comprehend

Have you ever wondered what we don’t know?  Or, to put it another way, how many mysteries of the universe are still to be discovered?

To take this thought a step further, have you ever considered that there may be things that we CAN’T understand, no matter how hard we try?

This idea may be shocking to some, especially to those scientists who believe that we are nearing the “Grand Unified Theory,” or “Theory of Everything,” that will provide a simple and elegant solution to all forces, particles, and concepts in science.  Throughout history, the brightest of minds have been predicting the end of scientific inquiry.  In 1871, James Clerk Maxwell lamented the sentiment of the day, which held that “in a few years, all great physical constants will have been approximately estimated, and that the only occupation which will be left to men of science will be to carry these measurements to another place of decimals.”

Yet, why does it always seem like the closer we get to the answers, the more monkey wrenches get thrown into the works?  In today’s world, these include strange particles that don’t fit the model.  And dark matter.  And unusual gravitational anomalies in distant galaxies.

Perhaps we need a dose of humility.  Perhaps the universe, or multiverse, or whatever term is being used these days to denote “everything that is out there” is just too far beyond our intellectual capacity.  Before you call me out on this heretical thought, consider…

The UK’s Astronomer Royal, Sir Martin Rees, points out that “a chimpanzee can’t understand quantum mechanics.”  Richard Feynman famously claimed that nobody understands quantum mechanics, but, as Michael Brooks points out in his recent article “The limits of knowledge: Things we’ll never understand,” the deeper point stands: no matter how hard it might try, a chimp simply lacks the capacity to comprehend something like quantum mechanics.  Combine that realization with anthropologists’ estimate that the most recent common ancestor of humans and chimps (aka the CHLCA) lived about 6 million years ago, and we can draw a startling conclusion:

There are certainly things about our universe and reality that are completely beyond our ability to comprehend!

My reasoning is as follows. Chimps are certainly at least as intelligent as the CHLCA; otherwise evolution would have been working in reverse.  As an upper bound, let’s say that the CHLCA and chimps were equally intelligent.  Then the CHLCA was certainly not able to comprehend QM (nor relativity, nor even Newtonian physics), but upon evolving into humans over roughly 6 million years, our new species was able to comprehend these things.  Six million years represents about 0.04% of the entire age of the universe (according to what we think we know).  That means that for 99.96% of the total time that the universe and life were evolving up to the current point, the most advanced creature on Earth was incapable of understanding the most rudimentary concepts about the workings of reality and the universe.  And yet, are we to suppose that in the last 0.04% of that time, a species has evolved that can understand everything?  I’m sure you see how unlikely that is.
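The arithmetic is easy to check (using 13.8 billion years for the age of the universe, which is my assumption, along with the roughly 6-million-year CHLCA estimate):

```python
# Fraction of the universe's history since the chimp-human common ancestor.
chlca_years = 6e6            # years since the CHLCA (anthropologists' estimate)
universe_years = 13.8e9      # assumed current age of the universe, years

fraction = chlca_years / universe_years
print(f"{fraction:.2%} of the universe's age")           # ~0.04%
print(f"{1 - fraction:.2%} with no QM-capable species")  # ~99.96%
```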

What if our universe was intelligently designed?  The same argument would probably hold.  For some entity to be capable of creating a universe that continues to baffle us no matter how much we think we understand, that entity must be far beyond our intelligence, and therefore has utilized, in the design, concepts that we can’t hope to understand.

Our only chance of being supremely capable of understanding our world would lie in the programmed reality model.  If the creator of our simulation were us, or even an entity a little more advanced than us, it could lead us along a path of exploration and knowledge discovery that always seems to be just slightly beyond our grasp.  Doesn’t that idea feel familiar?


Is LIDA, the Software Bot, Really Conscious?

Researchers from the Cognitive Computing Research Group (CCRG) at the University of Memphis are developing a software bot known as LIDA (Learning Intelligent Distribution Agent) with what they believe to be cognition or conscious processes.  That belief rests on the idea that LIDA is modeled on a software architecture that mirrors what some believe to be the process of consciousness, called Global Workspace Theory (GWT).  For example, LIDA follows a repetitive looping process: taking in sensory input, writing it to memory, kicking off a process that scans this data store for recognizable events or artifacts, and, if something is recognized, broadcasting it to the global workspace of the system, in a manner similar to the GWT model.  Timings are even tuned to more or less match human reaction times and processing delays.
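To show how simple such a loop can be, here is a toy sketch of a GWT-style cycle (entirely my own construction, not LIDA’s actual code): sense, store, scan for known patterns, and broadcast matches to a workspace.

```python
# Toy GWT-style cognitive cycle: sense -> store -> scan -> broadcast.
KNOWN_PATTERNS = {"face", "loud noise"}   # things the "agent" can recognize

memory = []       # perceptual buffer
workspace = []    # the "global workspace" where broadcasts land

def cognitive_cycle(sensory_input):
    memory.append(sensory_input)                             # write to memory
    recognized = [m for m in memory if m in KNOWN_PATTERNS]  # scan the store
    for event in recognized:
        if event not in workspace:
            workspace.append(event)                          # broadcast it

for stimulus in ["static", "face", "loud noise"]:
    cognitive_cycle(stimulus)

print(workspace)   # the recognized events, now globally available
```

A few lines of looping and pattern matching reproduce the described architecture, which is exactly the point: matching a process diagram is cheap, and says nothing about consciousness.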

I’m sorry guys, but just because you have designed a system to model the latest theory of how sensory processing works in the brain does not automatically make it conscious.  I could write an Excel macro with forced delays and process flows that resemble GWT.  Would that make my spreadsheet conscious?  I don’t THINK so.  Years ago I wrote a trading program that utilized the brain model du jour, known as neural networks.  Too bad it didn’t learn how to trade successfully, or I would be golfing tomorrow instead of going to work.  The fact is, it was entirely deterministic, as is LIDA, and there is no more reason to suspect that it was conscious than an assembly line at an automobile factory.

Then again, the standard scientific view (at least that held by most neuroscientists and biologists) is that our brain processing is also deterministic, meaning that, given the exact set of circumstances two different times (same state of memories in the brain, same set of external stimuli), the resulting thought process would also be exactly the same.  As such, so they would say, consciousness is nothing more than an artifact of the complexity of our brain.  An artifact?  I’m an ARTIFACT?

Following this reasoning to its logical conclusion, one would have to grant that every living thing, including bacteria, has consciousness. In that view of the world, it simply doesn’t make sense to assert that there might be some threshold of nervous system complexity above which an entity is conscious and below which it is not.  It is just a matter of degree, and you can only argue about aspects of consciousness in a purely probabilistic sense; e.g., “most cats probably do not ponder their own existence.”  Taking this thought process a step further, one has to conclude that if consciousness is simply a by-product of neural complexity, then a computer that is equivalent to our brains in complexity must also be conscious.  Indeed, this is the position of many technologists and futurists who ponder artificial intelligence, such as Ray Kurzweil.  And if this is the case, by logical extension, the simplest of electronic circuits is also conscious, to a degree proportional to its complexity, just as a bacterium is conscious in proportion to a human.  So even an electronic circuit known as a flip-flop (or bistable multivibrator), which consists of a few transistors and stores a single bit of information, is conscious.  I wonder what it feels like to be a flip-flop?

Evidence abounds that there is more to consciousness than a complex system.  For one particularly well-researched data point, check out Pim van Lommel’s book “Consciousness Beyond Life.”  Or my book “The Universe – Solved!”

My guess is that consciousness consists of the combination of a soul and a processing component, like a brain, that allows that soul to experience the world.  This view is very consistent with that of many philosophers, mystics, and shamans throughout history and throughout the world (a confluence of consistent yet independent thought that is in itself very striking).  If true, a soul may someday make a decision to occupy a machine of sufficient complexity and design to experience what it is like to be the “soul in a machine.”  When that happens, we can truly say that the bot is conscious.  But it does not make sense to consider consciousness a purely deterministic emergent property.


Cold Fusion Heats Up

People generally associate the idea of cold fusion with electrochemists Stanley Pons and Martin Fleischmann.  However, experiments similar to the ones that led to their momentous announcement, and equally momentous downfall, were reported as far back as the 1920s.  Austrian scientists Friedrich Paneth and Kurt Peters reported the fusion of hydrogen into helium via a palladium mesh.  Around the same time, Swedish scientist J. Tandberg announced the same results from an electrolysis experiment using hydrogen and palladium.

Apparently, everyone forgot about those experiments when, in 1989, Stanley Pons and Martin Fleischmann of the University of Utah astonished the world with their announcement of a cold fusion experimental result.  Prior to this, it was considered impossible to generate a nuclear fusion reaction at anything less than the temperatures found at the core of the sun.  Standard nuclear reaction equations required temperatures in the millions of degrees to generate the energy needed to fuse light atomic nuclei into heavier elements, releasing more energy in the process than went into the reaction.  Pons and Fleischmann, however, claimed to have generated nuclear reactions at room temperature, producing excess energy from an electrolysis reaction with heavy water (deuterium oxide) and palladium, similar to the experiments of the 1920s.

When subsequent experiments initially failed to reproduce their results, they were ridiculed by the scientific community, even to the point of being driven to leave their jobs and their country and continue their research in France.  But since then, despite the fact that the cultish skeptic community declared that no one was able to repeat their experiment, nearly 15,000 similar experiments have been conducted, most of which have replicated cold fusion, including those done by scientists from Oak Ridge National Laboratory and the Russian Academy of Science.

According to a 50-page report on the recent state of cold fusion by Steven Krivit and Nadine Winocur, the effect has been reproduced at a rate of 83%.  “Experimenters in Japan, Romania, the United States, and Russia have reported a reproducibility rate of 100 percent.” (Plotkin, Marc J. “Cold Fusion Heating Up — Pending Review by U.S. Department of Energy.” Pure Energy Systems News Service, 27 March, 2004.)  In 2005, table top cold fusion was reported at UCLA utilizing crystals and deuterium and confirmed by Rensselaer Polytechnic Institute in 2006.  In 2007, a conference at MIT concluded that with 3,000+ published studies from around the world, “the question of whether Cold Fusion is real is not the issue.  Now the question is whether or not it can be made commercially viable, and for that, some serious funding is needed.” (Wired; Aug. 22, 2007)  Still, the mainstream scientific community covers their ears, shuts their eyes, and shakes their heads.

So now we have the latest demonstration of cold fusion, courtesy of Italian scientists Andrea Rossi and Sergio Focardi of the University of Bologna, who announced last month that they had developed a cold fusion device capable of producing 12,400 W of heat power from an input of just 400 W, an output-to-input ratio of 31.

The scientific basis for a cold fusion reaction will be discovered.  The only question is when.


WikiLeaks, Denial of Service Attacks, and Nanobot Clouds

The recent firestorm surrounding WikiLeaks reminds me of one of Neal Stephenson’s visions of the future, “The Diamond Age,” written back in 1995.  The web was only in its infancy, but Stephenson had already envisioned massive clouds of networked nanobots, some under the control of the government, some under the control of other entities.  Such nanobot swarms, also known as Utility Fogs, could be made to do pretty much anything: form a sphere of protection, gather information, inspect people and report back to a central server, or be commanded to attack each other.  One swarm under the control of one organization may be at war with another swarm under the control of another.  That is our future.  Nanoterrorism.

A distributed denial of service (DDoS) attack is a network attack on a particular server or internet node.  It is often carried out by having thousands of computers saturate the target machine with packet requests, making it impossible for the machine to respond to normal HTTP requests, effectively bringing it to its knees, inaccessible on the internet.  The attacks are often coordinated by a central source that takes advantage of networks of already compromised computers (aka zombie computers, usually unbeknownst to their owners) infected via malware.  On command, these botnets initiate their attack with clever techniques called Smurf attacks, Ping floods, SYN floods, and other scary-sounding events.  An entire underground industry has built up around botnets, some of which number in the millions of machines.  Botnets can be leased by anyone who knows how to access them and has a few hundred dollars.  As a result, an indignant group can launch an attack on, say, the WikiLeaks site.  And, in response, a WikiLeaks support group can launch a counterattack on its enemies, like MasterCard, Visa, and PayPal, for their plans to terminate service for WikiLeaks.  That is our present.  Cyberterrorism.

Doesn’t it sound a lot like the nanoterrorism envisioned by Stephenson?  Except it is still grounded in the hardware.  As I see it, the equation of the future is:

Nanoterrorism = Cyberterrorism + Microrobotics + Moore’s Law + 20 years.

Can’t wait!


Why Worry about ET, Stephen Hawking?

Famous astrophysicist Stephen Hawking made the news recently when he called for us to stop attempting to contact ET.  No offense to Dr. Hawking and other scientists who hold similar points of view, but I find the whole argument about dangerous ETs, to use a Vulcan phrase, “highly illogical.”

First of all, there is the whole issue of the ability to contact ET.  As I showed in my post “Could Gliesians be Watching Baywatch,” it is virtually impossible to communicate with any extraterrestrial civilization beyond our solar system without significant power and antenna gain.  The world’s most powerful radio astronomy dish, at Arecibo, has a gain of 60 dB, which means that it could barely detect a 100-kilowatt non-directional signal generated from a planet 20 light years away, such as Gliese 581g, and only if it were pointed right at it.  More to the point, what are the odds that such a civilization would be at the right level of technology to be communicating with us, using a technique that overlaps what we know?
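To give a feel for the scale involved, here is a back-of-the-envelope inverse-square sketch.  The numbers are my own assumptions, not from the post: an isotropic 100 kW transmitter at 20 light years, collected by a 305 m Arecibo-class dish with a 50% aperture efficiency.

```python
# Rough inverse-square link budget: power collected from a distant transmitter.
import math

P_tx = 100e3                          # transmitter power, watts (isotropic)
d = 20 * 9.461e15                     # 20 light years, in meters
flux = P_tx / (4 * math.pi * d**2)    # power flux at the receiver, W/m^2

dish_radius = 305 / 2                         # Arecibo-class dish, meters
A_eff = 0.5 * math.pi * dish_radius**2        # effective collecting area, m^2
P_rx = flux * A_eff                           # power collected, watts

print(f"flux at Earth: {flux:.3e} W/m^2")
print(f"power collected: {P_rx:.3e} W")
```

The collected power comes out around 10⁻²⁶ W, which illustrates why interstellar detection demands enormous antenna gain, narrow bandwidths, long integration times, and knowing exactly where and when to look.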

Using the famous Drake equation, N = R*·fp·ne·fl·fi·fc·L, with the following best estimates for the parameters: R* = 10/year, fp = .5, ne = 2, fl = .5, fi = .001 (highly speculative), fc = .01, and L = 50 (duration in years of the radio-transmitting period of a civilization), we get .0025 overlapping radio-wave civilizations per galaxy.  But if you then factor in the (im)probabilities of reaching those star systems (I used a megawatt of power into an Arecibo-sized radio telescope), the likelihood of another “advanced technology” civilization even developing radio waves, the odds that we happen to be pointing our radio telescope arrays at each other at the same time, and the odds that we are using the same frequency, we get a probability of 1.25E-22.  For those who don’t like scientific notation, how about .000000000000000000000125.  (Details will be in a forthcoming paper that I will post on this site.  I’ll replace this text with the link once it is up.)
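Multiplying the Drake parameters out is straightforward; a quick sketch using the estimates above:

```python
# Drake equation with the parameter estimates quoted above.
R_star = 10      # star formation rate, per year
f_p = 0.5        # fraction of stars with planets
n_e = 2          # habitable planets per planetary system
f_l = 0.5        # fraction of those where life develops
f_i = 0.001      # fraction that develop intelligence (highly speculative)
f_c = 0.01       # fraction that develop detectable communication
L = 50           # years a civilization spends transmitting radio

N = R_star * f_p * n_e * f_l * f_i * f_c * L
print(N)   # ~0.0025 overlapping radio-wave civilizations per galaxy
```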

So why is Stephen Hawking worried about us sending a message that gets intercepted by ET?  Didn’t anyone do the math?

But there is a second science/sci-fi meme that I also find highly illogical: that malevolent ETs may want to mine our dear old Earth for some mineral.  Really?  Are we to believe that ET has figured out how to transcend relativity, exceed the speed of light, and power a ship across the galaxy using technology far beyond our understanding, but still lacks the ability to master the control of the elements?  We have been transmuting elements for 70 years.  Even gold was artificially created, by bombarding mercury atoms with neutrons, as far back as 1941.  Gold could be created in an accelerator or nuclear reactor at any time, although making it economically practical might take a few more years.  However, if gold, or any particular element, were important enough to fly across the galaxy and repress another civilization for, then economics should not be an issue.  Simple nuclear technology can create gold far more easily than it can power a spaceship through space at near light speed.

Even if our space-traveling friends need something on Earth that can’t possibly be obtained through technology, would they really be so imperialistic as to invade and steal our resources?  From the viewpoint of human evolution, as technology and knowledge have developed, so have our ethical sensibilities and social behavior.  Of course, there is still “Jersey Shore” and “Jackass,” but by and large we have advanced our ethical values along with our technological achievements, and there is no reason to think that these wouldn’t also go hand in hand in any other civilization.

So while I get that science fiction needs to have a compelling rationale for ET invasion because it is a good story, I fail to understand the fear that some scientists have that extraterrestrials will actually get all Genghis Khan on us.

 

Rewriting the Past

“I don’t believe in yesterday, by the way.”
-John Lennon

The past is set in stone, right?  Everything we have learned tells us that you cannot change the past, 88-MPH DeLoreans notwithstanding.

However, it would probably surprise you to learn that many highly respected scientists, as well as a few out on the fringe, are questioning that assumption, based on real evidence.

For example, leading stem cell scientist Dr. Robert Lanza posits that the past does not really exist until properly observed.  His theory of Biocentrism says that the past is just as malleable as the future.

Specific experiments in quantum mechanics appear to support this conjecture.  In the “Delayed Choice Quantum Eraser” experiment, “scientists in France shot photons into an apparatus, and showed that what they did could retroactively change something that had already happened.” (Science 315, 966, 2007)

Paul Davies, renowned physicist from the Australian Centre for Astrobiology at Macquarie University in Sydney, suggests that conscious observers (us) can effectively reach back in history to “exert influence” on early events in the universe, including even the first moments of time.  As a result, the universe would be able to “fine-tune” itself to be suitable for life.

Prefer the Many Worlds Interpretation (MWI) of Quantum Mechanics to the Copenhagen one?  If that theory is correct, physicist Saibal Mitra from the University of Amsterdam has shown how we can change the past by forgetting.  Effectively, if the collective observers’ memory is reset prior to some event, the state of the universe becomes “undetermined” and can follow a different path from before.  Check out my previous post on that one.

Alternatively, you can disregard the complexities of quantum mechanics entirely.  The results of some macro-level experiments twist our perceptions of reality even more.  Studies by Helmut Schmidt, Elmar Gruber, Brenda Dunne, Robert Jahn, and others have shown, for example, that humans are actually able to influence past events (aka retropsychokinesis, or RPK), such as pre-recorded (and previously unobserved) random number sequences.

Benjamin Libet, a pioneering scientist in the field of human consciousness at the University of California, San Francisco, is well known for his controversial experiments that seem to show reverse causality, or that the brain demonstrates awareness of actions that will occur in the near future.  To put it another way, actions that occur now create electrical brain activity in the past.

And then, of course, there is time travel.  Time travel into the future is a fact: just ask any astronaut, all of whom have traveled tiny fractions of a second into the future as a side effect of high-speed travel.  Stephen Hawking predicts much more significant time travel into the future.  In the future.  But what about the past?  It turns out there is nothing in the laws of physics that prevents it.  Theoretical physicist Kip Thorne has described a theoretically workable time machine that could send you into the past.  And traveling to the past, of course, provides an easy mechanism for changing it.  Unfortunately, this requires exotic matter and a solution to the Grandfather paradox (MWI to the rescue again here).
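For scale, here is a special-relativity sketch with my own assumed numbers (roughly low-Earth-orbit speed for a six-month stay; general relativity partially offsets this at low altitude, which I ignore):

```python
# Back-of-the-envelope time dilation for an orbiting astronaut.
import math

c = 299_792_458.0        # speed of light, m/s
v = 7_700.0              # assumed orbital speed, m/s
t = 182.5 * 86_400       # assumed mission duration, seconds (~6 months)

gamma = 1 / math.sqrt(1 - (v / c) ** 2)
dt = t * (gamma - 1)     # how far "into the future" the astronaut travels

print(f"{dt * 1000:.2f} milliseconds ahead of the rest of us")
```

Even at orbital speeds the effect is tiny, a few milliseconds over half a year, which is why significant forward time travel would require speeds much closer to c.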

None of this is a huge surprise to me, since I question everything about our conventional views of reality.  Consider the following scenario in a massively multiplayer online role-playing game (MMORPG) or simulation.  The first time someone plays the game, or participates in the simulation, there is an assumed “past” to the construct of the game.  Components of that past may be found in artifacts (books, buried evidence, etc.) scattered throughout the game.  Let’s say that evidence reports that the Kalimdors and Northrendians were at war during the year 1999, but the evidence has yet to be found by a player.  A game patch could easily change the date to 2000, thereby changing the past, and no one would be the wiser.  But what if someone had found the artifact, thereby setting the past in stone?  That patch could still be applied, but it would only be effective if all players who had knowledge of the artifact were forced to forget.  Science fiction, right?  No longer, thanks to an emerging field of cognitive research.  Two years ago, scientists were able to erase selected memories in mice.  Insertion of false memories is not far behind.  This will eventually be perfected and applied to humans.
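The patch logic above can be sketched as a toy program (entirely my own construction): a rewrite of in-game history succeeds only while the artifact remains unobserved.

```python
# Toy model: a game patch can rewrite the "past" only if no player has seen it.
from dataclasses import dataclass

@dataclass
class Artifact:
    fact: str
    observed: bool = False

def apply_patch(artifact: Artifact, new_fact: str) -> bool:
    """Rewrite the artifact's history unless someone has already seen it."""
    if artifact.observed:
        return False          # the past is "set in stone"
    artifact.fact = new_fact
    return True

war = Artifact("Kalimdors and Northrendians at war in 1999")
assert apply_patch(war, "Kalimdors and Northrendians at war in 2000")  # unobserved: OK

war.observed = True           # a player finds the artifact
assert not apply_patch(war, "no war ever happened")  # too late to rewrite
```

The `observed` flag plays the role of the players’ memories; erasing those memories would amount to resetting the flag and reopening the past for patching.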

At some point in our future (this century), we will be able to snort up a few nanobots, which will archive our memories, download a new batch of memories to the starting state of a simulation, and run the simulation.  When it ends, the nanobots will restore our old memories.

Or maybe this happened at some point in our past and we are really living the simulation.  There is really no way to tell.

No wonder the past seems so flexible.


Jim and Craig Venter Argue over Who is more Synthetic: Synthia or Us?

So Craig Venter created synthetic life.  How cool is that?  I mean, really, this has been sort of a biologist’s holy grail for as long as I can remember.  Of course, Dr. Venter’s detractors are quick to point out that Synthia, the name given to this synthetic organism, was not really built from scratch, but rather assembled from sub-living components and injected into a cell where it could replicate.  Either way, it is a huge step in the direction of man-made life forms.  If I were to meet Dr. Venter, the conversation might go something like this:

Jim: So, Dr. Venter, help me understand how man-made your little creation really is.  I’ve read some articles that state that while your achievement is most impressive, the cytoplasm that the genome was transplanted to was not man made.

Craig: True dat, Jim.  But we all need an environment to live in, and a cell is no different.  The organism was certainly man made, even if its environment already existed.

Jim: But wait a minute.  Aren’t we all man-made?  Wasn’t that the message in those sex education classes I took in high school?

Craig: No, the difference is that this is effectively a new species, created synthetically.

Jim: So, how different is that from a clone?  Are they also created synthetically?

Craig: Sort of, but a clone isn’t a new species.

Jim: How about genetically modified organisms then?  New species created synthetically?

Craig: Yes, but they were a modification made to an existing living organism, not a synthetically created one.

Jim: What about that robot that cleans my floor?  Isn’t that a synthetically created organism?

Craig: Well, maybe, in some sense, but can it replicate itself?

Jim: Ah, but that is just a matter of programming.  Factory robots can build cars; why couldn’t they be programmed to build other factory robots?

Craig: That wouldn’t be biological replication, like cell division.

Jim: You mean, just because the robots are made of silicon instead of carbon?  Seems kind of arbitrary to me.

Craig: OK, you’re kind of getting on my nerves, robot-boy.  The point is that this is the first synthetically created biological organism.

Jim: Um, that’s really cool and all, but we can build all kinds of junk with nanotech, including synthetic meat, and little self-replicating machines.

Craig: Neither of which are alive.

Jim: Define alive.

Craig: Well, generally life is anything that exhibits growth, metabolism, motion, reproduction, and homeostasis.

Jim: So, a drone bee isn’t alive because it can’t reproduce?

Craig: Of course, there are exceptions.

Jim: What about fire, crystals, or the earth itself?  All of those exhibit your life-defining properties.  Are they alive?

Craig: Dude, we’re getting way off topic here.  Let’s get back to synthetic organisms.

Jim: OK, let’s take a different tack.  Physicist Paul Davies said that Google is smarter than any human on the planet.  Is Google alive?  What about computer networks that can reconfigure themselves intelligently?

Craig: Those items aren’t really alive because they have to be programmed.

Jim: Yeah, and what’s that little code in Synthia’s DNA?

Craig: Uhhh…

Jim: And how do you know that you aren’t synthetic?  Is it at all possible that your world and all of your perceptions could be completely under programmed control?

Craig: I suppose it could be possible.  But I highly doubt it.

Jim: Doubt based on what? All of your preconceived notions about reality?

Craig: OK, let’s say we are under programmed control.  So what?

Jim: Well, that implies a creator.  Which in turn implies that our bodies are a creation.  Which makes us just as synthetic as Synthia.  The only difference is that you created Synthia, while we might have been created by some highly advanced geek in another reality.

Craig: Been watching a few Wachowski Brothers movies, Jim?

Jim: Guilty as charged, Craig.
