Noise in Gravitational Wave Detector may be first experimental evidence of a Programmed Reality

GEO600 is a large gravitational wave detector located in Hanover, Germany.  Designed to be extremely sensitive to fluctuations in gravity, its purpose is to detect gravitational waves from distant cosmic events.  Recently, however, it has been plagued by inexplicable noise or graininess in its measurement results (see article in New Scientist).  Craig Hogan, director of Fermilab’s Center for Particle Astrophysics, thinks that the instrument has reached the limits of spacetime resolution and that this might be proof that we live in a hologram.  Drawing on physicists Leonard Susskind and Gerard ‘t Hooft’s holographic principle, which holds that our 3D reality may be a projection of processes encoded on the 2D surface of the boundary of the universe, he points out that, like a common hologram, the graininess of our projection may appear at scales much larger than the Planck length (10^-35 meters), such as 10^-16 meters.
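
For a feel for the numbers, here is a back-of-the-envelope sketch of the scaling (my own rough rendering of the argument, not Hogan's exact derivation): the transverse position blur accumulated over a path of length L works out to roughly the geometric mean of the Planck length and L.

```python
# Back-of-the-envelope estimate of the holographic noise scale:
# the blur over a path of length L is roughly sqrt(l_Planck * L).
# This is a sketch of the scaling argument, not Hogan's full calculation.

import math

PLANCK_LENGTH = 1.616e-35   # meters
geo600_arm = 600.0          # meters, GEO600 arm length

holographic_blur = math.sqrt(PLANCK_LENGTH * geo600_arm)
print(f"Estimated blur: {holographic_blur:.1e} m")  # ~1e-16 m, vastly coarser than 1e-35
```

With a 600-meter arm, that lands right at the ~10^-16 meter graininess mentioned above.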

Crazy?  Is it any stranger than living in 10 spatial dimensions, living in a space of parallel realities, invisible dark matter all around us, a reality that doesn’t exist unless observed, or any of a number of other mind-bending theories that most physicists believe?  In fact, as fans of this website are well aware, such experimental results are no surprise.  Just take a look at the limits of resolution in my Powers of 10 simulation in the Programmed Reality level: Powers of 10.  I arbitrarily picked 10^-21 meters, but the cutoff could really appear at any scale.
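
As a toy illustration of how cheap such a limit is to implement (the sketch below is mine, and the 10^-21 meter cutoff is just the arbitrary value from my simulation), a programmed reality only has to quantize coordinates to a minimum grid:

```python
# Toy illustration of a programmed resolution limit: every coordinate is
# snapped to a minimum grid spacing, so structure below that scale simply
# doesn't exist. The cutoff value here is arbitrary, as in my simulation.

GRID = 1e-21  # meters, the simulation's smallest representable scale

def render_position(x_meters: float) -> float:
    """Snap a position to the nearest grid point."""
    return round(x_meters / GRID) * GRID

# Distinct positions collapse onto the same grid point once their
# separation drops below the grid scale:
print(render_position(3.0e-22))   # 0.0
print(render_position(7.0e-22))   # 1e-21
print(render_position(1.4e-21))   # 1e-21 -- same "pixel" as the previous point
```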

If our universe is programmed, however, it is probably done in such a way as to be unobservable for the most part.  Tantalizing clues like GEO600 noise give us all something to speculate about.  But don’t be surprised if the effect goes away when the programmers apply a patch to improve the reality resolution for another few years.

Thanks to my photogenic cat, Scully, for providing an example of grainy reality…
[Image: Scully, at various resolutions]

The Singularity Cometh? Or not?

There is much talk these days about the coming Singularity.  We are about 37 years away, according to Ray Kurzweil.  For some, the prospect is exhilarating – enhanced mental capacity, the ability to experience fantasy simulations, immortality.  For others, the specter of the Singularity is frightening – AIs run amok, Terminator-style.  Then there are those who question the entire idea.  A lively debate on our forum triggered this post as we contrasted the positions of transhumanists (aka cybernetic totalists) and singularity skeptics.

For example, Jaron Lanier’s “One Half of a Manifesto,” published in Wired and on edge.org, suggests that our inability to make comparable advances in software will, at least for now, prevent the Singularity from happening at a Moore’s Law pace.  One great quote from his demi-manifesto: “Just as some newborn race of superintelligent robots are about to consume all humanity, our dear old species will likely be saved by a Windows crash. The poor robots will linger pathetically, begging us to reboot them, even though they’ll know it would do no good.”  Kurzweil countered with a couple of specific examples of successful software advances, such as speech recognition (which is probably due more to algorithm development than to software engineering techniques).

I must admit, I am also disheartened by the slow pace of software advances.  Kurzweil is not the only guy on the planet to have spent his career living and breathing software and complex computational systems.  I’ve written my share of gnarly assembly code, neural nets, and trading systems.  But it seems to me that it takes almost as long to open a Word document, boot up, or render a 3D object on today’s blazingly fast PCs as it did 20 years ago on a machine running at less than 1% of today’s clock rate.  Kurzweil claims that we have simply forgotten: “Jaron has forgotten just how unresponsive, unwieldy, and limited they were.”

So, I wondered, who is right?  Are there objective tests out there?  I found an interesting article in PC World that compared the boot-up time of a 1981 PC to that of a 2001 PC.  Interestingly, the 2001 machine was over 3 times slower (51 seconds to boot) than its 20-year-older predecessor (16 seconds).  My 2007 Thinkpad – over 50 seconds.  Yes, I know that Vista is much more sophisticated than MS-DOS and therefore consumes much more disk and memory and takes that much more time to load.  But are those 3D spinning doodads really helping me work better?

Then I found a benchmark comparison of the performance of 6 different Word versions over the years.  Summing the times for 5 typical operations, the fastest version was Word 95 at 3 seconds.  Word 2007 clocked in at 12 seconds (in this test, they all ran on the same machine).

In summary, software has become bloated.  Developers don’t think about performance as much as they used to because memory and CPU cycles are cheap.  Instead, the trend in software development is layers of abstraction and frameworks on top of frameworks, as the sketch below illustrates.  Developers have become increasingly specialized (“I don’t do Tiles, I only do Struts”) and very few get the big picture.
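
To make the layering point concrete, here is a toy sketch (mine; the layer count of 50 is arbitrary) of how pure pass-through wrappers alone tax performance, before any real framework logic is even added:

```python
# Toy demonstration of abstraction-layer overhead: the same addition,
# called directly and through 50 do-nothing wrapper layers.
# The layer count and resulting slowdown are illustrative only.

import timeit

def add(a, b):
    return a + b

def wrap(fn):
    def layer(a, b):
        return fn(a, b)  # pass-through "framework" layer
    return layer

layered_add = add
for _ in range(50):
    layered_add = wrap(layered_add)

direct = timeit.timeit(lambda: add(2, 3), number=100_000)
layered = timeit.timeit(lambda: layered_add(2, 3), number=100_000)
print(f"direct:  {direct:.3f} s")
print(f"layered: {layered:.3f} s  (~{layered / direct:.0f}x slower)")
```

Each layer here does literally nothing; real frameworks do far more per call, which is exactly the point.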

What does this have to do with the Singularity?  Simply this: with some notable exceptions, software development has not even come close to following Moore’s Law in terms of performance or reliability.  Yet the Singularity predictions depend on it.  So don’t sell your humanity stock anytime soon.
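
The size of the gap is easy to estimate. A back-of-the-envelope comparison (assuming an 18-month doubling period, one common reading of Moore's Law, and using the Word benchmark figures cited above):

```python
# Rough arithmetic behind the claim: hardware gains vs. the Word benchmark.
# Assumes transistor counts double every 18 months (one common reading of
# Moore's Law); the software figures are from the benchmark cited above.

years = 2007 - 1995                 # Word 95 -> Word 2007
hardware_gain = 2 ** (years / 1.5)  # ~256x more transistors
software_gain = 3 / 12              # 3 s -> 12 s on the same machine

print(f"Expected from Moore's Law: ~{hardware_gain:.0f}x faster")
print(f"Observed in the Word test:  {software_gain:.2f}x (i.e., 4x slower)")
```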


[Image: Mac Guy, PC Guy]

Would it really be that bad to find life in our Solar System?

Nick Bostrom wrote an interesting article for the MIT Technology Review explaining why he hopes that the search for life on Mars finds nothing. In it, he reasons that inasmuch as we haven’t come across any signs of intelligent life in the universe yet, advanced life must be rare. But since the conditions for life aren’t particularly stringent, there must be a “great filter” that prevents life from evolving beyond a certain point. If we are indeed alone, that probably means we have already made it through the filter. But if life is found nearby, as in our own solar system, then the filter is probably ahead of us, or at least ahead of the evolutionary stage of the life that we find. And the more advanced the life form we find, the more likely it is that the filter still lies ahead of us, which implies ultimate doom for our species.
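
Bostrom's argument is, at heart, a Bayesian update, and a toy version of it (all of the probabilities below are made-up numbers for illustration, not figures from his article) shows why finding even simple life nearby would shift the odds toward the filter being ahead of us:

```python
# Toy Bayesian sketch of the Great Filter update. All probabilities are
# made-up illustrative numbers, not estimates from Bostrom's article.

prior_filter_behind = 0.5   # filter is early (e.g., abiogenesis is hard)
prior_filter_ahead = 0.5    # filter is late (still in front of us)

# If the filter is early, independent life next door is very unlikely;
# if it is late, life starts easily, so finding it nearby is likely.
p_mars_life_if_behind = 0.01
p_mars_life_if_ahead = 0.50

evidence = (p_mars_life_if_behind * prior_filter_behind +
            p_mars_life_if_ahead * prior_filter_ahead)
posterior_ahead = p_mars_life_if_ahead * prior_filter_ahead / evidence
print(f"P(filter ahead | life on Mars) = {posterior_ahead:.2f}")  # ~0.98
```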

But I wonder about some of the assumptions in this argument. He argues that intelligent ETs must not exist because, if they did, they most certainly would have colonized the galaxy via von Neumann probes by now, and yet we observe no trace of them. It seems to me, however, that it is entirely plausible that a sufficiently advanced civilization could be effectively cloaked from a far less advanced one. Mastery of some of those other 6 or 7 spatial dimensions that string theory predicts comes to mind. Or invisibility via some form of electromagnetic cloaking. And those are only early-21st-century ideas. Imagine the possibilities for invisibility a couple hundred years from now.

Then there is the programmed reality model. If the programmers placed multiple species in the galaxy for “players” to inhabit, it would certainly not be hard to keep some of them from interacting with each other, e.g. until the lesser civilization proves its ability to play nicely. Think about how some virtual reality games allow players to walk through walls. It is a simple matter to maintain multiple domains of existence in a single programmed construct, as the sketch below shows.  More support for the programmed reality model?…
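
A minimal sketch of the idea (entirely hypothetical; the names and structure are mine, loosely modeled on how a game server might gate interactions between players):

```python
# Minimal sketch of "multiple domains of existence in one construct":
# entities only perceive and affect others tagged with the same domain.
# Entirely hypothetical; names and structure are illustrative.

from dataclasses import dataclass

@dataclass
class Entity:
    name: str
    domain: str  # which "layer of reality" the entity inhabits

def can_interact(a: Entity, b: Entity) -> bool:
    """Entities interact only within a shared domain."""
    return a.domain == b.domain

human = Entity("human observer", domain="earth")
neighbor = Entity("advanced ET", domain="cloaked")

print(can_interact(human, neighbor))  # False: same space, different domain
neighbor.domain = "earth"             # the programmers "promote" the player
print(can_interact(human, neighbor))  # True: now mutually observable
```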

(What do you think about the possibilities of life elsewhere? Take our polls!)

[Image: Martian]