Navigating the Quantum Froth

Evidence for Programmed Reality is starting to pour in from all fields.  The latest comes from gamma-ray imaging of deep space.  Here’s the deal:

Extremely high energy photons are known as gamma rays and are generated in really cool places like CERN and the supermassive black holes that power galaxies.  Gamma rays of cosmological origin tend to come in bursts, and there are special telescopes, such as MAGIC (the Major Atmospheric Gamma-ray Imaging Cherenkov Telescope), that detect and measure these bursts.  According to all known laws of physics, all photons, no matter their energy level, travel at exactly the same speed, namely the speed of light.  The problem is that several gamma-ray detectors have noticed that gamma rays from distant galaxies arrive on earth at slightly different times, which makes no sense.

Unless you consider that space is quantized.  Then the photons have to work their way through the quantum “froth,” and the low-energy photons can do so more easily than the high-energy ones, much like radio waves through a low-pass filter.  So says Italian physicist Giovanni Amelino-Camelia.  A couple of references on this theory include an FQXi article and a recent article from New Scientist.

The reason this effect isn’t normally noticed is that the influence of quantized spacetime is so small that conventional experiments will not demonstrate its impact.  However, as we probe deeper into space and increase the sensitivity of our instruments, we ultimately get to a point where we measure things demonstrating that the status quo in physics is just an approximation, much as Newtonian physics is just an approximation of relativistic physics at slow speeds, or of quantum mechanics at large scales.  The recent quantization noise in the GEO600 gravity wave detector is a case in point.  Because it is the most sensitive instrument of its kind, it has reached a resolution limit that may indicate the granularity of the universe.  With MAGIC, a similar situation exists.  Because it is highly sensitive, it can detect signals whose origins are so far away that propagation deviations have a vast region of space over which to accumulate.  The 4-minute anomaly that MAGIC observed built up over 500 million light years.  That means it is detecting a deviation of about 1 part in 65,000,000,000,000 (65 trillion), which apparently is enough to break known laws of physics.
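For anyone who wants to check that figure, the arithmetic takes only a few lines (a quick calculation of my own, in Python):

# Quick check of the "1 part in 65 trillion" figure: light-travel time in minutes
# over 500 million light years, compared to the 4-minute lag.
distance_light_years = 500e6
travel_time_minutes  = distance_light_years * 365.25 * 24 * 60   # one light year takes one year to cross
delay_minutes        = 4
print(f"1 part in {travel_time_minutes / delay_minutes:.1e}")    # ~6.6e13, right around the 65 trillion quoted above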

I’m interested in this because the underlying reason for this may very well be the quantization of space.  If so, this and the GEO600 experiments are the first to detect it.  And, for anyone who hasn’t read “The Universe – Solved!” or meandered through this website, I ask the question:

Why might reality be quantized and not continuous?

It takes an infinite amount of resources to create a continuous reality, but a finite amount to create a quantized reality.  By resources, I refer to bits, the information that it takes to model reality.  In order to program a virtual reality, there must be quantization.  It is impossible to develop a program with unlimited resolution.  So the very fact that our reality is quantized may be considered strong evidence that reality is programmed.
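To put a rough number on the “finite” side of that statement, here is a toy calculation of my own, using round figures: pinning down a single coordinate across the observable universe at Planck-length resolution takes only a couple of hundred bits, whereas a truly continuous coordinate would require an infinite number of them.

import math

# Toy calculation (my own round numbers): bits needed to specify one coordinate
# across the observable universe at Planck-length resolution.
universe_diameter = 8.8e26      # meters, rough diameter of the observable universe
planck_length     = 1.6e-35     # meters
cells = universe_diameter / planck_length
print(round(math.log2(cells)), "bits per axis")   # about 205 bits; a continuum would need infinitely many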

What other reason could there be?

 

Gravity is Strange – Unless you understand Programmed Reality

Physicists tell us that gravity is one of the four fundamental forces of nature.  And yet it behaves quite differently than the other three.  A New Scientist article breaks down the oddities, a few of which are reproduced here:

– Gravity only pulls.  It doesn’t appear to have an opposing, repulsive counterpart, as the other forces do.  Notwithstanding the possibility that dark energy is an example of “opposite polarity” gravity, possibly due to unseen dimensions, there is no solid evidence of such a counterpart, as there is for all the other forces.

– The strengths of the other forces are comparable in magnitude, while gravity checks in at some 40 orders of magnitude weaker.

– The fine-tuned universe, a favorite topic of this site, includes some amazing gravity-based characteristics.  The early universe’s expansion rate and its gravitational strength had to balance to within 1 part in 1,000,000,000,000,000 in order for life to form.

The Anthropic Principle explains all this via a combination of the existence of zillions (an uncountably large number) of parallel universes and the idea that we can only exist in the one where all the variables line up perfectly for matter and life to form.  But that seems to me to be a pretty complex argument, with a few embedded leaps of faith that make most religions look highly logical in comparison.

Then there is the Programmed Reality theory, which, as usual, offers a perfect explanation without the need for the hand-waving Anthropic Principle and the “Many Worlds” interpretation of quantum mechanics.  Gravity is not like the other forces, so let’s not keep trying to “force” it to be (pardon the pun).  Instead, it is there to keep us grounded on the planet on which we play out our reality, offering just enough “pull” to keep every fly ball from sailing out of the stadium (regardless of the illegal substance abuse of the hitter) and to make kite flying a real possibility, while at the same time being weak enough to allow basketball players to dunk and planes to fly, and to enable a large number of other enriching activities.  Our scientists will continue to investigate the nature of gravity via increasingly complex projects like the LHC, unpeeling the layers of complexity that the programmers put in place to keep scientific endeavor, research, and employment moving forward.

Newton's apple  Warped spacetime

Non-locality Explained!

A great article in Scientific American, “A Quantum Threat to Special Relativity,” is well worth the read.

Locality in physics is the idea that things are only influenced by forces that are local, or nearby.  The water boiling on the stovetop does so because of the energy imparted from the flame beneath it.  Even the sounds coming out of your radio are decoded from the electromagnetic wave arriving at the antenna, which has been propagating from the radio transmitter at the speed of light.  Nothing, we all assume, can influence anything remote without such a chain of local disturbances, which according to Einstein cannot propagate faster than the speed of light.

However, says quantum mechanics, there is something called entanglement.  No, not the kind you had with Becky under the bleachers in high school.  This kind of entanglement says that particles that once “interacted” are forever entangled, whereby their properties are reflected in each other’s behavior.  For example, take two particles that came from the same reaction and separate them by galactic distances.  What one does, the other will follow.  This has been demonstrated over a distance of at least 18 km and seems to violate Einstein’s theory of Special Relativity.

Einstein, of course, took issue with this whole concept in his famous EPR paper, preferring to believe that “hidden variables” were responsible for the effect.  But in 1964, physicist John Bell developed a mathematical proof that no local theory can account for all of quantum mechanics’ experimental results.  In other words, the world is non-local.  Period.  It is as if, says the SciAm article, “a fist in Des Moines can break a nose in Dallas without affecting any other physical thing anywhere in the heartland.”  Alain Aspect later performed convincing experiments that demonstrated this non-locality.  Forty-five years after John Bell’s proof, scientists are coming to terms with the idea that the world is non-local and special relativity has limitations.  Both ideas are mind-blowing.

But, as usual, there are a couple of clever paradigms that get around it all, each of which is equally mind-blowing.  In one, our old friend the “Many Worlds” theory, zillions of parallel universes are spawned every second, which accounts for the seeming non-locality of reality.  In the other, “history plays itself out not in the three-dimensional spacetime of special relativity but rather this gigantic and unfamiliar configuration space, out of which the illusion of three-dimensionality somehow emerges.”

I have no problem explaining all of these ideas via programmed reality.

Special Relativity has to do with our senses, not with reality.  True simultaneity is possible because our reality is an illusion.  And there is no speed limit in the truer underlying construct.  So particles have no problem being entangled.

Many Worlds can be implemented by multiple instances of reality processes.  Anyone familiar with computing can appreciate how instances of programs can be “forked” (in Unix parlance) or “spawned” (Windows, VMS, etc.).  You’ve probably even seen it on your buggy Windows PC, when instances of browsers keep popping up like crazy and you can’t kill the tasks fast enough, and you end up either doing a hard shutdown or waiting until the little bastard blue-screens.  Well, if the universe is just run by a program, why can’t the program fork itself whenever it needs to, explaining all of the mysteries of QM that can’t be explained by wave functions?
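For the computing-inclined, here is what forking actually looks like, as a minimal sketch (Python on a Unix-like system; the “measurement” framing is purely my own illustration, not a physics simulation):

import os

# Minimal fork sketch: one process becomes two, and each carries on with a
# different "outcome" of the same event.  (Unix-like systems only; Windows
# would spawn a new process instead.)
pid = os.fork()
if pid == 0:
    outcome = "spin up"        # child process: one branch of the measurement
else:
    outcome = "spin down"      # parent process: the other branch
print("universe instance", os.getpid(), "records:", outcome)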

And then there is “configuration space.”  Nothing more complex than multiple instances of the reality program running, with the conscious entity having the ability to move between them, experiencing reality and all the experimental mysteries of Quantum Mechanics.

Hey physicists – get your heads out of the physics books and start thinking about computer science!

(thanks to Poet1960 for allowing me to use his great artwork)

Non-locality explained

Change the Past, Change the Future Simply by Forgetting

Here’s an interesting idea.  To avoid an impending disaster, all you have to do is forget your past.  So says physicist Saibal Mitra at the University of Amsterdam.  Even changing the past seems to be possible, believe it or not.

His idea is predicated on accepting our old friend, the Everett interpretation of Quantum Mechanics, aka the Many Universes theory.  According to Mitra, if the collective observers’ memory is reset prior to a cataclysmic event, such as a species-ending asteroid impact, the state of the universe becomes “undetermined.”  As a result, it has an equal likelihood of following any of the many subsequent paths, most of which should have nothing to do with an asteroid impact.  And so, by selectively forgetting our past, we can avoid certain doom by starting with a clean slate of future outcomes.  See this New Scientist article.

There is something unsettling about the logic, but his paper seems to be on firm footing: http://arxiv.org/abs/0902.3825.  And the implications are fascinating.  Not happy with how last year’s Super Bowl turned out?  Keep a single copy of the event, erase everyone’s memory, replace all archived bits of history relating to the game, and then we can all sit back and watch the recording again.  Mitra says that if we do that, there’s a good chance Arizona will win.  Watching the same tape!  Well, maybe not the same tape.  Because once the universe became undetermined again, the physical tape could have encoded any number of outcomes.

This is vaguely reminiscent of “Last Thursdayism,” which is one of the possible aspects of Programmed Reality.  Once the universe is reset from an observational standpoint, we would never know the difference, and an entirely different future course of events is possible.  If you make the restart point somewhere in our current past, then the recent past can be changed too.  Programmed Reality explains it all!

Future, Past, Present

Noise in Gravity Wave Detector may be first experimental evidence of a Programmed Reality

GEO600 is a large gravitational wave detector located in Hanover, Germany.  Designed to be extremely sensitive to fluctuations in gravity, its purpose is to detect gravitational waves from distant cosmic events.  Recently, however, it has been plagued by inexplicable noise or graininess in its measurement results (see article in New Scientist).  Craig Hogan, director of Fermilab’s Center for Particle Astrophysics, thinks that the instrument has reached the limits of spacetime resolution and that this might be proof that we live in a hologram.  Using physicists Leonard Susskind and Gerard ‘t Hooft’s theory that our 3D reality may be a projection of processes encoded on the 2D surface of the boundary of the universe, he points out that, like a common hologram, the graininess of our projection may be at much larger scales than the Planck length (10^-35 meters), such as 10^-16 meters.
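For the curious, here is my own back-of-envelope version of that argument, with round numbers and big simplifications (not Hogan’s actual calculation): count the Planck-sized cells on the 2D boundary of the observable universe, spread that much information through the 3D volume, and see how coarse the resulting “voxels” come out.

import math

# Back-of-envelope holographic graininess (my own rough numbers and simplifications):
# count the Planck-area patches on the 2D boundary, then ask how big each 3D "voxel"
# must be if that is all the information available to fill the volume.
planck_length = 1.6e-35                                   # meters
R = 4.4e26                                                # rough radius of the observable universe, meters

boundary_cells = 4 * math.pi * R**2 / planck_length**2    # Planck-area patches on the boundary
volume         = (4.0 / 3.0) * math.pi * R**3             # cubic meters
voxel_size     = (volume / boundary_cells) ** (1.0 / 3.0)

print(f"{voxel_size:.1e} m")   # ~3e-15 m: far coarser than the Planck length, same ballpark as the 10^-16 m above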

Crazy?  Is it any stranger than living in 10 spatial dimensions, living in a space of parallel realities, invisible dark matter all around us, reality that doesn’t exist unless observed, or any of a number of other mind-bending theories that most physicists believe?  In fact, as fans of this website are well aware, such experimental results are no surprise.  Just take a look at the limits of resolution in my Powers of 10 simulation in the Programmed Reality level: Powers of 10.  I arbitrarily picked 10^-21 meters, but it could really be any scale where it happens.

If our universe is programmed, however, it is probably done in such a way as to be unobservable for the most part.  Tantalizing clues like GEO600 noise give us all something to speculate about.  But don’t be surprised if the effect goes away when the programmers apply a patch to improve the reality resolution for another few years.

Thanks to my photogenic cat, Scully, for providing an example of grainy reality…
Scully, various resolutions

Does the Ethane lake on Titan support the abiotic oil theory?

Although shallow oil wells were drilled in China as early as the 4th century, the first commercial oil well was drilled in Canada in 1858, at the height of the industrial revolution.  Since then, our use of and reliance upon oil has skyrocketed.  Also since then, there has been a continuous debate on the origin of oil.  In one corner, weighing in at 25 billion barrels a year, we have the biogenic theory, aka dead plants and animals.  In the other corner, weighing in at 900 billion gallons a year, we have the abiotic theory, aka chemical reactions inside the Earth.

The “fossil fuel” theory was first proposed by Russian scientist Mikhailo Lomonosov in 1757, who suggested that the bodies of animals from prehistoric times were buried in sediments and transformed into hydrocarbons by extreme pressure and temperature over millions of years.  The argument is supported by sound biochemical processes, such as catagenesis.  In addition, the presence of organic pollen grains in petroleum deposits implies (but does not prove) organic origin.

The abiogenic or abiotic theory actually has its origins in the 1800s, when it was proposed by French chemist Marcellin Berthelot and Russian chemist Dmitri Mendeleev.  According to their theory, hydrocarbons are primordial in origin and were formed by non-biological processes in the earth’s crust and mantle.  The theory received a modern boost from Russian geologist Kudryavtsev, studying Canadian oil sources in the 1950s, and Ukrainian scientist Chekaliuk, based on thermodynamic calculations in the 1960s, who both arrived at the same conclusion.  The late, esteemed planetary scientist Thomas Gold of Cornell University (from whom I once took a course in astronomical theories) added to the evidence in his book “The Deep Hot Biosphere.”  The theory has also attained laboratory support via experiments at Gas Resources Corporation in Houston, Texas, which produced octane and methane by subjecting marble, iron oxide, and water to temperature and pressure conditions similar to those 60 miles below the surface of the earth.  Also, deep drilling around the world has discovered oil at depths and in places where there should never have been biological remains.  Referring to natural gas wells drilled by the GHK Company in Oklahoma at 30,000 feet and Japanese wells at 4,300 meters, Dr. Jerome Corsi (a political scientist with a Ph.D. from Harvard University) noted:

“Even those who might stretch to argue that even if no dinosaurs ever died in sedimentary rock that today lies 30,000 feet below the surface, might still argue that those levels contain some type of biological debris that has transformed into natural gas. That argument, a stretch at 30,000 feet down, is almost impossible to make for basement structure bedrock. Japan’s Nagaoka and Niigata fields produce natural gas from bedrock that is volcanic in nature. What dinosaur debris could possibly be trapped in volcanic rock found at deep-earth levels?”

Some oil reserves even seem to have the ability to refill themselves automatically, like a drink at a burger joint.  The Gulf of Mexico oil field Eugene Island 330, for example, saw its production drop from 15,000 barrels a day in 1973 to 4,000 barrels a day in 1989, then suddenly and spontaneously reverse; by 1999 it was pumping 13,000 barrels a day of a “different aged” crude.  In fact, according to Christopher Cooper of the Wall Street Journal, “between 1976 and 1996, estimated global oil reserves grew 72%, to 1.04 trillion barrels.”  Considering the doubling of reserves in the Middle East alone, University of Tulsa professor Norman Hyne noted that “it would take a pretty big pile of dead dinosaurs and prehistoric plants to account for the estimated 660 billion barrels of oil in the region.”

The argument is all very interesting and gets quite political as one might imagine.  But my interest revolves more around the basic question of why oil is even there at all.  Both sides propose some fairly complex theories to account for the very existence of petroleum, let alone its uncanny ability to refill known reserves automatically.  Doesn’t it almost seem like it was placed there just for our use? (see much more on Programmed Reality elsewhere on this site)

And now, there is the fact that some hydrocarbons, like methane, are known to occur throughout the solar system on supposedly lifeless worlds.  Take, for example, the most recent announcement in “Nature” and “Scientific American” that a Lake Ontario-sized lake has been discovered on Saturn’s moon Titan that is composed of hydrocarbons, specifically liquid ethane.  By some estimates, the contents of this lake could be equivalent to as much as 9 trillion barrels of oil.  Even NASA suggests that Titan could have “hundreds of times more liquid hydrocarbons than all the known oil and natural gas reserves on Earth.”

Anybody see anything wrong with this picture?  Were there dinosaurs on Titan?

Doubtful!

Therefore, it seems to me, Titan gives the abiotic theory of oil a fairly sizeable boost.

(apologies to those who have read my book, “The Universe – Solved!”, as much of the background on this topic comes verbatim therefrom)

Titan

Roger Penrose Agrees with Me: 2+2 may not = 4!

One of the sections of “The Universe – Solved!” that generated a bit of controversy was my assertion that there is really nothing that we can know with conviction to be true.  An excerpt:

“2+2=4?  Not in Base 3, where 2+2=11.  In Base 10 (or any base >4), 2+2=4 by convention, but only in an abstract way, and not necessarily always true in the real world.  If you add 2 puddles of water to 2 puddles of water, you still have 2 (albeit larger) puddles of water.  For a more conventional example, a 2-mile straight line laid end-to-end with another 2-mile straight line will not add up to exactly 4 miles in length due to relativity and the curvature of space-time in all locales.  Therefore, 2+2=4 can not be universally true.”

In addition, you have no way of knowing whether the convention that 2+2=4 is only true in the false reality that we think we are in, but not in the real one.  Again, from the book: “So, maybe all we can know for sure is what is happening to us at this exact instant.  Then again, how do we know that we aren’t in a dream right now???  So, the set of things that are 100% true is simply the null set!”
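Incidentally, anyone who wants to check the base-3 point from the excerpt can do it in a few lines (a throwaway snippet of mine, not from the book):

def to_base_3(n):
    # write a non-negative integer in base 3 notation
    digits = ""
    while n:
        digits = str(n % 3) + digits
        n //= 3
    return digits or "0"

print(to_base_3(2 + 2))   # -> "11": base 3 writes the sum of 2 and 2 as "11"
print(int("11", 3))       # -> 4: reading "11" back in base 3 gives the quantity base 10 calls 4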

Some readers have argued with these assertions.

So, imagine my pleasure when I read the following quote from esteemed mathematician and physicist Roger Penrose in the July 26 – August 1 issue of New Scientist magazine: “Do we know for certain that 2 plus 2 equals 4?  Of course we don’t.  Maybe every time everybody in the whole world has ever done that calculation and reasoned it through, they’ve made a mistake.  Maybe it isn’t 4, it’s really 5.  There is a very, very small chance that this has happened.”  His argument is based on the logic of reason, which is different from my argument, but the result is the same nonetheless.

Thank you, Roger, for your enlightened point of view.  I would gladly send you a free autographed book.  Please send me your address.  Smile

Roger Penrose Penrose Tiles

Reality Doesn’t Exist, according to the latest research

A team of physicists in Vienna has conducted a set of “reality” experiments that demonstrate, to a level of 80 orders of magnitude, that reality doesn’t exist unless you observe it.  In other words, in case you ever doubted the Schrodinger’s Cat thought experiment, doubt no longer.  It seems that experimental evidence has confirmed that we create our own reality by looking at it, measuring it, or observing it.  The details are here.

The results of many recent experiments twist our perceptions of reality even more.  Studies by Helmut Schmidt, Elmar Gruber, Brenda Dunne, Robert Jahn, and others have shown, for example, that humans are actually able to influence past events (aka retropsychokinesis, or RPK), such as pre-recorded (and previously unobserved) random number sequences.  No huge surprise to me, as someone who questions everything about our conventional views of reality.  But I still think the evidence is fascinating, and probably a bit unnerving, to say the least, to the majority of those out there who don’t typically consider such things.  Cause and effect, and reality itself, are certainly not what they seem.

What could be the explanation?  Certainly, more experiments to probe the depths of reality are needed.  But that doesn’t stop us from speculating.  Once again, Programmed Reality offers a perfect explanation.  Assuming that the programmed construct can detect “observation” (which, in principle, does not appear to be that difficult a process), all the program has to do is something like the following (a rough sketch, with made-up helper functions):

if observed:
    result = choose_from(coherent_results)   # hypothetical helper: pick an outcome consistent with what has already been observed
else:
    result = randomize()                     # hypothetical helper: nobody is looking, so any value will do

For example, in the classic reality experiment, pairs of photons are generated which are “entangled” by virtue of the fact that they were generated from the same reaction.  Those photons can be separated by large distances, and then a property of one of them is measured.  The act of measuring the property of one photon immediately determines the property of the other photon, even if it is so far away that nothing traveling at or below the speed of light could tell it what is happening to its twin.  However, in the Programmed Reality model, the properties of the two photons can be related programmatically.  Once an experiment determines one property, the program sets the other photon’s property accordingly.  The program is aware of the observation and could be in full control of the properties of the paired particles.
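Here is a minimal sketch of what I mean, in ordinary code (the class and property names are entirely my own invention; this illustrates the programming idea, not real quantum mechanics):

import random

class EntangledPair:
    # Toy sketch: the "program" keeps one shared record for both photons, so
    # measuring either one fixes the other instantly, and no signal ever has
    # to travel between them.
    def __init__(self):
        self.polarization = None                 # undetermined until an observation is detected

    def measure(self, which):
        if self.polarization is None:            # first observation settles the shared record
            self.polarization = random.choice(["horizontal", "vertical"])
        if which == "A":
            return self.polarization
        return self.opposite(self.polarization)  # the twin is always anti-correlated

    @staticmethod
    def opposite(p):
        return "vertical" if p == "horizontal" else "horizontal"

pair = EntangledPair()
print(pair.measure("A"), pair.measure("B"))      # e.g. "horizontal vertical", no matter how far apart they are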

For the RPK effect…

if observed:
    archive = correlate_with_intent(archive)   # made-up helper: rewrite the never-yet-observed archive to match the experimenters' intent

For an example of this effect, imagine a set of random numbers generated programmatically and stored in some sort of archive.  The archive, of course, being a product of Programmed Reality, is under full control of the program.  The archive is not observed prior to the experiment, and the subjects perform mass consciousness experiments on the data.  The program measures the level of “coherence” of the consciousness in the experiment and then sets the correlation of the stored numbers according to some algorithm, formula, or table.  When the experimenters unveil the data, lo and behold, the numbers are not truly random but rather appear to have been affected by the consciousness experiment.  A simple software algorithm can make this work!
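A toy version of that algorithm might look something like this (again, the names and the formula are entirely my own invention):

import random

# Toy sketch of the idea above: the "archive" of random bits is under program
# control, so it can be re-correlated after the fact, as long as nobody has
# looked at it yet.
def settle_archive(length, coherence):
    # coherence: 0.0 = pure chance, 1.0 = fully aligned with the observers' intent
    bias = 0.5 + coherence / 2                   # probability of the "intended" bit (a 1)
    return [1 if random.random() < bias else 0 for _ in range(length)]

data = settle_archive(10000, coherence=0.1)      # a mild consciousness "effect"
print(sum(data) / len(data))                     # ~0.55 instead of the chance level of 0.50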

The interesting question, though, is “What is the motivation behind the program?”  Why would it have such an effect?  Perhaps the answer lies in the idea that sentient beings do truly create their reality.  Much like “Sim City,” where the players create their reality, perhaps our reality is created according to a complex set of rules and algorithms, which include such attributes as intent and observation.

This doesn’t prove the validity of Programmed Reality, but I have to wonder, how many anomalies does the theory have to solve for it to be seriously considered?  Wink

IQOQI Reality Test Experiment

To Sleep, Perchance to Dream

I was reading an article the other day about a new theory on the reason we sleep.  A UCLA researcher suggests that rather than providing some vital biological function, sleep evolved to conserve energy and “keep us out of trouble.”  So it got me thinking about all of the other theories that I have read over the years – it helps restore energy levels, it strengthens the immune system, it repairs tissues and cells, it was an evolutionary development to avoid nocturnal predators.  And the list goes on, with no end of confusion and no apparent scientific consensus.

I wondered, what would be the purpose of sleep in a programmed reality?

And I thought of a possibility.  In multiplayer online games, a great deal of the logic behind the game resides in the client that sits on your PC.  The overall architecture of the game, each player’s attributes (to prevent hacking), etc., are stored on the server.  So what if our brain is analogous to such a client?  Doesn’t the client need to be upgraded periodically?  Ever notice how most PCs and Macs do automatic upgrades to various client programs upon reset, or when you attempt to open a program after it has been closed?  Notice that these upgrades aren’t done while you are playing or running the program?  The reason is to avoid any kind of software conflict.  It is far safer, and in most cases essential, to do upgrades while the program is not running.  And then the next time you fire it up – presto, there are the changes.
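In software terms, the rule amounts to “never patch a running client.”  A cartoon sketch of that rule (all names my own invention):

# Cartoon version of the "patch only during downtime" rule described above.
pending_patches = ["memory defrag", "perception tweak"]

def maintenance_window(client_running):
    if client_running:
        return "defer patches until the client sleeps"   # patching a running client risks conflicts
    applied = ", ".join(pending_patches)
    pending_patches.clear()
    return "applied while asleep: " + applied

print(maintenance_window(client_running=True))
print(maintenance_window(client_running=False))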

Maybe the purpose of sleep is to allow the programmers the opportunity to upgrade our memories, processing capabilities, or whatever, during a down time.  It might explain why sleep deprivation causes us to act a little strangely.  It’s kind of like trying to run an ancient version of Word on your new Vista laptop.

(thanks to my nutty cat, Simba, for the sleeping pose)