Alien Hunters Still Thinking Inside The Box (or Dyson Sphere)

As those who are familiar with my writing already know, I have long thought that the SETI program is highly illogical, for a number of reasons, some of which are outlined here and here.

To summarize, it is the height of anthropomorphic and unimaginative thinking to assume that ET will evolve just as we did and develop radio technology at all.  Even if they did, and followed a technology evolution similar to our own, the era of high-powered radio broadcasts should be insignificant in relation to the duration of their evolutionary history.  Even in our own case, that era is almost over, as we move to highly networked and low-powered data communication (e.g. Wi-Fi), which is barely detectable a few blocks away, let alone light years.  And even if we happened to overlap the 100-year radio broadcast era of a civilization in our galactic neighborhood, they would still never hear us, and vice versa: the signal level required to reliably communicate around the world is lost in the noise of the cosmic microwave background radiation before it even leaves the solar system.

So, no, SETI is not the way to uncover extraterrestrial intelligences.

Dyson Sphere

Some astronomers are getting a bit more creative and are beginning to explore different ways of detecting ET.  One such technique hinges on the concept of a Dyson Sphere.  Physicist Freeman Dyson postulated the idea in 1960, theorizing that advanced civilizations will continuously increase their demand for energy, to the point where they need to capture all of the energy of the star that they orbit.  A possible mechanism for doing so would be a swarm of satellites surrounding the star and collecting its entire output.  Theoretically, the signature of a distant Dyson Sphere would be a region of space emitting no visible light but generating high levels of infrared radiation as waste heat.  Some astronomers have mapped the sky over the years, searching for such signatures, but to no avail.

Today, a team at Penn State is resuming the search using data from the infrared observatories WISE and Spitzer.  Another group, from Princeton, has also joined the search, but is using a different technique, looking for dimming patterns in the data.

I applaud these scientists who are expanding the experimental boundaries a bit.  But I doubt that Dyson Spheres are the answer.  There are at least two flaws with this idea.

First, the assumption that we will continuously need more energy is false.  Part of the reason for this is the fact that once a nation has achieved a particular level of industrialization and technology, there is little to drive further demand.  The figure below, taken from The Atlantic article “A Short History of 200 Years of Global Energy Use” demonstrates this clearly.

[Figure: 200 years of per-capita energy consumption, from The Atlantic]

In addition, technological advances make it cheaper to obtain the same general benefit over time.  For example, in computing, performance per watt has increased by a factor of over one trillion in the past 50 years.  Dyson was unaware of this trend because Moore’s Law wasn’t postulated until 1965.  Even in the highly corrupt oil industry, with its collusion, lobbying, and artificial scarcity, performance per gallon of gas has steadily increased over the years.
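As a back-of-the-envelope check on what a trillion-fold gain implies (my own arithmetic, assuming steady exponential improvement over those 50 years):

import math

improvement = 1e12              # ~one-trillion-fold gain in computing per watt
years = 50
doublings = math.log2(improvement)
print(doublings)                # ≈ 40 doublings
print(12 * years / doublings)   # ≈ 15 months per doubling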

The second flaw with the Dyson Sphere argument is the more interesting one – the assumptions around how humans will evolve.  I am sure that in the booming 1960s, it seemed logical that we would be driven by the need to consume more and more, controlling more and more powerful tools as time went on.  But, all evidence actually points to the contrary.

We are in the beginning stages of a new facet of evolution as a species.  Not a physical one, but a consciousness-oriented one.  Quantum Mechanics has shown us that objective reality doesn’t exist.  Scientists are so frightened by the implications of this that they are for the most part in complete denial.  But the construct of reality is looking more and more like it is simply data.  And the evidence is overwhelming that consciousness is controlling the body and not emerging from it.  As individuals are beginning to understand this, they are beginning to recognize that they are not trapped by their bodies, nor this apparent physical reality.

Think about this from the perspective of the evolution of humanity.  If this trend continues, why will we even need the body?

Robert Monroe experienced a potential future (1000 years hence), which may be very much in line with the mega-trends that I have been discussing on theuniversesolved.com: “No sound, it was NVC [non-vocal communication]! We made it! Humans did it! We made the quantum jump from monkey chatter and all it implied.” (“Far Journeys“)

We may continue to use the (virtual) physical reality as a “learning lab”, but since we won’t really need it, neither will we need the full energy of the virtual star.  And we can let virtual earth get back to the beautiful virtual place it once was.

THIS is why astronomers are not finding any sign of intelligent life in outer space, no matter what tools they use.  A sufficiently advanced civilization does not communicate using monkey chatter, nor any technological carrier like radio waves.

They use consciousness.

So will we, some day.

Grand Unified Humanity Theory

OK, maybe this post is going to be a little silly – apologies in advance.  I’m in that kind of mood.

Physicists recently created a fascinating concoction – a Bose-Einstein condensate (BEC) that remained stable at a temperature 50% higher than its critical temperature.  Check out this phys.org article with the deets.  In this bizarre state of matter, all particles act in unison, entangled, as if they were collectively a single particle.  Back in Einstein’s day, BECs were envisioned to be composed of bosons.  Later, experiments demonstrated condensates of whole atoms, and ultimately even of paired fermions.

A comparison is made to the analogous process of getting highly purified water to exist at temperatures above its boiling point.  It seems that phase transitions of various types can be pushed beyond their normal critical point if the underlying material is “special” in some way – pure, balanced, coherent.

Superfluids.  Laser light.

It reminds me of the continuous advances in achieving superlative or “perfect” conditions, like superconductivity (zero resistance) at temperatures closer and closer to room temperature.  I then think of a characteristic that new agers ascribe to physical matter – “vibrational levels.”

Always connecting dots, sometimes finding connections that shouldn’t exist.

Given the trend of achieving purity, alignment, and coherence under conditions ever closer to “normal” temperatures and scales, might we someday see entangled complex molecules, like proteins?  BECs of DNA strands?

Why stop there?  Could I eventually be my own BEC?  A completely coherent vibrationally-aligned entity?  Cool.  I’ll bet I would be transparent and could walk through doors.

And what if science could figure out how to create a BEC out of all living things?  Nirvana.  Reconnecting with the cosmic consciousness.

Grand Unified Humanity Theory.

The Digital Reality Bandwagon

I tend to think that reality is just data.  That the fundamental building blocks of matter and space will ultimately be shown to be bits, nothing more.  Those who have read my book, follow this blog, or my Twitter feed, realize that this has been a cornerstone of my writing since 2006.

Not that I was the first to think of any of this.  Near as I can tell, Philip K. Dick may deserve that credit, having said “We are living in a computer programmed reality” in 1977, although I am sure that someone can find some Shakespearean reference to digital physics (“O proud software, that simulates in wanton swirl”).

Still, a mere six years ago, it was a lonely space to be in.  The few digital reality luminaries at that time included:

But since then…

– MIT Engineering Professor Seth Lloyd published “Programming the Universe” in 2006, asserting that the universe is a massive quantum computer running a cosmic program.

– Nuclear physicist Thomas Campbell published his excellent unifying theory “My Big TOE” in 2007.

– Brian Whitworth, PhD, authored a paper containing evidence that our reality is programmed: “The emergence of the physical world from information processing”, Quantum Biosystems 2010, 2 (1), 221-249.  http://arxiv.org/abs/0801.0337

– University of Maryland physicist Jim Gates discovered error-correction codes in the laws of physics.  See “Symbols of Power”, Physics World, Vol. 23, No. 6, June 2010.

– Fermilab astrophysicist Craig Hogan speculated that space is quantized, based on results from GEO600 measurements in 2010.  See: http://www.wired.com/wiredscience/2010/10/holometer-universe-resolution/.  A holometer experiment is being constructed to test the idea: http://holometer.fnal.gov/

– Rich Terrile, director of the Center for Evolutionary Computation and Automated Design at NASA’s Jet Propulsion Laboratory, hypothesized that we are living in a simulated reality. http://www.vice.com/read/whoa-dude-are-we-inside-a-computer-right-now-0000329-v19n9

– Physicists Leonard Susskind and Gerard ’t Hooft developed the holographic principle from black hole physics (the idea that our universe is digitally encoded on a two-dimensional surface, like the horizon of a black hole).

Even mainstream media outlets are dipping a toe into the water to see what kinds of reactions they get, such as this recent article in New Scientist Magazine: http://www.newscientist.com/article/mg21528840.800-reality-is-everything-made-of-numbers.html

So, today, I feel like I am in really great company and it is fun to watch all of the futurists, philosophers, and scientists jump on the new digital reality bandwagon.  The plus side will include the infusion of new ideas and the resulting synthesis of theory, as well as pushing the boundaries of experimental validation.  The down side will be all of the so-called experts jockeying for position.  In any case, it promises to be a wild ride, one that should last the twenty or so years it will take to create the first full-immersion reality simulation.  Can’t wait.

The Observer Effect and Entanglement are Practically Requirements of Programmed Reality

Programmed Reality has been an incredibly successful concept in terms of explaining the paradoxes and anomalies of Quantum Mechanics, including non-Reality, non-Locality, the Observer Effect, Entanglement, and even the Retrocausality of the Delayed Choice Quantum Eraser experiment (an extension of John Wheeler’s delayed-choice idea).

I came up with those explanations by thinking about how Programmed Reality could explain such curiosities.

But I thought it might be interesting to view the problem in the reverse manner.  If one were to design a universe-simulating Program, what kinds of curiosities might result from an efficient design?  (Note: I fully realize that any entity advanced enough to simulate the universe probably has a computational engine that is far more advanced than we can even imagine; most definitely not of the von Neumann variety.  Yet, we can only work with what we know, right?)

So, if I were to create such a thing, for instance, I would probably model data in the following manner:

For any space unobserved by a conscious entity, there is no sense in creating the reality for that space in advance.  It would unnecessarily consume too many resources.

For example, consider the cup of coffee on your desk.  Is it really necessary to model every single subatomic particle in the cup of coffee in order to interact with it in the way that we do?  Of course not.  The total amount of information contained in that cup of coffee necessary to stimulate our senses in the way that it does (generate the smell that it does; taste the way it does; feel the way it does as we drink it; swish around in the cup the way that it does; have the little nuances, like tiny bubbles, that make it look real; have the properties of cooling at the right rate to make sense, etc.) might be 10 MB or so.  Yet, the total potential information content in a cup of coffee is something like 100,000,000,000 MB, so a compression ratio on the order of ten billion to one can be applied to an ordinary object.

But once you decide to isolate an atom in that cup of coffee and observe it, the Program would then have to establish a definitive position for that atom, effectively resulting in the collapse of the wave function, or decoherence.  Moreover, the complete behavior of the atom, at that point, might be forever under the control of the Program.  After all, why delete the model once it has been observed, given the (probably fairly likely) event that it will be observed again at some point in the future?  Thus, the atom would now have to be described by its own little finite state machine, with its behavior decided by randomly picking values of the parameters that drive that behavior, such as atomic decay.

So, the process of “zooming in” on reality in the Program would have to result in exactly the type of behavior observed by quantum physicists.  In other words, in order to be efficient, resource-wise, the Program decoheres only the space and matter that it needs to.

Let’s say we zoom in on two particles at the same time; two that are in close proximity to each other.  Both would have to be decohered by the Program.  The decoherence would result in the creation of two mini finite state machines.  Using the same random number seed for both will cause the state machines to forever behave in an identical manner.

No matter how far apart you take the particles.  i.e…

Entanglement!

So, Observer Effect and Entanglement might both be necessary consequences of an efficient Programmed Reality algorithm.
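Here is a minimal Python sketch of that seed-sharing argument (my own toy illustration, not a claim about how any actual Program would be built): two “particle” state machines that are only instantiated upon observation, and that happen to share a random seed because they decohered together.

import random

class ParticleStateMachine:
    """Toy mini finite state machine, spawned the moment a particle is observed."""
    def __init__(self, seed):
        self.rng = random.Random(seed)  # all future behavior is driven by this seed

    def measure_spin(self):
        # Outcomes look random to the observer, but are deterministic given the seed
        return self.rng.choice(["up", "down"])

# Two particles decohered at the same time get the same seed...
shared_seed = 42
particle_a = ParticleStateMachine(shared_seed)
particle_b = ParticleStateMachine(shared_seed)

# ...so their measurement outcomes agree forever, no matter how far apart
# the particles are subsequently taken.
for _ in range(10):
    assert particle_a.measure_spin() == particle_b.measure_spin()
print("Every measurement agrees: 'entanglement' from nothing but a shared seed.")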

 


Time to Revise Relativity?: Part 2

In “Time to Revise Relativity: Part 1”, I explored the idea that Faster than Light Travel (FTL) might be permitted by Special Relativity without necessitating the violation of causality, a concept not held by most mainstream physicists.

The reason this idea is not well supported has to do with the fact that Einstein’s postulate that light travels the same speed in all reference frames gave rise to all sorts of conclusions about reality, such as the idea that it is all described by a space-time that has fundamental limits to its structure.  The Lorentz factor is a consequence of this view of reality, and so its use is limited to subluminal effects; it is undefined when used to calculate relativistic distortions past c.

γ = 1 / √(1 − v²/c²)  (the Lorentz factor)

So then, what exactly is the roadblock to exceeding the speed of light?

Yes, there may be a natural speed limit to the transmission of known forces in a vacuum, such as the electromagnetic force.  And there may certainly be a natural limit to the speed of an object at which we can make observations utilizing known forces.  But, could there be unknown forces that are not governed by the laws of Relativity?

The current model of physics, called the Standard Model, incorporates the idea that all known forces are carried by corresponding particles, which travel at the speed of light if massless (like photons and gluons) or less than the speed of light if they have mass (like the W and Z bosons), all consistent with, or derived from, the assumptions of relativity.  The problem is, the Standard Model is full of “unfinished business” and inconsistencies.  Gravitons have yet to be discovered, Higgs bosons don’t seem to exist, gravity and quantum mechanics are incompatible, and many things just don’t have a place in the Standard Model, such as neutrino oscillations, dark energy, and dark matter.  Some scientists even speculate that dark matter is due to a flaw in the theory of gravity.  So, given the incompleteness of that model, how can anyone say for certain that all forces have been discovered and that Einstein’s postulates are sacrosanct?

Given that barely 100 years ago we didn’t know any of this stuff, imagine what changes to our understanding of reality might happen in the next 100 years.  Such as these Wikipedia entries from the year 2200…

– The ultimate constituent of matter is nothing more than data.

– A subset of particles and corresponding forces, limited in speed to c, represents what used to be considered the core of the so-called Standard Model; these are consistent with Einstein’s view of space-time, and their motion is well described by the Special Theory of Relativity.

– Since then, we have realized that Einsteinian space-time is an approximation to a truer reality that encompasses FTL particles and forces, including neutrinos and the force of entanglement.  The beginning of this shift in thinking occurred with the first superluminal neutrinos found at CERN in 2011.

So, with that in mind, let’s really explore a little about the possibilities of actually cracking that apparent speed limit…

For purposes of our thought experiments, let’s define S as the “stationary” reference frame in which we are making measurements and R as the reference frame of the object undergoing relativistic motion with respect to S.  If a mass m is traveling at c with respect to S, then measuring that mass in S (via whatever methods could be employed to measure it; energy, momentum, etc.) will give an infinite result.  However, in R, the mass doesn’t change.

What if m went faster than c, such as might be possible with a sci-fi concept like a “tachyonic afterburner”?  What would an observer at S see?

Going by our relativistic equations, m now becomes imaginary when measured from S, because the argument in the square root of the mass correction factor is now negative.  But what if this asymptotic property really represents more of an event horizon than an impenetrable barrier?  A commonly used model for an event horizon is the surface around a black hole at which gravity prevents light from escaping.  Anything falling past that point can no longer be observed from the outside.  Instead, it would look as if that object froze on the horizon, because time stands still there.  Or so some cosmologists say.  This is an interesting model to apply to the idea of superluminality as mass m continues to accelerate past c.
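To see what the math literally does on either side of that horizon, here is the standard Lorentz factor evaluated with complex numbers (the formula is textbook special relativity; treating the imaginary result as an event horizon rather than a prohibition is the speculative part):

import cmath

def lorentz_factor(v, c=1.0):
    # gamma = 1 / sqrt(1 - v^2/c^2); the complex sqrt lets us evaluate it past c
    return 1 / cmath.sqrt(1 - (v / c) ** 2)

print(lorentz_factor(0.5))   # ≈ 1.155  (ordinary relativistic correction)
print(lorentz_factor(0.99))  # ≈ 7.089  (blows up as v approaches c)
print(lorentz_factor(2.0))   # ≈ -0.577j  (purely imaginary once v exceeds c)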

From the standpoint of S, the apparent mass is now infinite, but that is ultimately based on the fact that we can’t perceive speeds past c.  Once something goes past c, one of two things might happen.  The object might disappear from view, because the light it generates that would allow us to observe it can’t keep up with its speed.  Alternatively, invoking the postulate that light speed is the same in all reference frames, the object might behave like it does at the event horizon of a black hole – forever frozen, from the standpoint of S, with the properties that it had when it hit light speed.  From R, everything could be hunky dory; just cruising along at warp speed.  There is no need to object that mass can’t exceed infinity, because from S, the object simply froze at the event horizon.  Relativity made all of the correct predictions of properties, behavior, energy, and mass prior to light speed.  Yet, with this model, it doesn’t preclude superluminality.  It only precludes the ability to make measurements beyond the speed of light.

That is, of course, unless we can figure out how to make measurements utilizing a force or energy that travels at speeds greater than c.  If we could, those measurements would yield results with correction factors only at speeds relatively near THAT speed limit.

Let’s imagine an instantaneous communication method.  Could there be such a thing?

One possibility might be quantum entanglement.  The delayed-choice quantum eraser experiment (an extension of John Wheeler’s delayed-choice idea) seems to imply non-causality and the ability to erase the past.  Integral to this experiment is the concept of entanglement.  So perhaps it is not a stretch to imagine that entanglement might embody a communication method that creates some strange effects when integrated with observational effects based on traditional light-and-sight methods.

What would the existence of that method do to relativity?   Nothing, according to the thought experiments above.

There are, however, some relativistic effects that seem to stick, even after everything has returned to the original reference frame.  This would seem to violate the idea that the existence of an instantaneous communication method invalidates the need for relativistic correction factors applied to anything that doesn’t involve light and sight.

For example, there is the very real effect that clocks once moving at high speeds (reference frame R) exhibit a loss of time once they return to the reference frame S, fully explained by time dilation.  It would seem that, using this effect as the basis for a thought experiment like the twin paradox, there might be a problem with the event horizon idea.  So let us imagine Alice and Bob, both aged 20.  After Alice travels at speed c to a star 10 light years away and returns, her age should still be 20, while Bob is now 40.  If we were to allow superluminal travel, it would appear that Alice would have to get younger, or something.  But, recalling the twin paradox, it is all about the relative observations that Bob, in reference frame S, and Alice, in reference frame R, make of each other.  Again, at superluminal speeds, Alice may appear to hit an event horizon according to Bob.  So, she will never reduce her original age.

But what about her?  From her perspective, her trip is instantaneous due to an infinite Lorentz contraction factor; hence she doesn’t age.  If she travels at 2c, her view of the universe might hit another event horizon, one that prevents her from experiencing any Lorentz contraction beyond c; hence, her trip will still appear instantaneous, no aging, no age reduction.
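For concreteness, here is the standard time-dilation arithmetic behind Alice’s non-aging (plain special relativity; the 0.9999c case is just an illustrative number I picked):

import math

def proper_time(t, v, c=1.0):
    # Time elapsed on the traveler's own clock, given coordinate time t at speed v
    return t * math.sqrt(1 - (v / c) ** 2)

# Bob's clock records 20 years for the 20-light-year round trip at (nearly) c
print(proper_time(20.0, 0.9999))  # ≈ 0.28 years of aging for Alice
print(proper_time(20.0, 1.0))     # 0.0: at exactly c, Alice does not age at all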

So why would an actual relativistic effect like reduced aging occur in a universe where an infinite communication speed might be possible?  In other words, what would tie time to the speed of light instead of some other speed limit?

It may be simply because that’s the way it is.  It appears that relativistic equations may not necessarily impose a barrier to superluminal speeds, superluminal information transfer, or even acceleration past the speed of light.  In fact, if we accept that relativity says nothing about what happens past the speed of light, we are free to suggest that the observable effects freeze at c.  Perhaps traveling past c does nothing more than create unusual effects, like disappearing objects or things freezing at event horizons, until they slow back down to an “observable” speed.  We certainly don’t yet have enough evidence to investigate further.

But perhaps CERN has provided us with our first data point.


Things We Can Never Comprehend

Have you ever wondered what we don’t know?  Or, to put it another way, how many mysteries of the universe are still to be discovered?

To take this thought a step further, have you ever considered that there may be things that we CAN’T understand, no matter how hard we try?

This idea may be shocking to some, especially to those scientists who believe that we are nearing the “Grand Unified Theory”, or “Theory of Everything”, that will provide a simple and elegant solution to all forces, particles, and concepts in science.  Throughout history, the brightest of minds have been predicting the end of scientific inquiry.  In 1871, James Clerk Maxwell lamented the sentiment of the day, which he summarized thus: “in a few years, all great physical constants will have been approximately estimated, and that the only occupation which will be left to men of science will be to carry these measurements to another place of decimals.”

Yet, why does it always seem like the closer we get to the answers, the more monkey wrenches get thrown in the way?  In today’s world, these include strange particles that don’t fit the model.  And dark matter.  And unusual gravitational aberrations in distant galaxies.

Perhaps we need a dose of humility.  Perhaps the universe, or multiverse, or whatever term is being used these days to denote “everything that is out there” is just too far beyond our intellectual capacity.  Before you call me out on this heretical thought, consider…

The UK’s Astronomer Royal, Sir Martin Rees, points out that “a chimpanzee can’t understand quantum mechanics.”  Richard Feynman famously claimed that nobody understands quantum mechanics but, as Michael Brooks points out in his recent article “The limits of knowledge: Things we’ll never understand”, no matter how hard they might try, the comprehension of something like Quantum Mechanics is simply beyond the capacity of certain species of animals.  Faced with this realization, and with anthropologists’ estimate that the most recent common ancestor of humans and chimps (aka the CHLCA) lived roughly 6 to 8 million years ago, we can draw a startling conclusion:

There are certainly things about our universe and reality that are completely beyond our ability to comprehend!

My reasoning is as follows.  Chimps are certainly at least as intelligent as the CHLCA; otherwise evolution would be working in reverse.  As an upper bound on its intelligence, let’s say that the CHLCA and chimps are equivalent.  Then the CHLCA was certainly not able to comprehend QM (nor relativity, nor even Newtonian physics), but upon evolving into humans over roughly 8 million years, our new species was able to comprehend these things.  8 million years represents 0.06% of the entire age of the universe (according to what we think we know).  That means that for 99.94% of the total time that the universe and life were evolving up to the current point, the most advanced creature on earth was incapable of understanding the most rudimentary concepts about the workings of reality and the universe.  And yet, are we to suppose that in the last 0.06% of that time, a species has evolved that can understand everything?  I’m sure you see how unlikely that is.

What if our universe was intelligently designed?  The same argument would probably hold.  For some entity to be capable of creating a universe that continues to baffle us no matter how much we think we understand, that entity must be far beyond our intelligence, and therefore has utilized, in the design, concepts that we can’t hope to understand.

Our only chance of being supremely capable of understanding our world would lie in the programmed reality model.  If the creator of our simulation were us, or even an entity just a little more advanced than us, it could lead us along a path of exploration and knowledge discovery that always seems to be just slightly beyond our grasp.  Doesn’t that idea feel familiar?


Rewriting the Past

“I don’t believe in yesterday, by the way.”
-John Lennon

The past is set in stone, right?  Everything we have learned tells us that you cannot change the past, 88-MPH DeLoreans notwithstanding.

However, it would probably surprise you to learn that many highly respected scientists, as well as a few out on the fringe, are questioning that assumption, based on real evidence.

For example, leading stem cell scientist, Dr. Robert Lanza, posits that the past does not really exist until properly observed.  His theory of Biocentrism says that the past is just as malleable as the future.

Specific experiments in Quantum Mechanics appear to prove this conjecture.  In the “Delayed Choice Quantum Eraser” experiment, “scientists in France shot photons into an apparatus, and showed that what they did could retroactively change something that had already happened.” (Science 315, 966, 2007)

Paul Davies, renowned physicist from the Australian Centre for Astrobiology at Macquarie University in Sydney, suggests that conscious observers (us) can effectively reach back in history to “exert influence” on early events in the universe, including even the first moments of time.  As a result, the universe would be able to “fine-tune” itself to be suitable for life.

Prefer the Many Worlds Interpretation (MWI) of Quantum Mechanics over the Copenhagen one?  If that theory is correct, physicist Saibal Mitra from the University of Amsterdam has shown how we can change the past by forgetting.  Effectively, if the collective observers’ memory is reset prior to some event, the state of the universe becomes “undetermined” and can follow a different path from before.  Check out my previous post on that one.

Alternatively, you can disregard the complexities of quantum mechanics entirely.  The results of some macro-level experiments twist our perceptions of reality even more.  Studies by Helmut Schmidt, Elmar Gruber, Brenda Dunne, Robert Jahn, and others have shown, for example, that humans are actually able to influence past events (aka retropsychokinesis, or RPK), such as pre-recorded (and previously unobserved) random number sequences.

Benjamin Libet, a pioneering scientist in the field of human consciousness at the University of California, San Francisco, is well known for his controversial experiments that seem to show reverse causality, or that the brain demonstrates awareness of actions that will occur in the near future.  To put it another way, actions that occur now create electrical brain activity in the past.

And then, of course, there is time travel.  Time travel into the future is a fact, just ask any astronaut, all of whom have traveled nanoseconds into the future as a side effect of high speed travel.  Stephen Hawking predicts much more significant time travel into the future.  In the future.  But what about the past?  Turns out there is nothing in the laws of physics that prevents it.  Theoretical physicist Kip Thorne designed a workable time machine that could send you into the past.  And traveling to the past of course provides an easy mechanism for changing it.  Unfortunately this requires exotic matter and a solution to the Grandfather paradox (MWI to the rescue again here).

None of this is a huge surprise to me, since I question everything about our conventional views of reality.  Consider the following scenario in a massively multiplayer online role playing game (MMORPG) or simulation.  The first time someone plays the game, or participates in the simulation, there is an assumed “past” to the construct of the game.  Components of that past may be found in artifacts (books, buried evidence, etc.) scattered throughout the game.  Let’s say that such evidence reports that the Kalimdors and Northrendians were at war during the year 1999, but the evidence has yet to be found by a player.  A game patch could easily change the date to 2000, thereby changing the past, and no one would be the wiser.  But what if someone had found the artifact, thereby setting the past in stone?  That patch could still be applied, but it would only be effective if all players who had knowledge of the artifact were forced to forget.  Science fiction, right?  No longer, thanks to an emerging field of cognitive research.  Two years ago, scientists were able to erase selected memories in mice.  Insertion of false memories is not far behind.  This will eventually be perfected, and applied to humans.
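Here is a toy sketch of that patching logic (my own illustration; the single “war_year” fact and the numbers are made up): the game’s past stays editable only until some player has observed the artifact that records it.

class GameWorld:
    """Toy model: historical 'facts' stay malleable until a player observes them."""
    def __init__(self):
        self.lore = {"war_year": 1999}   # the assumed past
        self.observed = set()            # facts some player has already seen

    def observe(self, fact):
        self.observed.add(fact)
        return self.lore[fact]

    def patch(self, fact, new_value):
        # The past can only be rewritten if no one remembers the old version
        if fact in self.observed:
            return False      # someone found the artifact; history is set in stone
        self.lore[fact] = new_value
        return True

world = GameWorld()
print(world.patch("war_year", 2000))   # True: no one has looked yet, past rewritten
world.observe("war_year")              # a player finds the artifact
print(world.patch("war_year", 2001))   # False: would need to erase memories first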

At some point in our future (this century), we will be able to snort up a few nanobots, which will archive our memories, download a new batch of memories to the starting state of a simulation, and run the simulation.  When it ends, the nanobots will restore our old memories.

Or maybe this happened at some point in our past and we are really living the simulation.  There is really no way to tell.

No wonder the past seems so flexible.


Quantum Mechanics Anomalies – Solved!

Scientists are endlessly scratching their heads over the paradoxes presented by quantum mechanics – duality, entanglement, the observer effect, nonlocality, non-reality.  The recent cover story in New Scientist, “Reality Gap” (or “Is quantum theory weird enough for the real world?” in the online version) observes: “Our best theory of nature has no roots in reality.”

BINGO! But then they waste this accurate insight by looking for one.

Just three days later, a new article appears: “Infinite doppelgängers may explain quantum probabilities”  Browse the website or that of other popular scientific journals and you’ll find no end of esteemed physicists taking a crack at explaining the mysteries of QM.  Doppelgängers now?  Really?  I mean no disrespect to our esteemed experts, but the answer to all of your mysteries is so simple.  Take a brave step outside of your narrow field and sign up for Computer Science 101 and Information Theory 101.  And then think outside the box, if even just for a few minutes.

Every anomaly is explained, thusly:

Duality and the Observer Effect: “Double Slit Anomaly is No Mystery to Doctor PR”

Entanglement: “Quantum Entanglement – Solved (with pseudocode)”

Non-Reality: “Reality Doesn’t Exist, according to the latest research”

Nonlocality: “Non-locality Explained!”

Got any more anomalies?  Send them my way!  :)


Double Slit Anomaly is No Mystery to Doctor PR

One of the keys to understanding our reality is found in a very unusual and anomalous experiment done over 200 years ago by Thomas Young. The philosophical debate that resulted from this experiment and its successors during the quantum era of the 20th century may hold the key to understanding everything – from bona fide scientific anomalies to cold fusion and bigfoot sightings.

If you are unfamiliar with this experiment, please watch the Dr. Quantum cartoon on the Double Slit Experiment. It provides a good explanation of two paradoxes that have puzzled scientists for many years. In summary, here is the conundrum:

1. If you fire electrons at a screen through a single slit in an otherwise impenetrable barrier, there will be a resulting pattern on the screen as you might expect – a single band of points.

2. If you fire electrons at a screen through a barrier with two slits, the pattern that will build up on the screen is not one of two bands of points, but rather an entire interference pattern, as if the electrons were actually waves instead of particles.

This is one paradox – that electrons (and all other particles) have dual personalities in that they can act like both waves and particles. Further, the personality that emerges matches the type of experiment that you are doing. If you are testing to see if the electron acts like a particle, it will. If you are testing to see if the electron acts like a wave, it will.

3. Even if the electrons are fired one at a time, eliminating the possibility of electrons interfering with each other, over time, the same pattern emerges.

4. If you put a measuring device at the slit, thereby observing which slit each electron passes through, the interference pattern disappears.

This is the more mysterious paradox – that the mere act of observation changes the result of the experiment. The implications of this are huge because they imply that our conscious actions create or modify reality.

Dr. Programmed Reality will now provide the definitive explanation that Dr. Quantum could not:

1. Electrons, along with photons, all other particles, and ultimately everything, are really nothing but information. That information describes how the electron (for example) behaves under all circumstances, what probabilities it will travel in any particular direction, and how it will reveal its presence to our senses. That information, plus the rules of reality, fully determine how it can appear sometimes like a particle and sometimes like a wave. Because it is really neither – it is JUST information that is used to give us the sensory impression of one of those personalities under various circumstances. Paradox 1 solved.

2. The great cosmic Program that appears to control our reality (see my book “The Universe – Solved!” for evidence), is also fully aware of the state of consciousness of every free-willed observer in our reality. As a result, the behavior exhibited by an electron under observation can easily be made to be a function of the observation being made. Paradox 2 solved.

If you don’t believe that, here is the piece of pseudo-code that could represent the part of The Program that controls the outcomes of such experiments (each state of each object consists of all spatial coordinates, plus time, and directional vectors):

while (time != EndTime) {
    for (n = 1; n <= AllParticlesInTheUniverse; n++) {
        Object = Particle(n);

        // Where is the object now, and which conscious observers intend to measure it?
        CurrentState(Object) = AcquireState(Object);
        ObservationState(Object) = CollectObservationalIntent(AllObservers(Object));

        // The next state depends on both the current state and the observational intent
        NextState(Object) = CalculateNextState(CurrentState(Object), ObservationState(Object));
        ApplyNextState(NextState(Object));
    }
}

It’s all there – full control of the outcome of any experiment based on the objects under test and the observational status of all observers.  Any known quantum mechanical paradox fully explained by 1970s-vintage pseudocode without the need for the hand waving of collapsing wave functions or zillions of parallel realities.
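For anyone who wants to see that logic actually run, here is a small Python sketch of the standard two-slit arithmetic (my own toy model; the wavelength, slit spacing, and screen distance are arbitrary illustrative numbers).  An “observed” flag stands in for the which-path measurement: the Program adds amplitudes when no one is watching, and adds probabilities when someone is.

import numpy as np

def screen_pattern(observed, wavelength=1.0, slit_separation=5.0, screen_distance=100.0):
    """Intensity at a few points across the screen for a two-slit setup."""
    x = np.linspace(-30, 30, 7)                                # sample points on the screen
    r1 = np.hypot(screen_distance, x - slit_separation / 2)    # path length from slit 1
    r2 = np.hypot(screen_distance, x + slit_separation / 2)    # path length from slit 2
    k = 2 * np.pi / wavelength
    a1 = np.exp(1j * k * r1)
    a2 = np.exp(1j * k * r2)
    if observed:
        # Which-path known: add probabilities, so no fringes appear
        return np.abs(a1) ** 2 + np.abs(a2) ** 2
    # Unobserved: add amplitudes, so interference fringes appear
    return np.abs(a1 + a2) ** 2

print(screen_pattern(observed=True))    # flat: ~2 everywhere (two bands, no fringes)
print(screen_pattern(observed=False))   # oscillates between ~0 and ~4 (interference)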


How to Walk Through a Door

I had a brainstorm the other day on how we might someday be able to walk through a door.  And I don’t mean from a metaphysical standpoint, I mean really physically walk through the door.  If you think about it, there really should be a way to make it happen.  After all, our bodies and the door are almost 100% empty space.  I would argue that Programmed Reality says it is completely empty space, but that topic will have to be for another post.

An electron, in Newtonian mechanics, can be stuck on one side of an impenetrable barrier.  In QM, however, its wave function can be partly on one side of a barrier and partly on the other side at the same time, which allows for the possibility of “tunneling,” a common effect in semiconductors.  In fact, were it not for the wave function nature of QM, transistors, and therefore cell phones, computers, satellites, and all other sorts of modern technologies would not even exist!


Interestingly, this theory does not apply only to subatomic particles, but also to macroscopic objects like me, you, and Donald Trump’s hair.  Since our bodies are composed of particles, each of which is just a wave function, your body is simply the superposition of these zillions of wave functions, thereby creating its own “macroscopic” wave function.  Theoretically, for this reason, you have a finite probability of passing through a wooden door, much like the electron tunneling effect.  But don’t try it.  When you sum up all of your constituent particles’ wave functions, the probability of a large-scale anomalous quantum effect becomes extremely small.  It is analogous to flipping pennies.  The odds that a single penny comes up heads (electron passes through the barrier) are 50-50, but the odds against 1000 pennies all coming up heads (you pass through the door) are 2^1000 to 1 (equivalent to a 1 followed by about 301 zeros, an impossibly large number).  And you have a helluva lot more than 1000 subatomic particles in your body.
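The penny arithmetic, spelled out (my own numbers; 10^27 is just a rough order-of-magnitude count of the atoms in a human body):

import math

# Odds against 1000 fair pennies all landing heads: 2^1000 to 1
print(1000 * math.log10(2))          # ≈ 301, i.e. a 1 followed by ~301 zeros

# Replace 1000 pennies with the ~10^27 atoms in a body, each needing to
# "tunnel" with some per-atom probability p (p = 0.5 here, wildly generous)
p = 0.5
atoms = 1e27
print(atoms * math.log10(1 / p))     # ≈ 3e26, i.e. a 1 followed by ~3*10^26 zeros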

But what if those particles in our bodies and/or the door were made to be coherent?  That is, in our penny analogy, all pennies exhibit the same behavior.  Impossible?  Not so fast, Einstein.  LASERs are a great example of coherence, where all photons are of the same frequency and in phase.  Aren’t particles of matter just a different kind of particle from photons, and couldn’t they be organized to be coherent as well?

It turns out that is exactly the case, and it is known as Macroscopic Quantum Tunneling.  University of Illinois researchers have demonstrated such an effect with electrons (real matter) in a nanowire.  Superconductors, superfluids, and Bose-Einstein condensates are examples of systems that seem to defy conventional physics by having their constituents occupy coherent states.  Macroscopic Quantum Coherence is a predicted property, yet to be observed in the laboratory but probably inevitable, whereby all atoms in a piece of matter exhibiting that property are in phase and described by a single quantum wavefunction.  That wavefunction allows for the possibility of the matter being anywhere, including “tunneling” through a thin enough membrane of material.  Let’s say that, not unlike a laser, we could get all of the atoms in our bodies to be coherent.  Might it not be possible to “tunnel” through a thin membrane of coherent material?

Effectively, we would have walked through a door!

Yes, I know that all of the different atoms in our bodies might not be made to be coherent with each other.  Then again, think about radio waves of different frequencies.  In general, they can’t be in phase with each other, except at one particular point.  Fourier analysis of a waveform with a discontinuity, like a step function or a delta function, shows that at the point of the discontinuity, all frequencies are in phase.  Could there ultimately be a way to accomplish that with the mere several dozen atomic frequencies present in our bodies?  (And who cares if that stray bit of uranium in your spleen is left behind on the other side of the door.  Would you really miss it?)  So maybe the trick is to pulse the coherence into your body just as you walk through the door.

Then there is the problem of how to get each planar sliver of your body to have the same tunneling capability sequentially.  Like, so you don’t end up with a door stuck in your chest, all Jeff Goldblum-like.  Seems to me that maybe it’s just a matter of applying continuous pulses of coherence into your body as you walk through the door.  For each planar sliver, one of the pulses will eventually make you progress to the next sliver.  Just hope the machine doesn’t break down midway through.

So, there you have it.  One, ultra high frequency multi-atomic coherence pulser.  And you’re walking through walls.
