New Hints to How our Reality is Created

There is something fascinating going on in the world, hidden deep beneath the noise of Trump, soccer matches, and Game of Thrones. It is an exploration into the nature of reality – what is making the world tick?

To cut to the chase, it appears that our reality is being dynamically generated based on an ultra-sophisticated algorithm that takes into account not just the usual cause/effect context (as materialists believe) and conscious observation and intent (as idealists believe), but also a complex array of reality configuration probabilities, so as to be optimally efficient.

Wait, what?

This philosophical journey has its origins in the well-known double slit experiment, originally performed by Thomas Young in 1801 to demonstrate that light has wavelike properties. In 1961, the experiment was performed with electrons, which also showed wavelike properties. The experimental setup involved shooting electrons through a screen containing two thin vertical slits. The wave nature of the particles was manifested in the form of an interference pattern on a screen that was placed on the other side of the double slit screen. It was a curious result but confirmed quantum theory. In 1974, the experiment was performed one electron at a time, with the same resulting interference pattern, which showed that it was not the electrons that interfered with each other, but rather that the pattern on the screen followed a probabilistic spatial distribution function. Quantum theory predicted that if a detector was placed at each of the slits so as to determine which slit each electron went through, the interference pattern would disappear and just leave two vertical lines, due to the quantum complementarity principle. This was difficult to realize in the lab, but experiments in the 1980s confirmed the prediction – the “which way did the particle go” measurement killed the interference pattern. The mystery was that the mere act of observation seemed to change the results of the experiment.

So, at this point, people who were interested in how the universe works effectively split into two camps, representing two fundamental philosophies that set the foundation for thinking, analysis, hypothesis, and theorizing:

  1. Objective Materialism
  2. Subjective Idealism

A zillion web pages can be found for each category.

The problem is that most scientists, and probably at least 99% of all outspoken science trolls, believe in Materialism.  And “believe” is the operative word.  Because there is ZERO proof that Materialism is correct.  Nor is there proof that Idealism is correct.  So, “believe” is all that can be done.  Although, as the massive amount of evidence leans in favor of Idealism, it is fair to say that those believers at least have the scientific method behind them, whereas materialists just have “well gosh, it sure seems like we live in a deterministic world.” What is interesting is that Materialism can be falsified, but I’m not sure that Idealism can be.  The Materialist camp had plenty of theories to explain the paradox of the double slit experiments – alternative interpretations of quantum mechanics, local hidden variables, non-local hidden variables, a variety of loopholes, or simply the notion that the detector took energy from the particles and impacted the results of the experiment (as has been said, when you put a thermometer in a glass of water, you aren’t measuring the temperature of the water, you are measuring the temperature of the water with a thermometer in it).

Over the years, the double-slit experiment has been progressively refined to the point where most of the materialistic arguments have been eliminated. For example, there is now the delayed choice quantum eraser experiment, which puts the “which way” detectors after the interference screen, making it impossible for the detector to physically interfere with the outcome of the experiment. And, one by one, all of the hidden variable possibilities and loopholes have been disproven. In 2015, several experiments were performed independently that closed all loopholes simultaneously with both photons and electrons. Since all of these various experimental tests over the years have shown that reality is neither objectively real nor local (given that the experimenters are free to choose their measurement settings), the only remaining escape is what John Bell called superdeterminism – a universe completely devoid of free will, running like clockwork and playing out a fully predetermined script of events. If true, this would bring about the extremely odd result that the universe is set up to ensure that the outcomes of these experiments imply the opposite of how the universe really works. But I digress…

The net result is that Materialism-based theories of reality are being chipped away experiment by experiment.  Those who believe in Materialist dogma are finding themselves painted into an ever-shrinking philosophical corner. Idealism-based theories, on the other hand, are rich with possibilities, very few of which have been falsified experimentally.

Physicist and fellow digital philosopher Tom Campbell has boldly suggested a number of double slit experiments that can probe the nature of reality a little deeper. Tom, like me, believes that consciousness plays a key role in the nature and creation of our reality. So much so that he believes that the outcome of the double slit experiments is due strictly to the conscious observation of the which-way detector data. In other words, if no human (or “sufficiently conscious” entity) observes the data, the interference pattern should remain. Theoretically, one could save the data to a file, store the file on a disk, hide the disk in a box, and the interference pattern would remain on the screen. Open the box a day later and the interference pattern should automatically disappear, effectively rewriting history with the knowledge of the paths of the particles. His ideas have incurred the wrath of the physics trolls, who are quick to point out that, regardless of whether any human ever reads the data, the interference pattern is gone once the detectors record the data. The data can be destroyed, or never even written to a permanent medium, and the interference pattern would still be gone. If these claims are true, they do not prove Materialism at all. But they do imply something very interesting.

From this and many other categories of evidence, it seems highly likely that our reality is being dynamically generated. Quantum entanglement, the quantum Zeno effect, and the observer effect all look very much like artifacts of an efficient system that dynamically creates reality as needed. It is the “as needed” part of this assertion that is most interesting. I shall refer to that which creates reality as “the system.”

Entanglement happens because, when a two-particle-generating event occurs, it is efficient to create both particles using the same instance of a finite state machine; therefore, when the properties of one need to be determined, the properties of the other are automatically known, as detailed in my blog post on entanglement. The quantum Zeno effect happens because it is more efficient to reset the probability function each time an observation is made, as detailed in my blog post on quantum Zeno. And so what about the double slit mystery? To illuminate, see the diagram below.

If the physicists are right, reality comes into existence at point 4 in the diagram. Why would that be? The paths of the particles are apparently needed not for the experience of the conscious observer, but rather to satisfy the consistency of the experiment. The fact that the detector registers the data is enough to create the reality. Perhaps the system “realizes” that it is less efficient to leave hanging experiments all over the place until a human “opens the envelope” than it is to instantiate real electron paths, despite the unlikely possibility of data deletion. Makes logical sense to me. But it also indicates a sophisticated awareness of all of the probabilities of how the reality can play out vis-à-vis potential human interactions.

The system is really smart.

Collapsing the Objective Collapse Theory

When I was a kid, I liked to collect things – coins, baseball cards, leaves, 45s, what have you. What made the category of collectible particularly enjoyable was the size and variety of the sample space. In my adult years, I’ve learned that collections have a downside – where to put everything? – especially as I continue to downsize my living space in trade for more fun locales, greater views, and better access to beaches, mountains, and wine bars. However, I do still sometimes maintain a collection, such as my collection of other people’s theories that attempt to explain quantum mechanics anomalies without letting go of objective materialism. Yeah, I know, not the most mainstream of collections, and certainly nothing I can sell on eBay, but way more fun than stamps.

The latest in this collection is a set of theories called “objective collapse” theories. These theories try to distance themselves from the ickiness (to materialists) of conscious-observer-centric theories like the Copenhagen interpretation of quantum mechanics. They also attempt to avoid the ridiculousness of the exponentially explosive reality creation theories in the Many Worlds Interpretation (MWI) category. Essentially, the Objective Collapsers argue that there is a wave function describing the probabilities of properties of objects but that, rather than collapsing due to a measurement or a conscious observation, it collapses on its own due to some as yet undetermined, yet deterministic, process, according to the probabilities of the wave function.

Huh?

Yeah, I call BS on that, and point simply to the verification of the Quantum Zeno effect.  Particles don’t change state while they are under observation. When you stop observing them, they change state – not at some random time prior, as the Objective Collapse theories would imply, but at the exact time that you stop observing them. In other words, the timing of the observation is correlated with wave function collapse, completely undermining the argument that it is probabilistic or deterministic according to some hidden variables. Other individuals better educated in physics than I am (aka physicists) have also called BS on Objective Collapse theories for other reasons, such as violations of conservation of energy. But, of course, there is no shortage of physicists calling BS on other physicists’ theories. That, by itself, would make an entertaining collection.

In any case, I would be remiss if I didn’t remind the readers that the Digital Consciousness Theory completely explains all of this stuff. By “stuff,” I mean not just the anomalies, like the quantum zeno effect, entanglement, macroscopic coherence, the observer effect, and quantum retrocausality, but also the debates about microscopic vs. macroscopic, and thought experiments like the time that Einstein asked Abraham Pais whether he really believed that the moon existed only when looked at, to wit:

  • All we can know for sure is what we experience, which is subjective for every individual.
  • We effectively live in a virtual reality, operating in the context of a huge and highly complex digital substrate system. The purpose of this reality is for our individual consciousnesses to learn and evolve and contribute to the greater all-encompassing consciousness.
  • The reason that it feels “physical” or solid and not virtual is due to the consensus of experience that is built into the system.
  • This virtual reality is influenced and/or created by the conscious entities that occupy it (or “live in it” or “play in it”; choose your metaphor).
  • The virtual reality may have started prior to any virtual life developing, or it may have been suddenly spawned and initiated with us avatars representing the various life forms at any point in the past.
  • Some things in the reality need to be there to start; the universe, earth, water, air, and, in the case of the more recent invocation of reality, lots of other stuff. These things may easily be represented in a macroscopic way, because that is all that is needed in the system for the experience. Therefore, there is no need for us to create them.
  • However, other things are not necessary for our high-level experience. But they are necessary once we probe the nature of reality, or if we aim to influence our reality. These are the things that are subject to the observer effect. They don’t exist until needed. Subatomic particles and their properties are perfect examples, as are the deep cause-and-effect relationships between reality elements that are needed to bring about the changes that our intent invokes.

So there is no need for objective collapse. Things are either fixed (the moon) or potential (the radioactive decay of a particle). The latter are called into existence as needed…

…Maybe


Comments on the Possibilist Transactional Interpretation of Quantum Mechanics, aka Models vs. Reality

Reality is what it is. Everything else is just a model.

From Plato to Einstein to random humans like myself, we are all trying to figure out what makes this world tick. Sometimes I think I get it pretty well, but I know that I am still a product of my times, and therefore my view of reality is seen through the lens of today’s technology and state of scientific advancement. As such, I would be a fool to think that I have it all figured out. As should everyone else.

At one point in our recent past, human scientific endeavor wasn’t so humble. Just a couple hundred years ago, we thought that atoms were the ultimate building blocks of reality and that everything could ultimately be described by the equations of mechanics. How naïve that was, as 20th century physics made abundantly clear. But even then, the atom-centric view of physics was not reality. It was simply a model. So is every single theory and equation that we use today, regardless of whether it is called a theory or a law: relativistic motion, Schrödinger’s equation, String Theory, the 2nd Law of Thermodynamics – all models of some aspect of reality.

We seek to understand our world and derive experiments that push forward that knowledge. As a result of the experiments, we define models to best fit the data.

One of the latest comes from quantum physicist Ruth Kastner in the form of a model that better explains the anomalies of quantum mechanics. She calls the model the Possibilist Transactional Interpretation of Quantum Mechanics (PTI), an updated version of John Cramer’s Transactional Interpretation of Quantum Mechanics (TIQM, or TI for short) proposed in 1986. The transactional nature of the theory comes from the idea that the wavefunction collapse behaves like a transaction in that there is an “offer” from an “emitter” and a “confirmation” from an “absorber.” In the PTI enhancement, the offers and confirmations are considered to be outside of normal spacetime and therefore the wavefunction collapse creates spacetime rather than occurs within it. Apparently, this helps to explain some existing anomalies, like uncertainty and entanglement.

This is all cool and seems to serve to enhance our understanding of how QM works. However, it is STILL just a model, and a fairly high-level one at that. And all models are approximations – descriptions of reality that most closely match the experimental evidence.

Underneath all models exist deeper models (e.g. string theory), many as yet to be supported by real evidence. Underneath those models may exist even deeper models. Consider this layering…

[Figure: a stack of model layers, each deeper layer explaining the one above, with “digital consciousness” at the bottom]

Every layer contains models that may be considered to be progressively closer to reality. Each layer can explain the layer above it. But it isn’t until you get to the bottom layer that you can say you’ve hit reality. I’ve identified that layer as “digital consciousness”, the working title for my next book. It may also turn out to be a model, but it feels like it is distinctly different from the other layers in that, by itself, it is no longer an approximation of reality, but rather a complete and comprehensive yet elegantly simple framework that can be used to describe every single aspect of reality.

For example, in Digital Consciousness, everything is information. The “offer” is then “the need to collapse the wave function based on the logic that there is now an existing conscious observer who depends on it.” The “confirmation” is the collapse – the decision made from probability space that defines positions, spins, etc. This could also be seen as the next state of the state machine that defines such behavior. The emitter and absorber are both parts of the “system”, the global consciousness that is “all that there is.” So, if experimental evidence ultimately demonstrates that PTI is a more accurate interpretation of QM, it will nonetheless still be a model and an approximation. The bottom layer is where the truth is.
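To make the mapping concrete, here is a minimal sketch in Python – my own toy rendering of the idea, not Kastner’s formalism, with all names invented purely for illustration. A property sits in probability space until an observer depends on it; the “offer” is the request to resolve it, and the “confirmation” is the collapse that fixes the next state.

import random

class Property:
    """Illustrative sketch only - a toy mapping, not Kastner's formalism.
    A property sits in probability space until an observer depends on it."""
    def __init__(self, outcomes):
        self.outcomes = outcomes   # the probability space (e.g. possible spins)
        self.value = None          # nothing definite yet

    def offer(self):
        """The 'offer': the system notes that a conscious observer now
        depends on this property, so it must be resolved."""
        if self.value is None:
            self.value = self.confirm()
        return self.value

    def confirm(self):
        """The 'confirmation': the collapse - the next state of the
        underlying state machine, picked from probability space."""
        return random.choice(self.outcomes)

spin = Property(["up", "down"])
print(spin.offer())   # first observation fixes the value...
print(spin.offer())   # ...and it stays fixed thereafter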

Elvidge’s Postulate of Countable Interpretations of QM…

The number of interpretations of Quantum Mechanics always exceeds the number of physicists.

Let’s count the various “interpretations” of quantum mechanics:

  • Bohm (aka Causal, or Pilot-wave)
  • Copenhagen
  • Cosmological
  • Ensemble
  • Ghirardi-Rimini-Weber
  • Hidden measurements
  • Many-minds
  • Many-worlds (aka Everett)
  • Penrose
  • Possibilist Transactional (PTI)
  • Relational (RQM)
  • Stochastic
  • Transactional (TIQM)
  • Von Neumann-Wigner
  • Digital Consciousness (DCI, aka Elvidge)

Unfortunately you won’t find the last one in Wikipedia. Give it about 30 years.


Quantum Zeno Effect Solved

Lurking amidst the mass chaos of information that exists in our reality is a little gem of a concept called the Quantum Zeno Effect.  It is partially named after ancient Greek philosopher Zeno of Elea, who dreamed up a number of paradoxes about the fluidity of motion and change.  For example, the “Arrow Paradox” explores the idea that if you break down time into “instants” of zero duration, motion cannot be observed.  Thus, since time is composed of a set of instants, motion doesn’t truly exist.  We might consider Zeno to have been far ahead of his time as he appeared to be thinking about discrete systems and challenging the continuity of space and time a couple thousand years before Alan Turing resurrected the idea in relation to quantum mechanics: “It is easy to show using standard theory that if a system starts in an eigenstate of some observable, and measurements are made of that observable N times a second, then, even if the state is not a stationary one, the probability that the system will be in the same state after, say, one second, tends to one as N tends to infinity; that is, that continual observations will prevent motion …”.  The term “Quantum Zeno Effect” was first used by physicists George Sudarshan and Baidyanath Misra in 1977 to describe just such a system – one that does not change state because it is continuously observed.
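Numerically, the limit Turing describes is easy to see. A rough sketch (illustrative numbers only): model the short-time survival probability with the standard quadratic approximation, slice one second into N measured intervals, and watch the product tend to one.

def survival_after_n_measurements(n, total_time=1.0, zeno_time=0.1):
    """Standard short-time approximation: over a short interval dt the
    survival probability is roughly 1 - (dt / zeno_time)**2.  Measuring n
    times slices total_time into n such intervals."""
    dt = total_time / n
    return (1.0 - (dt / zeno_time) ** 2) ** n

for n in (20, 100, 1000, 100000):
    print(n, round(survival_after_n_measurements(n), 4))
# As n grows, the survival probability tends to 1: continual observation
# "prevents motion", just as the quote says.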

The challenge with this theory has been in devising experiments that can verify or falsify it.  However, technology has caught up to philosophy and, over the last 25 years, a number of experiments have been performed which seem to validate the effect.  In 2001, for example, physicist Mark Raizen and a team at the University of Texas showed that the effect is indeed real and the transition of states in a system can be either slowed down or sped up simply by taking measurements of the system.

I have enjoyed making a hobby of fully explaining quantum mechanics anomalies with the programmed reality theory.   Admittedly, I don’t always fully grasp some of the deep complexities and nuances of the issues that I am tackling, due partly to the fact that I have a full time job that has naught to do with this stuff, and partly to the fact that my math skills are a bit rusty, but thus far, it doesn’t seem to make a difference.  The more I dig in to each issue, the more I find things that simply support the idea that we live in a digital (and programmed) reality.

The quantum Zeno effect might not be observed in every case.  It only works for non-memoryless processes.  Exponential decay, for instance, is an example of a memoryless system.  Frequent observation of a particle undergoing radioactive decay would not affect the result.  [As an aside, I find it very interesting that a “memoryless system” invokes the idea of a programmatic construct.  Perhaps with good reason…]

A system with memory, or “state”, however, is, in theory, subject to the quantum Zeno effect.  It will manifest itself by appearing to reset the experiment clock every time an observation is made of the state of the system.  The system under test will have a characteristic set of changes that vary over time.  In the case of the University of Texas experiment, trapped ions tended to remain in their initial state for a brief interval before beginning to change state via quantum tunneling, according to some probability function.  For the sake of developing a clear illustration, let’s imagine a process whereby a particle remains in its initial quantum state (let’s call it State A) for 2 seconds before probabilistically decaying to its final state (B) according to a linear function over the next second.  Figure A shows the probability of finding the particle in State A as a function of time.  For the first 2 seconds, of course, it has a 0% probability of changing state, and between 2 and 3 seconds it has an equal probability of moving to State B at any point in time.  A system with this behavior, left on its own and measured at any point after 3 seconds, will be in State B.

[Figure A: probability of finding the particle in State A as a function of time]

What happens, however, when you make a measurement of that system, to check and see if it has changed state, at t=1 second?  Per the quantum Zeno effect, the experiment clock will effectively be reset, and now the system will stay in State A from t=1 to t=3 and then move to State B at some point between t=3 and t=4.  If you make another measurement one second later, at t=2, the clock will again reset, delaying the behavior by another second.  In fact, if you continue to measure the state of the system every second, it will never change state.  Note that this has absolutely nothing to do with the physical impact of the measurement itself; a 100% non-intrusive observation will have exactly the same result.
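To make the toy model concrete, here is a rough simulation in Python, using the made-up numbers from the illustration above – nothing about it is meant to be physically rigorous. Left unmeasured, the system ends up in State B shortly after t=2; measured every second, it never gets there.

import random

HOLD = 2.0           # seconds the toy system is guaranteed to stay in State A
DECAY_WINDOW = 1.0   # then it decays to State B uniformly within this window

def time_of_decay(clock_start):
    """Absolute time at which the toy system would flip to B, given that
    its experiment clock started at clock_start."""
    return clock_start + HOLD + random.uniform(0.0, DECAY_WINDOW)

def run(measure_every=None, duration=10.0, dt=0.01):
    """Simulate the toy model; each observation resets the experiment clock."""
    decay_at = time_of_decay(0.0)
    t, state = 0.0, "A"
    while t < duration and state == "A":
        t += dt
        if measure_every and int(t / measure_every) > int((t - dt) / measure_every):
            # An observation occurs: per the quantum Zeno effect, reset the clock.
            decay_at = time_of_decay(t)
        if t >= decay_at:
            state = "B"
    return state

print(run())                    # unobserved: ends up in State B by t ~ 3 s
print(run(measure_every=1.0))   # observed every second: still in State A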

Also note that it isn’t that the clock doesn’t reset for a memoryless system, but rather that it doesn’t matter, because you cannot observe any difference.  One may argue that if you make observations at the Planck frequency (one per jiffy), even a memoryless system might never change state.  This actually approaches the true nature of Zeno’s arguments, but that is a topic for another essay, one that is much more philosophical than falsifiable.  In fact, “Quantum Zeno Effect” is a misnomer.  The non-memoryless system described above really has little to do with the ad infinitum inspection of Zeno’s paradoxes, but we are stuck with the name.  And I digress.

So why would this happen?

It appears to be related in some way to the observer effect and to entanglement:

  • Observer Effect – Once observed, the state of a system changes.
  • Entanglement – Once observed, the states of multiple particles (or, rather, the state of a system of multiple particles) are forever connected.
  • Quantum Zeno – Once observed, the state of a system is reset.

What is common to all three of these apparent quantum anomalies is the coupling of the act of observation with the concept of a state.  For the purposes of this discussion, it will be useful to invoke the computational concept of a finite state machine, which is a system that changes state according to a set of logic rules and some input criteria.
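For readers who haven’t run into the term, here is a bare-bones finite state machine in Python – purely illustrative, with toy states and transition rules of my own choosing.

class FiniteStateMachine:
    """A system that changes state according to logic rules and input."""
    def __init__(self, initial, transitions):
        self.state = initial
        self.transitions = transitions   # (state, input) -> next state

    def step(self, event):
        self.state = self.transitions.get((self.state, event), self.state)
        return self.state

# Toy rules: an unobserved particle stays in probability space;
# an observation forces it into a definite state, which can later decay.
fsm = FiniteStateMachine("undetermined", {
    ("undetermined", "observe"): "State A",
    ("State A", "decay"): "State B",
})
print(fsm.step("observe"))   # -> "State A"
print(fsm.step("decay"))     # -> "State B"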

I have explained the Observer effect and Entanglement as logical necessities of an efficient programmed reality system.  What about Quantum Zeno?  Why would it not be just as efficient to start the clock on a process and let it run, independent of observation?

A clue to the answer is that the act of observation appears to create something.

In the Observer effect, it creates the collapse of the probability wave functions and the establishment of definitive properties of certain aspects of the system under observation (e.g. position).  This is not so much a matter of efficiency as it is of necessity, because without probability, free will doesn’t exist and without free will, we can’t learn, and if the purpose of our system is to grow and evolve, then by necessity, observation must collapse probability.

In Entanglement, the act of observation may create the initiation of a state machine, which subsequently determines the behavior of the particles under test.  Those particles are just data, as I have shown, and the data elements are part of the same variable space of the state machine.  They both get updated simultaneously, regardless of the “virtual” distance between them.

So, in Quantum Zeno, the system under test is in probability space.  The act of observation “collapses” this initial probability function and kicks off the mathematical process by which future states are determined based on the programmed probability function.  But that is now a second level of probability function; call it probability function 2.  Observing this system a second time now must collapse probability function 2.  But to do so means that the system would now have to calculate a modified probability function 3 going forward – one that takes into account the fact that some aspect of the state machine has already been determined (e.g. the system has or hasn’t started its decay).  For non-memoryless systems, this could be an arbitrarily complex function (3), since it may take a different shape for every time at which the observation occurs.  A third measurement complicates the function even further because even more states are ruled out.
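To see what that bookkeeping looks like, here is the conditional math for the toy model described earlier (illustrative only): every choice of observation time produces a differently shaped going-forward function.

def survival_probability(t):
    """Toy model from above: P(still in State A at time t),
    with no intervening observations."""
    if t < 2.0:
        return 1.0
    if t < 3.0:
        return 3.0 - t      # linear decay between t = 2 and t = 3
    return 0.0

def conditional_survival(t, t_obs):
    """P(still in A at time t), given that an observation at t_obs found it
    still in A (valid for t >= t_obs, t_obs < 3).  This is the 'probability
    function 3' style bookkeeping: a different curve for every t_obs."""
    return survival_probability(t) / survival_probability(t_obs)

print(conditional_survival(2.75, t_obs=2.0))   # 0.25
print(conditional_survival(2.75, t_obs=2.5))   # 0.5 - same t, different curve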

On the other hand, it would be far more efficient to simply reset the probability function each time an observation is made – exactly the kind of shortcut an efficient reality system would take.

The only drawback to this algorithm is the fact that smart scientists are starting to notice these little anomalies, although the assumption here is that the reality system “cares.”  It may not.  Or perhaps that is why most natural processes are exponential, or memoryless – it is a further efficiency of the system.  Man-made experiments, however, don’t follow the natural process and may be designed to be arbitrarily complex, which ironically serves to give us this tiny little glimpse into the true nature of reality.

What we are doing here is inferring deep truths about our reality that are in fundamental conflict with the standard materialist view.  This will be happening more and more as time goes forward and physicists and philosophers will soon have no choice but to consider programmed reality as their ToE.


The Observer Effect and Entanglement are Practically Requirements of Programmed Reality

Programmed Reality has been an incredibly successful concept in terms of explaining the paradoxes and anomalies of Quantum Mechanics, including non-Reality, non-Locality, the Observer Effect, Entanglement, and even the Retrocausality of John Wheeler’s Delayed Choice Quantum Eraser experiment.

I came up with those explanations by thinking about how Programmed Reality could explain such curiosities.

But I thought it might be interesting to view the problem in the reverse manner.  If one were to design a universe-simulating Program, what kinds of curiosities might result from an efficient design?  (Note: I fully realize that any entity advanced enough to simulate the universe probably has a computational engine that is far more advanced than we can even imagine; most definitely not of the von Neumann variety.  Yet, we can only work with what we know, right?)

So, if I were to create such a thing, for instance, I would probably model data in the following manner:

For any space unobserved by a conscious entity, there is no sense in creating the reality for that space in advance.  It would unnecessarily consume too many resources.

For example, consider the cup of coffee on your desk.  Is it really necessary to model every single subatomic particle in the cup of coffee in order to interact with it in the way that we do?  Of course not.  The total amount of information contained in that cup of coffee necessary to stimulate our senses in the way that it does (generate the smell that it does; taste the way it does; feel the way it does as we drink it; swish around in the cup the way that it does; have the little nuances, like tiny bubbles, that make it look real; have the properties of cooling at the right rate to make sense, etc.) might be 10 MB or so.  Yet, the total potential information content in a cup of coffee might be 100,000,000,000 MB, so a compression ratio on the order of ten billion could be applied to an ordinary object.

But once you decide to isolate an atom in that cup of coffee and observe it, the Program would then have to establish a definitive position for that atom, effectively resulting in the collapse of the wave function, or decoherence.  Moreover, the complete behavior of the atom, at that point, might be forever under control of the Program.  After all, why delete the model once observed, in the event (probably fairly likely) that it will be observed again at some point in the future?  Thus, the atom would have to be described by a finite state machine.  Its behavior would be decided by randomly picking values of the parameters that drive that behavior, such as atomic decay.  In other words, we have created a little mini finite state machine.

So, the process of “zooming in” on reality in the Program would have to result in exactly the type of behavior observed by quantum physicists.  In other words, in order to be efficient, resource-wise, the Program decoheres only the space and matter that it needs to.
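As a software analogy – entirely illustrative, with made-up numbers and names – this is just lazy evaluation: detail is only instantiated the first time something asks for it.

import random

class CoffeeCup:
    """Illustrative lazy-evaluation sketch: bulk behavior is always available
    cheaply, while atomic detail is only created on first observation."""
    def __init__(self):
        self.temperature_c = 60.0   # part of the low-resolution description
        self._atoms = {}            # atomic detail: empty until observed

    def observe_atom(self, index):
        # First observation "decoheres" the atom: it gets a definite,
        # persistent state - its own little finite state machine.
        if index not in self._atoms:
            self._atoms[index] = {
                "position": [random.random() for _ in range(3)],
                "decayed": False,
            }
        return self._atoms[index]

cup = CoffeeCup()
print(len(cup._atoms))     # 0 - no atomic detail has been created yet
cup.observe_atom(12345)    # zooming in instantiates just this one atom
print(len(cup._atoms))     # 1 - only what was actually needed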

Let’s say we zoom in on two particles at the same time; two that are in close proximity to each other.  Both would have to be decohered by the Program.  The decoherence would result in the creation of two mini finite state machines.  Using the same random number seed for both will cause the state machines to forever behave in an identical manner.

No matter how far apart you take the particles.  i.e…

Entanglement!

So, Observer Effect and Entanglement might both be necessary consequences of an efficient Programmed Reality algorithm.
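A toy version of that last point, with invented names and parameters: two independent “mini state machines” seeded identically stay in lockstep forever, with no communication between them.

import random

class MiniStateMachine:
    """Hypothetical decohered particle: behavior driven by its own RNG."""
    def __init__(self, seed):
        self.rng = random.Random(seed)
        self.state = "excited"

    def tick(self):
        # Random decay decision on each pass through the program's main loop.
        if self.state == "excited" and self.rng.random() < 0.1:
            self.state = "decayed"
        return self.state

# Decoherence of two nearby particles: two machines, one shared seed.
a = MiniStateMachine(seed=2011)
b = MiniStateMachine(seed=2011)

# However far apart they are later "placed", their histories never diverge.
history_a = [a.tick() for _ in range(50)]
history_b = [b.tick() for _ in range(50)]
print(history_a == history_b)   # True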

 


Time to Revise Relativity?: Part 2

In “Time to Revise Relativity: Part 1”, I explored the idea that Faster than Light Travel (FTL) might be permitted by Special Relativity without necessitating the violation of causality, a concept not held by most mainstream physicists.

The reason this idea is not well supported has to do with the fact that Einstein’s postulate that light travels at the same speed in all reference frames gave rise to all sorts of conclusions about reality, such as the idea that it is all described by a space-time that has fundamental limits to its structure.  The Lorentz factor is a consequence of this view of reality, so its use is limited to subluminal effects, and it is undefined for calculating relativistic distortions past c.

The Lorentz factor: γ = 1 / √(1 − v²/c²)

So then, what exactly is the roadblock to exceeding the speed of light?

Yes, there may be a natural speed limit to the transmission of known forces in a vacuum, such as the electromagnetic force.  And there may certainly be a natural limit to the speed of an object at which we can make observations utilizing known forces.  But, could there be unknown forces that are not governed by the laws of Relativity?

The current model of physics, called the Standard Model, incorporates the idea that all known forces are carried by corresponding particles, which travel at the speed of light if massless (like photons and gluons) or less than the speed of light if they have mass (like the W and Z bosons), all consistent with, or derived from, the assumptions of relativity.  Problem is, the Standard Model has all sorts of “unfinished business” and inconsistencies.  Gravitons have yet to be discovered, Higgs bosons don’t seem to exist, gravity and quantum mechanics are incompatible, and many things just don’t have a place in the Standard Model, such as neutrino oscillations, dark energy, and dark matter.  Some scientists even speculate that dark matter is due to a flaw in the theory of gravity.  So, given the incompleteness of that model, how can anyone say for certain that all forces have been discovered and that Einstein’s postulates are sacrosanct?

Given that barely 100 years ago we didn’t know any of this stuff, imagine what changes to our understanding of reality might happen in the next 100 years.  Such as these Wikipedia entries from the year 2200…

–       The ultimate constituent of matter is nothing more than data

–       A subset of particles and corresponding forces that are limited in speed to c represent what used to be considered the core of the so-called Standard Model and are consistent with Einstein’s view of space-time, the motion of which is well described by the Special Theory of Relativity.

–       Since then, we have realized that Einsteinian space-time is an approximation to the truer reality that encompasses FTL particles and forces, including neutrinos and the force of entanglement.  The beginning of this shift in thinking occurred due to the first superluminal neutrinos found at CERN in 2011.

So, with that in mind, let’s really explore a little about the possibilities of actually cracking that apparent speed limit…

For purposes of our thought experiments, let’s define S as the “stationary” reference frame in which we are making measurements and R as the reference frame of the object undergoing relativistic motion with respect to S.  If a mass m is traveling at c with respect to S, then measuring that mass in S (via whatever methods could be employed to measure it; energy, momentum, etc.) will give an infinite result.  However, in R, the mass doesn’t change.

What if m went faster than c, such as might be possible with a sci-fi concept like a “tachyonic afterburner”?  What would an observer at S see?

Going by our relativistic equations, m now becomes imaginary when measured from S because the argument in the square root of the mass correction factor is now negative.  But what if this asymptotic property really represents more of an event horizon than an impenetrable barrier?  A commonly used model for the event horizon is the point on a black hole at which gravity prevents light from escaping.  Anything falling past that point can no longer be observed from the outside.  Instead it would look as if that object froze on the horizon, because time stands still there.  Or so some cosmologists say.  This is an interesting model to apply to the idea of superluminality as mass m continues to accelerate past c.

From the standpoint of S, the apparent mass is now infinite, but that is ultimately based on the fact that we can’t perceive speeds past c.  Once something goes past c, one of two things might happen.  The object might disappear from view due to the fact that the light that it generated that would allow us to observe it can’t keep up with its speed.  Alternatively, invoking the postulate that light speed is the same in all reference frames, the object might behave like it does on the event horizon of the black hole – forever frozen, from the standpoint of S, with the properties that it had when it hit light speed.  From R, everything could be hunky dory.  Just cruising along at warp speed.  No need to say that it is impossible because mass can’t exceed infinity, because from S, the object froze at the event horizon.  Relativity made all of the correct predictions of properties, behavior, energy, and mass prior to light speed.  Yet, with this model, it doesn’t preclude superluminality.  It only precludes the ability to make measurements beyond the speed of light.

That is, of course, unless we can figure out how to make measurements utilizing a force or energy that travels at speeds greater than c.  If we could, those measurements would yield results with correction factors only at speeds relatively near THAT speed limit.

Let’s imagine an instantaneous communication method.  Could there be such a thing?

One possibility might be quantum entanglement.  John Wheeler’s Delayed Choice Quantum Eraser experiment seems to imply non-causality and the ability to erase the past.  Integral to this experiment is the concept of entanglement.  So perhaps it is not a stretch to imagine that entanglement might embody a communication method that creates some strange effects when integrated with observational effects based on traditional light and sight methods.

What would the existence of that method do to relativity?   Nothing, according to the thought experiments above.

There are, however, some relativistic effects that seem to stick, even after everything has returned to the original reference frame.  This would seem to violate the idea that the existence of an instantaneous communication method invalidates the need for relativistic correction factors applied to anything that doesn’t involve light and sight.

For example, there is the very real effect that clocks that were once moving at high speeds (reference frame R) exhibit a loss of time once they return to the reference frame S, fully explained by time dilation effects.  It would seem that, using this effect as a basis for a thought experiment like the twin paradox, there might be a problem with the event horizon idea.  For example, let us imagine Alice and Bob, both aged 20.  After Alice travels at speed c to a star 10 light years away and returns, her age should still be 20, while Bob is now 40.  If we were to allow superluminal travel, it would appear that Alice would have to get younger, or something.  But, recalling the twin paradox, it is all about the relative observations that Bob, in reference frame S, and Alice, in reference frame R, make of each other.  Again, at superluminal speeds, Alice may appear to hit an event horizon according to Bob.  So she will never become younger than her original age.
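For reference, the twin numbers above follow directly from the standard time-dilation relation. A quick sanity check in Python, using speeds just below c (the factor is undefined at and beyond c):

from math import sqrt

def proper_time(earth_years, v_over_c):
    """Time elapsed for the traveler, given the time elapsed for the
    stay-at-home observer and the speed as a fraction of c (v < c only)."""
    return earth_years * sqrt(1.0 - v_over_c ** 2)

# Round trip to a star 10 light years away and back (20 light years total):
for v in (0.9, 0.99, 0.9999):
    earth_years = 20.0 / v                 # Bob's elapsed time
    print(v, round(earth_years, 2), round(proper_time(earth_years, v), 2))
# As v approaches c, Bob's wait approaches 20 years while Alice's aging
# approaches 0 - the 20-vs-40 numbers in the example above.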

But what about her?  From her perspective, her trip is instantaneous due to an infinite Lorentz contraction factor; hence she doesn’t age.  If she travels at 2c, her view of the universe might hit another event horizon, one that prevents her from experiencing any Lorentz contraction beyond c; hence, her trip will still appear instantaneous, no aging, no age reduction.

So why would an actual relativistic effect like reduced aging, occur in a universe where an infinite communication speed might be possible?  In other words, what would tie time to the speed of light instead of some other speed limit?

It may be simply because that’s the way it is.  It appears that relativistic equations may not necessarily impose a barrier to superluminal speeds, superluminal information transfer, nor even acceleration past the speed of light.  In fact, if we accept that relativity says nothing about what happens past the speed of light, we are free to suggest that the observable effects freeze at c. Perhaps traveling past c does nothing more than create unusual effects like disappearing objects or things freezing at event horizons until they slow back down to an “observable” speed.  We certainly don’t have enough evidence to investigate further.

But perhaps CERN has provided us with our first data point.


Quantum Mechanics Anomalies – Solved!

Scientists are endlessly scratching their heads over the paradoxes presented by quantum mechanics – duality, entanglement, the observer effect, nonlocality, non-reality.  The recent cover story in New Scientist, “Reality Gap” (or “Is quantum theory weird enough for the real world?” in the online version) observes: “Our best theory of nature has no roots in reality.”

BINGO! But then they waste this accurate insight by looking for one.

Just three days later, a new article appears: “Infinite doppelgängers may explain quantum probabilities”  Browse the website or that of other popular scientific journals and you’ll find no end of esteemed physicists taking a crack at explaining the mysteries of QM.  Doppelgängers now?  Really?  I mean no disrespect to our esteemed experts, but the answer to all of your mysteries is so simple.  Take a brave step outside of your narrow field and sign up for Computer Science 101 and Information Theory 101.  And then think outside the box, if even just for a few minutes.

Every anomaly is explained, thusly:

Duality and the Observer Effect: “Double Slit Anomaly is No Mystery to Doctor PR”

Entanglement: “Quantum Entanglement – Solved (with pseudocode)”

Non-Reality: “Reality Doesn’t Exist, according to the latest research”

Nonlocality: “Non-locality Explained!”

Got any more anomalies?  Send them my way! :)


Quantum Entanglement – Solved (with pseudocode)

I am always amazed at how such bright physicists discuss scientific anomalies, like quantum entanglement, pronounce that “that’s just the way it is” and never seriously consider an obvious answer and solution to all such anomalies – namely that perhaps our reality is under programmed control.

For the quantum entanglement anomaly, I think you will see what I mean.  Imagine that our world is like a video game.  As with existing commercial games, which use “physics engines”, the players (us) are subject to the rules of physics, as are subatomic particles.  However, suppose there is a rule in the engine that says that when two particles interact, their behavior is synchronized going forward.  Simple to program.  The pseudocode would look something like:

for all particles (i)
    for all particles (j > i)
        if distance(particle.i, particle.j) < EntanglementThreshold then
            Synchronize(particle.i, particle.j)
        end if
    next j
next i

After that event, at each cycle through the main program loop, whatever one particle does, its synchronized counterparts also do.  Since the program operates outside of the artificial laws of physics, those particles can be placed anywhere in the program’s reality space and they will always stay synchronized.  Yet their motion and other interactions may be subject to the usual physics engine.  This is very easy to program, and, coupled with all of the other evidence that our reality is under programmed control (the programmer is the intelligent creator), offers a perfect explanation.  More and more scientists are considering these ideas (e.g. Craig Hogan, Brian Whitworth, Andrei Linde) although the thought center is more in the fields of philosophy, computer science, and artificial intelligence.  I wonder if the reason more physicists haven’t caught on is that they fear that such concepts might make them obsolete.

They needn’t worry.  Their jobs are still to probe the workings of the “cosmic program.”
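For anyone who would rather run the idea than read pseudocode, here is a minimal Python rendering of the same sketch – all names and thresholds are made up – including the “whatever one particle does, its partner does” behavior described above:

import math, random

ENTANGLEMENT_THRESHOLD = 1.0   # illustrative value, arbitrary units

class Particle:
    def __init__(self, x, y):
        self.x, self.y = x, y
        self.shared = {}           # this particle's state-machine data

def distance(p, q):
    return math.hypot(p.x - q.x, p.y - q.y)

def synchronize(p, q):
    q.shared = p.shared            # both now reference one state machine

def observe_spin(p):
    # First observation fixes the value; entangled partners see the same one.
    if "spin" not in p.shared:
        p.shared["spin"] = random.choice(["up", "down"])
    return p.shared["spin"]

a, b = Particle(0.0, 0.0), Particle(0.5, 0.0)
particles = [a, b]

# Pairing pass (the pseudocode above):
for i, pi in enumerate(particles):
    for pj in particles[i + 1:]:
        if distance(pi, pj) < ENTANGLEMENT_THRESHOLD:
            synchronize(pi, pj)

b.x = 1.0e9                               # the physics engine moves b far away
print(observe_spin(a), observe_spin(b))   # always identical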

 


Just when you thought Physics couldn’t get any Stranger

Tachyons, entanglement, cold fusion, dark matter, galactic filaments.  Just when you thought physics couldn’t get any stranger…

– THE VERY COLD: Fractional Quantum Hall Effect: When electrons are magnetically confined and cooled to a third of a degree above absolute zero (See more here), they seem to break down into sub-particles that act in synchronization, but with fractional charges, like 1/3, or 3/7.

– THE VERY HIGH PRESSURE: Strange Matter: The standard model of physics includes 6 types of quarks, including the 2 (“up” and “down”) that make up ordinary matter.  Matter that consists of “strange” quarks, aka Strange Matter, would be 10 times as heavy as ordinary matter.  Does it exist?  Theoretically, at very high densities, such as the core of neutron stars, such matter may exist.  A 1998 space shuttle experiment seems to have detected some, but repeat experiments have not yielded the same results.

– THE VERY LARGE DIMENSIONAL: Multidimensional Space: String theories say that we live in a 10-dimensional space, mostly because it is the only way to make quantum mechanics and general relativity play nicely together.  That is, until physicist Garrett Lisi came along and showed how it could be done with eight dimensional space and objects called octonions.  String theorists were miffed, mostly because Lisi is not university affiliated and spends most of his time surfing in Hawaii.

– THE VERY HOT: Quark-Gluon Plasma: Heat up matter to 2 trillion degrees and neutrons and protons fall apart into a plasma of quarks and gluons called quark-gluon plasma.  In April of 2005, QGP appeared to have been created at the Brookhaven National Laboratory’s Relativistic Heavy Ion Collider (RHIC).

My view on all this is that it is scientific business as usual.  100 years ago, we lived in a smaller world; a world described solely by Newtonian Mechanics, our ordinary everyday view of how the world works.  Then, along came relativity and quantum mechanics.  Technological advances in laboratory equipment and optics allowed us to push the limits of speed and validate Relativity, which ultimately showed that Newtonian Mechanics was just an approximation, at slow speeds, of the larger, more encompassing theory of Relativity.  Similarly, we pushed the limits of probing the very small and validated Quantum Mechanics, which showed that Newtonian Mechanics was just an approximation, at large scales, of the larger, more encompassing theory of Quantum Mechanics.  In the 1960s, we pushed the limits of heat and energy and found that our Quantum Mechanical / Relativistic theory of the world was really just a low-temperature approximation of a larger theory that had to encompass Quantum Chromodynamics.  Now, we are pushing the limits of cold – the slowing down of particles – and discovering that there must be an even larger theory that describes the world, one that explains the appearance of fractional charges at extremely low temperatures.  Why does this keep happening, and where does it end?

Programmed Reality provides an explanation.  In fact, it actually provides two.

In one case, the programmers of our reality created a complex set of physical laws that we are slowly discovering.  Imagine a set of concentric spheres, with each successive level outward representing a higher level scientific theory of the world that encompasses faster speeds, higher temperatures, larger scales, colder temperatures, higher energies, etc.  How deep inside the sphere of knowledge are we now?  Don’t know, but this is a model that puts it in perspective.  It is a technological solution to the philosophy of Deism.

The second possibility is that as we humans push the limits of each successive sphere of physical laws that were created for us, the programmers put in place a patch that opens up the next shell of discovery, not unlike a game.  I prefer this model, for a number of reasons.  First of all, wouldn’t it be a lot more fun and interesting to interact with your creations, rather than start them on their evolutionary path and then pay no further attention?  Furthermore, this theory offers the perfect explanation for all of those scientific experiments that have generated anomalous results that have never been reproducible.  The programmers simply applied the patch before anyone else could reproduce the experiment.

Interestingly, throughout the years, scientists have fooled themselves into thinking that the discovery of everything was right around the corner.  In the mid-20th century, the ultimate goal was the Unified Field Theory.  Now, it is called a TOE, or Theory of Everything.

Let’s stop thinking we’re about to reach the end of scientific inquiry and call each successive theory a TOM, or Theory of More.

Because the only true TOE is Programmed Reality.  QED.

Non-locality Explained!

A great article in Scientific American, “A Quantum Threat to Special Relativity,” is well worth the read.

Locality in physics is the idea that things are only influenced by forces that are local or nearby.  The water boiling on the stovetop does so because of the energy imparted from the flame beneath.  Even the sounds coming out of your radio are decoded from the electromagnetic disturbance in the air next to the antenna, which has been propagating from the radio transmitter at the speed of light.  But, think we all, nothing can influence anything remotely without a “chain reaction” disturbance, which according to Einstein cannot exceed the speed of light.

However, says Quantum Mechanics, there is something called entanglement.  No, not the kind you had with Becky under the bleachers in high school.  This kind of entanglement says that particles that once “interacted” are forever entangled, whereby their properties are reflected in each other’s behavior.  For example, take 2 particles that came from the same reaction and separate them by galactic distances.  What one does, the other will follow.  This has been proven to a distance of at least 18 km and seems to violate Einstein’s theory of Special Relativity.

Einstein, of course, took issue with this whole concept in his famous EPR paper, preferring to believe that “hidden variables” were responsible for the effect.  But, in 1964, physicist John Bell developed a mathematical proof that no local theory can account for all of Quantum Mechanics’ experimental results.  In other words, the world is non-local.  Period.  It is as if, says the SciAm article, “a fist in Des Moines can break a nose in Dallas without affecting any other physical thing anywhere in the heartland.”  Alain Aspect later performed convincing experiments that demonstrated this non-locality.  45 years after John Bell’s proof, scientists are coming to terms with the idea that the world is non-local and special relativity has limitations.  Both ideas are mind-blowing.

But, as usual, there are a couple of clever paradigms that get around it all, each of which are equally mind-blowing.  In one, our old friend the “Many Worlds” theory, zillions of parallel universes are spawned every second, which account for the seeming non-locality of reality.  In the other, “history plays itself out not in the three-dimensional spacetime of special relativity but rather this gigantic and unfamiliar configuration space, out of which the illusion of three-dimensionality somehow emerges.”

I have no problem explaining all of these ideas via programmed reality.

Special Relativity has to do with our senses, not with reality.  True simultaneity is possible because our reality is an illusion.  And there is no speed limit in the truer underlying construct.  So particles have no problem being entangled.

Many Worlds can be implemented by multiple instances of reality processes.  Anyone familiar with computing can appreciate how instances of programs can be “forked” (in Unix parlance) or “spawned” (Windows, VMS, etc.).  You’ve probably even seen it on your buggy Windows PC, when instances of browsers keep popping up like crazy and you can’t kill the tasks fast enough and end up either doing a hard shutdown or waiting until the little bastard blue-screens.  Well, if the universe is just run by a program, why can’t the program fork itself whenever it needs to, explaining all of the mysteries of QM that can’t be explained by wave functions?
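For what it’s worth, here is the forking idea in its most literal form – a purely illustrative, Unix-only snippet: one call turns one running instance into two, each free to evolve its own “world” from the shared state at the moment of the split.

import os

state = {"universe": "A", "step": 0}

pid = os.fork()                  # one process becomes two identical copies
if pid == 0:
    state["universe"] = "B"      # the child diverges into its own branch
state["step"] += 1

print(f"pid={os.getpid()} universe={state['universe']} step={state['step']}")
# Parent and child each print their own line; after the fork, neither
# branch can see the other's changes - loosely like the Many Worlds picture.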

And then there is “configuration space.”  Nothing more complex than multiple instances of the reality program running, with the conscious entity having the ability to move between them, experiencing reality and all the experimental mysteries of Quantum Mechanics.

Hey physicists – get your heads out of the physics books and start thinking about computer science!

(thanks to Poet1960 for allowing me to use his great artwork)
