The Ultimate Destiny of the Nature of Matter is Something Very Familiar

Extrapolation is a technique for projecting a trend into the future.  It has been used liberally by economists, futurists, and other assorted big thinkers for many years, to project population growth, food supply, market trends, singularities, technology directions, skirt lengths, and other important trends.  It goes something like this:

If a city’s population has been growing by 10% per year for many years, one can safely predict that it will be around 10% higher next year, 21% higher in two years, and so on.  Or, if chip density has been doubling every two years (as it has for the past 40 years), one can predict that it will be 8 times greater than today in six years (Moore’s Law).  Ray Kurzweil and other Singularity fans extrapolate technology trends to conclude that our world as we know it will come to an end in 2045 in the form of a technological singularity.  Of course, there are always unknown and unexpected events that can cause these predictions to be too low or too high, but given the information that is known today, extrapolation is still a useful technique.
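
For the curious, here is the arithmetic behind those two examples in a few lines of Python (purely illustrative; the 10% growth rate and the two-year doubling period are just the figures used above):

```python
# Toy illustration of simple extrapolation, using the example figures above.

def compound_growth(initial, annual_rate, years):
    """Project a quantity that grows by a fixed percentage each year."""
    return initial * (1 + annual_rate) ** years

def doubling_growth(initial, doubling_period_years, years):
    """Project a quantity that doubles every fixed number of years (Moore's Law style)."""
    return initial * 2 ** (years / doubling_period_years)

# A city growing 10% per year: ~10% higher in one year, ~21% in two.
for y in (1, 2):
    print(f"Population after {y} yr(s): {compound_growth(100, 0.10, y):.0f}% of today")

# Chip density doubling every two years: 8x today's density takes six years.
for y in (2, 4, 6):
    print(f"Chip density after {y} yrs: {doubling_growth(1, 2, y):.0f}x today")
```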

To my knowledge, extrapolation has not really been applied to the problem that I am about to present, but I see no reason why it couldn’t give an interesting projection…

…for the nature of matter.

In ancient Greece, Democritus put forth the idea that solid objects were composed of atoms of that element or material, either jammed tightly together, as in the case of a solid object, or separated by a void (empty space).  These atoms were thought to be little indivisible billiard-ball-like objects made of some sort of “stuff.”  Thinking this through a bit, it becomes apparent that if atoms were spherical and crammed together in the densest possible fashion, then matter would fill at most 74% of the space that it takes up (the limit for close-packed spheres), the rest being empty space.  So, for example, a solid bar of gold was really only 74% gold “stuff,” at most.

That view of matter was resurrected by John Dalton in the early 1800s and revised once J. J. Thomson discovered the electron.  At that point, atoms were thought to look like plum pudding, with electrons embedded in a positively charged pudding.  Still, the density of “stuff” didn’t change, at least until the early 1900s, when Ernest Rutherford determined that atoms were actually composed of a tiny dense nucleus surrounded by a shell of electrons.  Further measurements revealed that these subatomic particles (protons, electrons, and later, neutrons) were actually very tiny compared to the overall atom and, in fact, most of the atom was empty space.  That model, coupled with the realization that atoms in a solid actually have some distance between them, completely changed our view of how dense matter is.  It turned out that in our gold bar only about 1 part in 10^15 was “stuff.”

That was the picture until the mid-1960s, when the quark model was proposed, which says that protons and neutrons are each composed of three quarks.  As the theory (now folded into quantum chromodynamics, or QCD) is fairly well accepted and some estimates have been made of quark sizes, one can calculate that since quarks are between a thousand and a million times smaller than the subatomic particles that they make up, matter is 10^9 to 10^18 times more tenuous than previously thought.  Hence our gold bar is now only about 1 part in 10^30 (give or take a few orders of magnitude) “stuff,” the rest being empty space.  By way of comparison, about 1.3 × 10^32 grains of sand would fit inside the earth.  So matter is roughly as dense with “stuff” as our entire planet would be if it contained a single grain of sand.

So now we have three data points to start our extrapolation.  Since the fraction of “stuff” that matter is made of is shrinking exponentially over time, we can’t plot our trend on a linear scale; we need a logarithmic one.

And now, of course, we have string theory, which says that all subatomic particles are really just bits of string vibrating at specific frequencies, each string possibly having a width of the Planck length.  If so, that would make subatomic particles themselves empty space except for roughly 1 part in 10^38, leaving our gold bar with just 1 part in 10^52 of “stuff.”
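
Here is a rough sketch of that extrapolation in Python.  The “stuff” fractions are the ones estimated above; the dates I have attached to each model are approximate and purely my own choice for illustration:

```python
import math

# Rough "stuff" fraction of a gold bar under each model of matter
# (fractions from the text; dates are approximate, for illustration only).
data = [
    (1803, 0.74),    # Dalton's solid, close-packed atoms
    (1911, 1e-15),   # Rutherford's nuclear atom
    (1964, 1e-30),   # the quark model
    (1985, 1e-52),   # string theory
]

# Work in log space, since the fraction spans dozens of orders of magnitude.
xs = [year for year, frac in data]
ys = [math.log10(frac) for year, frac in data]

# Ordinary least-squares fit of log10(fraction) vs. year.
n = len(xs)
mean_x, mean_y = sum(xs) / n, sum(ys) / n
slope = sum((x - mean_x) * (y - mean_y) for x, y in zip(xs, ys)) / \
        sum((x - mean_x) ** 2 for x in xs)
intercept = mean_y - slope * mean_x

print(f"Trend: the 'stuff' fraction drops ~{-slope:.2f} orders of magnitude per year")
print(f"Extrapolated fraction in 2050: about 1e{slope * 2050 + intercept:.0f}")
# The exact numbers don't matter; the point is that the trend line heads toward zero.
```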

Gets kind of ridiculous, doesn’t it?  Doesn’t anyone see where this is headed?

In fact, if particles are composed of strings, why do we even need the idea of “stuff” at all?  Isn’t it enough to define the different types of matter by a single number, the frequency at which the string vibrates?

What is matter anyway?  Its defining property, mass, is just a number assigned to an object, a number that determines how that object behaves in a gravitational field.  In other words, it is just a rule.

We don’t really experience matter directly.  What we experience is electromagnetic radiation influenced by some object that we call matter (the visual sense), and the effect of the electromagnetic force rule, the repulsion of charges between the electron shells of the atoms in our fingers and the electron shells of the atoms in the object (the tactile sense).

In other words, rules.

In any case, if you extrapolate our scientific progress, it is easy to see that the ratio of “stuff” to “space” is trending toward zero.  Which means what?

That matter is most likely just data.  And the forces that cause us to experience matter the way we do are just rules about how data interacts with itself.

Data and Rules – that’s all there is.

Oh yeah, and Consciousness.

My Body, the Avatar

Have you ever wondered how much information the human brain can store?  A little analysis reveals some interesting data points…

The human brain contains an estimated 100 trillion synapses.  There doesn’t appear to be a finer level of structure to the neural cells, so this represents the maximum number of memory elements that a brain can hold.  Assume for a moment that each synapse can hold a single bit; then the brain’s capacity would be 100 trillion bits, or about 12.5 terabytes. There may be some argument that there is actually a distribution of brain function, or redundancy of data storage, which would reduce the memory capacity of the brain.  On the other hand, one might argue that synapses may not be binary and hence could hold somewhat more information.  So it seems that 12.5 TB is a fairly good and conservative estimate.

It has also been estimated (see “On the Information Processing Capabilities of the Brain: Shifting the Paradigm” by Simon Berkovich) that, in a human lifetime, the brain processes 3 million times that much data.  This all makes sense if we assume that most (99.99997%) of our memory data is discarded over time, due to lack of need.
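
A quick back-of-the-envelope check of those figures (this is just the arithmetic implied above; the synapse count, the one-bit-per-synapse assumption, and the 3-million multiplier are the estimates already cited):

```python
# Back-of-the-envelope check of the storage figures above.

synapses = 100e12            # ~100 trillion synapses (estimate cited above)
bits_per_synapse = 1         # assume each synapse holds a single bit
capacity_bits = synapses * bits_per_synapse
capacity_tb = capacity_bits / 8 / 1e12      # bits -> bytes -> terabytes

lifetime_multiplier = 3e6    # Berkovich's estimate: lifetime data ~3 million x capacity
retained_fraction = 1 / lifetime_multiplier

print(f"Brain capacity: {capacity_tb:.1f} TB")                          # ~12.5 TB
print(f"Fraction of lifetime data that could be kept: {retained_fraction:.5%}")
print(f"Fraction that would have to be discarded: {1 - retained_fraction:.5%}")  # ~99.99997%
```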

But then, how would we explain the exceptional capabilities of autistic savants, or people with hyperthymesia or eidetic memory (total recall)?  The memories that these individuals retrieve could not all be stored in the brain at the same time.  In other words, memories, the record of our experiences, are not solely stored in the brain.  Some may be, such as those most recently used or most frequently needed.

Those who are trained in Computer Science will recognize the similarities between these characteristics and the idea of a cache memory, a high speed storage device that stores the most recently used, or frequently needed, data for quick access.
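
For readers who aren’t computer scientists, here is a minimal sketch of the cache idea (hypothetical code, written only to illustrate the analogy, not to model a brain):

```python
from collections import OrderedDict

class LRUCache:
    """Tiny least-recently-used cache: keeps only the most recently accessed
    items locally; everything else lives in a larger backing store."""

    def __init__(self, capacity, backing_store):
        self.capacity = capacity
        self.backing_store = backing_store   # the "non-local" storage
        self.local = OrderedDict()           # the small, fast local store

    def get(self, key):
        if key in self.local:                # fast path: recently used
            self.local.move_to_end(key)
            return self.local[key]
        value = self.backing_store[key]      # slow path: fetch from backing store
        self.local[key] = value
        if len(self.local) > self.capacity:  # evict the least recently used item
            self.local.popitem(last=False)
        return value

# Usage: the backing store holds every "memory"; the cache holds only a few.
all_memories = {f"day_{i}": f"what happened on day {i}" for i in range(10000)}
cache = LRUCache(capacity=3, backing_store=all_memories)
print(cache.get("day_42"))     # fetched from the backing store
print(cache.get("day_42"))     # now served locally
print(list(cache.local))       # only the handful of recently used items
```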

As cardiologist and science researcher Pim van Lommel said, “the computer does not produce the Internet any more than the brain produces consciousness.”

Why is this so hard to believe?

After all, there is no real proof that all memories are stored in the brain.  There is only research showing that some memories are stored in the brain and can be triggered by electrically stimulating certain portions of the cerebral cortex.  By the argument above, I would say that experimental evidence and logic are on the side of non-local memory storage.

In a similar manner, while there is zero evidence that consciousness is an artifact of brain function, Dr. van Lommel has shown that there is extremely strong evidence that consciousness is not a result of brain activity.  It is enabled by the brain, but not seated there.

These two arguments, for the non-local seat of consciousness and the non-local seat of memories, are congruent, and together they make the case that our bodies are simply avatars all the more compelling.

Things We Can’t Feel – The Mystery Deepens

In my last blog “Things We Can’t See”, we explored the many different ways that our eyes, brains, and/or technology can fool us into seeing something that isn’t there or not seeing something that is.

So apparently, our sense of sight is not necessarily the most reliable sense in terms of identifying what is and isn’t in our objective reality.  We would probably suspect that our sense of touch is fairly foolproof; that is, if an object is “there”, we can “feel” it, right?

Not so fast.

First of all, we have a lot of the same problems with the brain as we did with the sense of sight.  The brain processes all of that sensory data from our nerve endings.  How do we know what the brain really does with that information?  Research shows that sometimes your brain can think that you are touching something that you aren’t, or vice versa.  People who have lost limbs still have sensations in their missing extremities.  Hypnosis has been shown to have a significant effect on pain control, which seems to indicate the mind’s capacity to override one’s tactile senses.  And virtual reality experiments have demonstrated that the mind can be fooled into feeling something that isn’t there.

In addition, technology can be made to create havoc with our sense of touch, although the most dramatic of such effects are decades into the future.  Let me explain…

Computer scientist J. Storrs Hall developed the concept of a “Utility Fog.”  Imagine a “nanoscopic” object called a Foglet, an intelligent nanobot capable of communicating with its peers and equipped with arms that can hook together to form larger structures.  Trillions of these Foglets could conceivably fill a room and not be at all noticeable as long as they were in “invisible mode.”  In fact, not only might they be programmed to appear transparent to the sight, but they might also be imperceptible to the touch.  This is not hard to imagine, if you allow that they could have sensors that detect your presence.  For example, if you punch your fist into a swarm of nanobots programmed to be imperceptible, they would sense your motion and move aside as you swung your fist through the air.  But at any point, they could conspire to form a structure, an impenetrable wall, for example.  And then your fist would be well aware of their existence.  In this way, technology may be able to have a dramatic effect on our ability to determine what is really “there.”

But even now, long before nanobot swarms are possible, the mystery really begins, as we have to dive deeply into what is meant by “feeling” something.

Feeling is the result of a part of our body coming in contact with another object.  That contact is “felt” by the interaction between the molecules of the body and the molecules of the object.

Even solid objects are mostly empty space.  If subatomic particles, such as neutrons, are made of solid mass, like little billiard balls, then 99.999999999999% of normal matter would still be empty space.  That is, of course, unless those particles themselves are not really solid matter, in which case even more of the space is truly empty; more on that in a bit.
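
The rough arithmetic behind a figure like that looks something like this (the radii below are typical textbook orders of magnitude, my own inputs rather than numbers from this post):

```python
# Rough arithmetic behind the "mostly empty space" claim.
# Radii are typical textbook orders of magnitude (illustrative assumptions).

atom_radius_m    = 1e-10    # ~1 angstrom
nucleus_radius_m = 5e-15    # a few femtometers

# If the nucleus (plus point-like electrons) is the only "solid" part,
# the filled fraction of the atom's volume is the cube of the radius ratio.
filled_fraction = (nucleus_radius_m / atom_radius_m) ** 3
empty_fraction  = 1 - filled_fraction

print(f"Filled fraction of an atom: ~{filled_fraction:.1e}")    # ~1e-13
print(f"Empty fraction: {empty_fraction:.12%}")                 # a long string of nines
```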

So why don’t solid objects like your fist slide right through other solid objects like bricks?  Because of the electromagnetic repulsion between the electrons in the fist and the electrons in the brick.

But what about that neutron?  What is it made of?  Is it solid?  Is it made of the same stuff as all other subatomic particles?

The leading theories of matter do not favor the idea that subatomic particles are like little billiard balls of differing masses.  For example, string theorists speculate that all particles are made of the same stuff, namely vibrating bits of string, each vibrating at a different frequency.  The problem is, string theory is purely theoretical and falls more in the mathematical domain than the scientific one, inasmuch as there is no supporting experimental evidence for it.  If it does turn out to be true, even the neutron is mostly empty space, because the string is supposedly one-dimensional, with a theoretical cross section of about a Planck length.

Here’s where it gets really interesting…

Neutrinos are an extremely common yet extremely elusive particle of matter.  About 100 trillion neutrinos generated in the sun pass through our bodies every second.  Yet they barely interact at all with ordinary matter.  Neutrino capture experiments consist of configurations such as a huge underground tank containing 100,000 gallons of tetrachloroethylene buried nearly a mile below the surface of the earth.  Roughly 100 billion neutrinos strike every square centimeter of the tank per second.  Yet any particular molecule of tetrachloroethylene is likely to interact with a neutrino only once every 10^36 seconds or so (billions of billions of times the age of the universe).
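
Just to sanity-check the scale of that last parenthetical (the ~13.8-billion-year age of the universe is my own input, not a figure from the post):

```python
# Order-of-magnitude check on the neutrino interaction timescale quoted above.

seconds_per_year  = 3.156e7
age_of_universe_s = 13.8e9 * seconds_per_year   # ~4.4e17 seconds

interaction_interval_s = 1e36   # one interaction per molecule per ~10^36 s (figure above)

ratio = interaction_interval_s / age_of_universe_s
print(f"That wait is ~{ratio:.1e} times the age of the universe")   # ~2e18
```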

The argument usually given for neutrinos’ elusiveness is that they are nearly massless (and therefore not easily captured by a nucleus) and charge-less (and therefore not subject to the electromagnetic force).  Then again, photons are massless and charge-less and are easily captured, to which anyone who has spent too much time in the sun can attest.  So there has to be some other reason that we can’t detect neutrinos.  Unfortunately, given the current understanding of particle physics, no good answer is forthcoming.

And then there is dark matter.  This concept is the current favorite explanation for some anomalies in the orbital speeds of galaxies.  Gravity acting on the visible matter can’t explain the anomalies, so dark matter is inferred.  If it really exists, it represents about 83% of the matter in the universe, yet it doesn’t interact through any of the known forces with the exception of gravity.  This means that dark matter is all around us; we just can’t see it or feel it.

So it seems that modern physics allows for all sorts of types of matter that we can’t see or feel.  When you get down to it, the reason for this is that we don’t understand what matter is at all.  According to the Standard Model of particle physics, particles should have no mass unless there is a special quantum field that pervades the universe and gives rise to mass when it interacts with those particles.  Unfortunately, for that to have any credibility, the signature particle, the Higgs boson, has to exist, and thus far it has eluded even the most powerful particle colliders.  One alternative theory of matter has it being an emergent property of particle fluctuations in the quantum vacuum.

For a variety of reasons, some of which are outlined in “The Universe – Solved!” and many others which have come to light since I wrote that book, I suspect that ultimately matter is simply a property of an entity that is described purely by data and a set of rules, driven by a complex computational mechanism.  Our attempt to discover the nature of matter is synonymous with our attempt to discover those rules and associated fundamental constants (data).

In terms of other things that we can’t perceive, new age enthusiasts might call out ghosts, spirits, auras, and all sorts of other mysterious invisible and tenuous entities.

Given that we know that things exist that we can’t perceive, one has to wonder if it might be possible for macroscopic objects, or even macroscopic entities driven by energies similar to those that drive us, to be made from stuff that we can only tenuously detect, not unlike neutrinos or dark matter.  Scientists speculate about multiple dimensions and parallel universes via Hilbert space and other such constructs.  If such things exist (and wouldn’t it be hypocritical of anyone to speculate or work out the math for such things if it weren’t possible for them to exist?), the rules that govern our interaction with them, across the dimensions, are clearly not at all understood.  That doesn’t mean that they aren’t possible.

In fact, the scientific world is filled with trends leading toward the implication of an information-based reality.

In which almost anything is possible.

Things We Can’t See

When you think about it, there is a great deal out there that we can’t see.

Our eyes only respond to a very narrow range of electromagnetic radiation.  The following diagram demonstrates just how narrow our range of vision is compared to the overall electromagnetic spectrum.

[Diagram: the electromagnetic spectrum]

So we can’t see anything that generates or reflects wavelengths equal to or longer than infrared, as the following image demonstrates.  Even the Hubble Space Telescope can’t see the distant infrared galaxy that the Spitzer Space Telescope can see with its infrared sensors.

[Image: a distant galaxy as seen in visible and infrared light; source: http://9-4fordham.wikispaces.com/Electro+Magnetic+Spectrum+and+light]

And we can’t see anything that generates or reflects wavelengths equal to or shorter than ultraviolet, as images from NASA demonstrate.  Only instruments with special sensors that can detect ultraviolet or X-rays can see some of the objects in the sky.

Of course, we can’t see things that are smaller in size than about 40 microns, which includes germs and molecules.

We can’t see things that are camouflaged by technology, such as the Mercedes in the following picture.

[Photo: the “invisible” Mercedes]

Sometimes, it isn’t our eyes that can’t sense something that is right in front of us, but rather our brain.  We actually stare at our noses all day long but don’t notice, because our brains effectively subtract them out of our perception, given that we don’t really need the information.  Our brains also fill in the imagery that is missing from the blind spot that we all have where the optic nerve meets the retina.

In addition to these limitations of static perception, there are significant limitations to how we perceive motion.  It actually does not take much in terms of speed to render something invisible to our perception.

Clearly, we can’t see something zip by as fast as a bullet, which might typically move at speeds of 700 mph or more.  And yet, a plane moving at 700 mph is easy to see from a distance.  Our limitations of motion perception are a function of the speed of the object and the size of the image that it casts upon the retina; e.g., for a given speed, the further away something is, the larger it has to be to register in our conscious perception.  This is because our perception of reality refreshes roughly 13 to 15 times per second, i.e. about every 70 to 77 ms.  So, if something is moving so fast that it passes through our field of view in less than 77 ms or so, or it is so small that it doesn’t make a significant impression on our conscious perception within that time period, we simply won’t be aware of its existence.
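
To make that concrete, here is a rough worked example (the 700 mph figure and the ~77 ms refresh are from the paragraph above; the viewing distances and the roughly one-radian field of view are my own illustrative assumptions):

```python
# Rough illustration of the speed/distance tradeoff described above.

speed_mps = 700 * 0.44704        # 700 mph in meters per second (~313 m/s)
frame_s   = 0.077                # ~77 ms perceptual refresh, per the text
field_of_view_rad = 1.0          # roughly 60 degrees of central vision (assumed)

for label, distance_m in [("bullet passing 10 m away", 10),
                          ("airliner 5 km away", 5000)]:
    time_in_view_s = field_of_view_rad * distance_m / speed_mps
    verdict = "easily seen" if time_in_view_s > frame_s else "gone between 'frames'"
    print(f"{label}: in view ~{time_in_view_s:.3f} s -> {verdict}")
```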

It makes one wonder what kinds of things may be in our presence but moving too quickly to be observed.  Some researchers have captured objects on high-speed cameras for which there appears to be no natural explanation; for example, the strange object captured on official NBC video at an NFL football game in 2011.  Whether these objects have mundane explanations or might be hints of something a little more exotic, one thing is for certain: our eyes cannot capture them.  They are effectively invisible to us, yet they exist in our reality.

In my next blog we will dive down the rabbit hole and explore the real possibility that things exist around us that we can’t even touch.

FTL Neutrinos are not Dead Yet!

So, today, superluminal neutrinos are out.  Another experiment called ICARUS, at the same laboratory whence the OPERA results came, recently announced findings that neutrinos do not travel faster than light.

It is a little surprising how eager scientists were to get that experimental anomaly behind them.  Almost as if the whole idea so threatened the foundation of their world that they couldn’t wait to jump on the anti-FTL-neutrino bandwagon.  For a complete non sequitur, I am reminded of the haste with which Oswald was fingered as JFK’s assassin.  No trial needed.  Let’s just get this behind us.

A blog on the Discover Magazine site referred to this CERN announcement as “the nail in the coffin” of superluminal neutrinos.  Nature magazine reported that Adam Falkowski, a physicist from the University of Paris-South, said, “The OPERA case is now conclusively closed.”

Really?

Since when are two conflicting results an indication that one of them is conclusive?  It seems to me that until the reason for OPERA’s superluminal results is determined, the case is still open.

In software engineering, there is such a thing as a non-reproducible defect.  A record of the defect is opened, and if the defect is not reproducible, it just sits there.  Over time, if the defect is no longer observed, it becomes less and less relevant and its priority decreases.  Eventually, one assumes that it was due to “user error” or something, and it loses status as a bona fide defect.

The same should hold for anomalous FTL events.  If they are reproducible, we have new physics.  If not, we still have an anomaly to be investigated and root-cause analyzed.

In fact, interestingly enough, the arXiv article shows that the average neutrino arrival in the new experiment is still 0.3 ns earlier than light speed would predict, and more neutrinos were reported as faster than light than slower.  Admittedly, this is well within the experimental error bars, but it does seem to indicate that neutrinos travel essentially at c, the speed of light, which would mean that they should not have any mass.  Yet other experiments indicate that they do indeed have mass.
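
To put 0.3 ns in perspective (the ~730 km CERN-to-Gran-Sasso baseline is the well-known distance for these experiments; it is my addition, not a figure from the article):

```python
# How small is 0.3 ns over the CERN-to-Gran-Sasso baseline?

c = 2.998e8                      # speed of light, m/s
baseline_m = 730e3               # ~730 km (assumed baseline, see note above)
light_time_s = baseline_m / c    # ~2.4 ms

early_arrival_s = 0.3e-9         # the 0.3 ns average quoted above

print(f"Light travel time over the baseline: {light_time_s * 1e3:.2f} ms")
print(f"Arriving 0.3 ns early is a fractional speed excess of ~{early_arrival_s / light_time_s:.1e}")
```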

And then there was the result of the MINOS experiment in 2007, which also indicated faster-than-light neutrinos, although not at as statistically significant a level as OPERA.

So, we are still left with many neutrino anomalies:

– Two experiments that indicate faster-than-light speeds.
– Conflicting experiments regarding the possibility of neutrino mass.
– Mysterious transformations of one type of neutrino into another mid-flight.
– And the very tenuous nature of their interaction with “normal” matter, not unlike dark matter.

Theories abound regarding the possibilities of neutrinos or dark matter existing in, or traveling through, higher dimensions.

How can anyone be so confident that there is a nail in the coffin of any scientific anomaly?

The Observer Effect and Entanglement are Practically Requirements of Programmed Reality

Programmed Reality has been an incredibly successful concept in terms of explaining the paradoxes and anomalies of Quantum Mechanics, including non-Reality, non-Locality, the Observer Effect, Entanglement, and even the Retrocausality of the Delayed Choice Quantum Eraser experiment (an extension of John Wheeler’s delayed-choice idea).

I came up with those explanations by thinking about how Programmed Reality could explain such curiosities.

But I thought it might be interesting to view the problem in the reverse manner.  If one were to design a universe-simulating Program, what kinds of curiosities might result from an efficient design?  (Note: I fully realize that any entity advanced enough to simulate the universe probably has a computational engine that is far more advanced than we can even imagine; most definitely not of the von Neumann variety.  Yet, we can only work with what we know, right?)

So, if I were to create such a thing, for instance, I would probably model data in the following manner:

For any space unobserved by a conscious entity, there is no sense in creating the reality for that space in advance.  It would unnecessarily consume too many resources.

For example, consider the cup of coffee on your desk.  Is it really necessary to model every single subatomic particle in the cup of coffee in order to interact with it in the way that we do?  Of course not.  The total amount of information contained in that cup of coffee necessary to stimulate our senses in the way that it does (generate the smell that it does; taste the way it does; feel the way it does as we drink it; swish around in the cup the way that it does; have the little nuances, like tiny bubbles, that make it look real; have the properties of cooling at the right rate; etc.) might be 10 MB or so.  Yet, the total potential information content in a cup of coffee is 100,000,000,000 MB, so a compression ratio of roughly ten billion could be applied to an ordinary object.

But once you decide to isolate an atom in that cup of coffee and observe it, the Program would then have to establish a definitive position for that atom, effectively resulting in the collapse of the wave function, or decoherence.  Moreover, the complete behavior of the atom, at that point, might be forever under the control of the Program.  After all, why delete the model once it has been observed, in the (probably fairly likely) event that it will be observed again at some point in the future?  Thus, the atom would have to be described by a finite state machine.  Its behavior would be decided by randomly picking values of the parameters that drive that behavior, such as atomic decay.  In other words, we have created a little mini finite state machine.

So, the process of “zooming in” on reality in the Program would have to result in exactly the type of behavior observed by quantum physicists.  In other words, in order to be efficient, resource-wise, the Program decoheres only the space and matter that it needs to.

Let’s say we zoom in on two particles at the same time; two that are in close proximity to each other.  Both would have to be decohered by the Program.  The decoherence would result in the creation of two mini finite state machines.  Using the same random number seed for both will cause the state machines to forever behave in an identical manner.

No matter how far apart you take the particles.  i.e…

Entanglement!

So, Observer Effect and Entanglement might both be necessary consequences of an efficient Programmed Reality algorithm.
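
A toy illustration of the shared-seed idea (purely a sketch; the “measurement” here is an ordinary pseudo-random coin flip, not real quantum mechanics):

```python
import random

class ParticleStateMachine:
    """Toy 'decohered particle': its measured properties come from a
    pseudo-random number generator initialized with a given seed."""

    def __init__(self, seed):
        self.rng = random.Random(seed)    # deterministic, given the seed

    def measure_spin(self):
        # Each measurement advances the state machine by one step.
        return self.rng.choice(["up", "down"])

# Two particles "decohered" together are given the same seed...
shared_seed = 123456789           # hypothetical value chosen by the Program
particle_a = ParticleStateMachine(shared_seed)
particle_b = ParticleStateMachine(shared_seed)

# ...so their measurement outcomes agree forever, no matter how far apart
# the particles are taken, because distance appears nowhere in the code.
for _ in range(5):
    a, b = particle_a.measure_spin(), particle_b.measure_spin()
    print(a, b, "(correlated)" if a == b else "(uncorrelated)")
```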

Is Cosmology Heading for a Date with a Creator?

According to a recent article in New Scientist magazine, physicists “can’t avoid a creation event.”  (Sorry, you have to be a subscriber to read the full article.)  It boils down to whether the universe could have been eternal into the past.  If it was not eternal, the argument goes, then there needs to be a creation event.  Even uber-atheist Stephen Hawking acknowledges that a beginning to the universe would be “a point of creation… where science broke down. One would have to appeal to religion and the hand of God.”

Apparently, there are three established theories for how to get around the idea of a creator of the big bang.  But cosmologist Alexander Vilenkin demonstrated last week how all of those theories now necessitate a beginning:

1. The leading idea has been the possibility that the universe has been eternally expanding (inflating).  Recent analysis, however, shows that inflation cannot be extended eternally into the past.

2. Another possibility was the cyclic model, but Vilenkin has shot a hole in that one as well, courtesy of the second law of thermodynamics.  Either every cycle would have to be more disordered than the last, in which case after an infinite number of cycles our current cycle should have reached heat death (it hasn’t), or the universe would have to be getting bigger with each cycle, implying a creation event at some cycle in the past.

3. The final hope for the atheistic point of view was a lesser known proposal called the cosmic egg.  But Vilenkin showed last year that this could not have existed eternally due to quantum instabilities.

Is science slowly coming to terms with the idea of an intelligent designer of the universe?  The evidence is overwhelming and Occam’s Razor points to a designer, yet science clings to the anti-ID point of view as if it is a religion.

Ironic.

Pathological Skepticism

“All great truths began as blasphemies” – George Bernard Shaw

  • In the 1800s, the scientific community viewed reports of rocks falling from the sky as “pseudoscience” and those who reported them as “crackpots,” only because the reports didn’t fit in with the prevailing view of the universe. Today, of course, we recognize that these rocks could be meteorites, and such reports are now properly investigated.
  • In 1827, Georg Ohm’s initial publication of what became “Ohm’s Law” met with ridicule and dismissal, and was called “a web of naked fantasies.” The German Minister of Education proclaimed that “a professor who preached such heresies was unworthy to teach science.” Twenty years passed before scientists began to recognize its importance.
  • Louis Pasteur’s germ theory was called “ridiculous fiction” by Pierre Pachet, Professor of Physiology at Toulouse, in 1872.
  • Spanish researcher Marcelino de Sautuola discovered cave art in Altamira cave (northern Spain), which he recognized as Stone Age, and published a paper about it in 1880.  His integrity was violently attacked by the archaeological community, and he died disillusioned and broken.  He was vindicated only 10 years after his death.
  • Lord Haldane, the Minister of War in Britain, said in 1907 that “the aeroplane will never fly.”  Ironically, this was four years after the Wright Brothers made their first successful flight at Kitty Hawk, North Carolina.  After Kitty Hawk, the Wrights flew in open fields next to a busy rail line in Dayton, Ohio for almost an entire year, while US authorities refused to come to the demonstrations and Scientific American published stories about “The Lying Brothers.”
  • In 1964, physicist George Zweig proposed the existence of quarks.  As a result, he was rejected for a position at a major university and considered a “charlatan.”  Today, of course, quarks are an accepted part of the standard model of particle physics.

Note that these aren’t just passive disagreements.  The skeptics use active and angry language, with words like “charlatan,” “ridiculous,” “lying,” “crackpot,” and “pseudoscience.”

This is partly due to a natural psychological effect, known as “fear of the unknown” or “fear of change.”  Psychologists who have studied human behavior have more academic-sounding names for it, such as the “Mere Exposure Effect,” the “Familiarity Principle,” or neophobia (something that might have served Agent Smith well).  Ultimately, this may be an artifact of evolution.  Hunter-gatherers did not pass on their genes if they had a habit of eating weird berries, venturing too close to saber-toothed cats, or engaging in other unconventional activities.  But we are no longer hunter-gatherers.  For the most part, we shouldn’t fear the unknown.  We should feel empowered to challenge assumptions.  The scientific method can weed out the bad ideas naturally.

But, have you also noticed how the agitation ratchets up the more you enter the realm of the “expert?”

“The expert knows more and more about less and less until he knows everything about nothing.” – Mahatma Gandhi

This is because the expert may have a lot to lose by straying too far from the status quo.  Research funding, tenure, jobs, and reputations are all at stake.  This is unfortunate, because it feeds the unhealthy behavior.

So I thought I would do my part to remind experts and non-experts alike that breakthroughs only occur when we challenge conventional thinking, and we shouldn’t be afraid of them.

The world is full of timid “experts,” but nobody will ever hear of them.  People will hear about the brave ones, though, the ones who didn’t fear to challenge the status quo: people like Copernicus, Einstein, Georg Ohm, Steve Jobs, and Elon Musk.

And it isn’t like we are so enlightened today that such pathological skepticism no longer occurs.

Remember Stanley Pons and Martin Fleischmann?  Respected electrochemists, ridiculed out of their jobs and their country by skeptics.  Even “experts” violently contradicted each other:

  • “It’s pathological science,” said physicist Douglas Morrison, formerly of CERN. “The results are impossible.”
  • “There’s very strong evidence that low-energy nuclear reactions do occur,” said George Miley (who received the Edward Teller Medal for research in hot fusion). “Numerous experiments have shown definitive results – as do my own.”

Some long-held assumptions are being overturned as we speak.  Like LENR (Low Energy Nuclear Reactions), the new, less provocative name for cold fusion.

And maybe the speed of light as an ultimate speed limit.

These are exciting times for science and technology.  Let’s stay open minded enough to keep them moving.

Yesterday’s Sci-Fi is Tomorrow’s Technology

It is the end of 2011 and it has been an exciting year for science and technology.  Announcements about artificial life, earthlike worlds, faster-than-light particles, clones, teleportation, memory implants, and tractor beams have captured our imagination.  Most of these things would have been unthinkable just 30 years ago.

So, what better way to close out the year than to take stock of yesterday’s science fiction in light of today’s reality and tomorrow’s technology?  Here is my take:

[Image: yesterday’s sci-fi concepts and where they stand in today’s and tomorrow’s technology]

Time to Revise Relativity?: Part 2

In “Time to Revise Relativity: Part 1”, I explored the idea that faster-than-light (FTL) travel might be permitted by Special Relativity without necessitating the violation of causality, a view not held by most mainstream physicists.

The reason this idea is not well supported has to do with the fact that Einstein’s postulate that light travels at the same speed in all reference frames gave rise to all sorts of conclusions about reality, such as the idea that it is all described by a space-time that has fundamental limits to its structure.  The Lorentz factor is a consequence of this view of reality, and so its use is limited to subluminal speeds; it is undefined for calculating relativistic distortions past c.

Lorentz factor: γ = 1 / √(1 − v²/c²)

So then, what exactly is the roadblock to exceeding the speed of light?

Yes, there may be a natural speed limit to the transmission of known forces in a vacuum, such as the electromagnetic force.  And there may well be a natural limit to the speed of an object at which we can make observations utilizing known forces.  But could there be unknown forces that are not governed by the laws of Relativity?

The current model of physics, called the Standard Model, incorporates the idea that all known forces are carried by corresponding particles, which travel at the speed of light if massless (like photons and gluons) or at less than the speed of light if they have mass (like the W and Z bosons), all consistent with, or derived from, the assumptions of relativity.  Problem is, there are all sorts of “unfinished business” and inconsistencies in the Standard Model.  Gravitons have yet to be discovered, the Higgs boson has yet to be found, gravity and quantum mechanics are incompatible, and many things just don’t have a place in the Standard Model, such as neutrino oscillations, dark energy, and dark matter.  Some scientists even speculate that dark matter is due to a flaw in the theory of gravity.  So, given the incompleteness of that model, how can anyone say for certain that all forces have been discovered and that Einstein’s postulates are sacrosanct?

Given that barely 100 years ago we didn’t know any of this stuff, imagine what changes to our understanding of reality might happen in the next 100 years.  Such as these Wikipedia entries from the year 2200…

–       The ultimate constituent of matter is nothing more than data

–       A subset of particles and corresponding forces that are limited in speed to c represent what used to be considered the core of the so-called Standard Model and are consistent with Einstein’s view of space-time, the motion of which is well described by the Special Theory of Relativity.

–       Since then, we have realized that Einsteinian space-time is an approximation to the truer reality that encompasses FTL particles and forces, including neutrinos and the force of entanglement.  The beginning of this shift in thinking occurred due to the first superluminal neutrinos found at CERN in 2011.

So, with that in mind, let’s really explore a little about the possibilities of actually cracking that apparent speed limit…

For purposes of our thought experiments, let’s define S as the “stationary” reference frame in which we are making measurements and R as the reference frame of the object undergoing relativistic motion with respect to S.  If a mass m is traveling at c with respect to S, then measuring that mass in S (via whatever methods could be employed to measure it; energy, momentum, etc.) will give an infinite result.  However, in R, the mass doesn’t change.

What if m went faster than c, such as might be possible with a sci-fi concept like a “tachyonic afterburner”?  What would an observer at S see?

Going by our relativistic equations, m now becomes imaginary when measured from S because the argument in the square root of the mass correction factor is now negative.  But what if this asymptotic property really represents more of an event horizon than an impenetrable barrier?  A commonly used model for the event horizon is the point on a black hole at which gravity prevents light from escaping.  Anything falling past that point can no longer be observed from the outside.  Instead it would look as if that object froze on the horizon, because time stands still there.  Or so some cosmologists say.  This is an interesting model to apply to the idea of superluminality as mass m continues to accelerate past c.
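
Here is what the math itself does on either side of c, using complex arithmetic so that speeds past c don’t simply crash the calculation (a plain evaluation of the Lorentz factor, nothing more):

```python
import cmath

c = 1.0   # work in units where c = 1

def lorentz_factor(v):
    """gamma = 1 / sqrt(1 - v^2/c^2), evaluated with complex math."""
    return 1 / cmath.sqrt(1 - (v / c) ** 2)

for v in (0.5, 0.9, 0.99, 0.999, 1.0, 1.5, 2.0):
    try:
        g = lorentz_factor(v)
    except ZeroDivisionError:
        print(f"v = {v}c : gamma -> infinity")        # undefined exactly at c
        continue
    if abs(g.imag) < 1e-12:
        print(f"v = {v}c : gamma = {g.real:.3f}")     # ordinary real value below c
    else:
        print(f"v = {v}c : gamma is imaginary (magnitude {abs(g):.3f})")  # past c
```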

From the standpoint of S, the apparent mass is now infinite, but that is ultimately based on the fact that we can’t perceive speeds past c.  Once something goes past c, one of two things might happen.  The object might disappear from view, because the light it emits (the light that would allow us to observe it) can’t keep up with it.  Alternatively, invoking the postulate that light speed is the same in all reference frames, the object might behave like it does on the event horizon of a black hole: forever frozen, from the standpoint of S, with the properties that it had when it hit light speed.  From R, everything could be hunky dory, just cruising along at warp speed.  There is no need to say that this is impossible because mass can’t exceed infinity, because from S, the object froze at the event horizon.  Relativity made all of the correct predictions of properties, behavior, energy, and mass prior to light speed.  Yet, with this model, it doesn’t preclude superluminality.  It only precludes the ability to make measurements beyond the speed of light.

That is, of course, unless we can figure out how to make measurements utilizing a force or energy that travels at speeds greater than c.  If we could, those measurements would yield results with correction factors only at speeds relatively near THAT speed limit.

Let’s imagine an instantaneous communication method.  Could there be such a thing?

One possibility might be quantum entanglement.  The Delayed Choice Quantum Eraser experiment (an extension of John Wheeler’s delayed-choice idea) seems to imply non-causality and the ability to erase the past.  Integral to this experiment is the concept of entanglement.  So perhaps it is not a stretch to imagine that entanglement might embody a communication method that creates some strange effects when combined with observational effects based on traditional light-and-sight methods.

What would the existence of that method do to relativity?   Nothing, according to the thought experiments above.

There are, however, some relativistic effects that seem to stick, even after everything has returned to the original reference frame.  This would seem to violate the idea that the existence of an instantaneous communication method invalidates the need for relativistic correction factors applied to anything that doesn’t involve light and sight.

For example, there is the very real effect that clocks that were once moving at high speed (reference frame R) show a loss of time once they return to the reference frame S, fully explained by time dilation.  It would seem that, using this effect as a basis for a thought experiment like the twin paradox, there might be a problem with the event horizon idea.  For example, let us imagine Alice and Bob, both aged 20.  After Alice travels at (essentially) speed c to a star 10 light years away and returns, her age should still be 20, while Bob is now 40.  If we were to allow superluminal travel, it would appear that Alice would have to get younger, or something.  But, recalling the twin paradox, it is all about the relative observations that Bob, in reference frame S, and Alice, in reference frame R, make of each other.  Again, at superluminal speeds, Alice may appear to hit an event horizon according to Bob.  So, she will never drop below her original age.
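
The Alice-and-Bob numbers work out as follows just below light speed (the formulas blow up exactly at c, so v = 0.9999c stands in for “at speed c”):

```python
import math

# Twin-paradox arithmetic for the Alice and Bob example above.
distance_ly = 10      # one-way distance in light-years (from the text)
v = 0.9999            # speed as a fraction of c (stand-in for "essentially c")

bob_elapsed_years   = 2 * distance_ly / v              # round trip, Earth frame
gamma               = 1 / math.sqrt(1 - v ** 2)        # Lorentz factor
alice_elapsed_years = bob_elapsed_years / gamma        # proper time on the ship

print(f"Bob ages   {bob_elapsed_years:5.2f} years (20 -> {20 + bob_elapsed_years:.0f})")
print(f"Alice ages {alice_elapsed_years:5.2f} years (20 -> {20 + alice_elapsed_years:.1f})")
# As v -> c, Alice's elapsed time -> 0, matching the "she's still 20" claim above.
```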

But what about her?  From her perspective, her trip is instantaneous due to an infinite Lorentz contraction factor; hence she doesn’t age.  If she travels at 2c, her view of the universe might hit another event horizon, one that prevents her from experiencing any Lorentz contraction beyond c; hence, her trip will still appear instantaneous, no aging, no age reduction.

So why would an actual relativistic effect like reduced aging, occur in a universe where an infinite communication speed might be possible?  In other words, what would tie time to the speed of light instead of some other speed limit?

It may be simply because that’s the way it is.  It appears that relativistic equations may not necessarily impose a barrier to superluminal speeds, superluminal information transfer, or even acceleration past the speed of light.  In fact, if we accept that relativity says nothing about what happens past the speed of light, we are free to suggest that the observable effects freeze at c.  Perhaps traveling past c does nothing more than create unusual effects like disappearing objects or things freezing at event horizons until they slow back down to an “observable” speed.  We certainly don’t yet have enough evidence to investigate further.

But perhaps CERN has provided us with our first data point.

Time Warp