RIP Kardashev Civilization Scale

In 1964, Soviet astronomer Nikolai Kardashev proposed a model for categorizing technological civilizations.  His original paper defined three “Types”; a “Type 0” was later added by others to cover civilizations like ours.  Simplified, the levels are as follows:

Type 0 – Civilization that has not yet learned to utilize the full set of resources available to them on their home planet (e.g. oceans, tidal forces, geothermal forces, solar energy impinging upon the planet, etc.)

Type 1 – Civilization that fully harnesses, controls, and utilizes the resources of their planet.

Type 2 – Civilization that fully harnesses, controls, and utilizes the resources of their star system.

Type 3 – Civilization that fully harnesses, controls, and utilizes the resources of their galaxy.
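Carl Sagan later proposed a continuous version of the scale, interpolating between the types by total power consumption: K = (log10 P − 6) / 10, with P in watts. A quick sketch (the ~18 terawatt figure for humanity's current consumption is an approximate, assumed value):

```python
import math

def kardashev_rating(power_watts: float) -> float:
    """Sagan's continuous interpolation of the Kardashev scale.

    Type I ~ 10^16 W, Type II ~ 10^26 W, Type III ~ 10^36 W.
    """
    return (math.log10(power_watts) - 6) / 10

# Humanity's total power consumption: roughly 18 TW (assumed round figure).
print(f"Humanity: Type {kardashev_rating(1.8e13):.2f}")        # roughly Type 0.7
print(f"Capturing full solar output: Type {kardashev_rating(3.8e26):.2f}")
```

By this yardstick we sit somewhere around Type 0.7, which is why the scale's lower rungs get discussed far more than its upper ones.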


As with philosophical thought, literature, art, music, and other concepts and artifacts generated by humanity, technological and scientific pursuits reflect the culture of their time.  In 1964, the world was still living in the shadow of the Cuban Missile Crisis.  The space race was in full swing, and “Star Trek” was about to trigger the imagination of laymen and scientists alike.  We thought in terms of conquering people and ideas, and in terms of controlling resources.  What countries are in the Soviet bloc?  What countries are under US influence?  Who has access to most of the oil?  Who has the most gold, the most uranium?

The idea of dominating the world was evident in our news and our entertainment.  Games like Risk and Monopoly were unapologetically imperialistic.  Every Bond plot was about world domination.

Today, many of us find these ideas offensive.  To start with, imperialism is an outdated concept founded on the assumption of superiority of some cultures over others.  The idea of harnessing all planetary resources is an extension of imperialistic mentality, one that adds all other life forms to the entities that we need to dominate.  Controlling planetary resources for the sake of humanity is tantamount to stealing those same resources from other species that may need them.  Further, our attempt to control resources and technology can lead to some catastrophic outcomes.  Nuclear Armageddon, grey goo, overpopulation, global warming, planetary pollution, and (human-caused) mass extinctions are all examples of potentially disastrous consequences of attempts to dominate nature or technology without fully understanding what we are doing.

I argue in “Alien Hunters Still Thinking Inside The Box (or Dyson Sphere)” that attempting to fully harness all of the energy from the sun is increasingly unnecessary and irrelevant to our evolution as a species.  Necessary energy consumption per capita is flattening for developing cultures and declining for mature ones.  Technological advances allow us to get much more useful output from our devices as time goes forward.  And humanity is beginning to de-emphasize raw size and power as desirable attributes (for example, see right-sizing economic initiatives) and instead focus on the value of consciousness.

Certainly, then, the hallmarks of advanced civilizations are not going to be anachronistic metrics of how much energy they can harness.  What metrics might be useful instead?

How about:  Have they gotten off their planet?  Have they gotten out of their solar system?  Have they gotten out of their galaxy?

Somehow, I feel that even this is misleading.  Entanglement shows that everything is interconnected.  The observer effect demonstrates that consciousness transcends matter.  So perhaps the truly advanced civilizations have learned that they do not need to physically travel, but rather mentally travel.

How about: How small a footprint do they leave on their planet?

The assumption here is that advanced civilizations follow a curve like the one below, whereby early in their journey they have a tendency to want to consume resources, but eventually evolve to have less and less of a need to consume or use energy.


How about: What percentage of their effort is expended upon advancing the individual versus the society, the planetary system, or the galactic system?


How about: Who cares?  Why do we need to assign a level to a civilization anyway?  Is there some value to having a master list of evolutionary stage of advanced life forms?  So that we know who to keep an eye on?  That sounds very imperialistic to me.

Of course, I am as guilty of musing about the idea of measuring the level of evolution of a species through a 2013 cultural lens as Kardashev was through a 1964 cultural lens.  But still, it is nearly 50 years hence, and time to either revise or retire an old idea.

Ever Expanding Horizons

Tribal Era

Imagine the human world tens of thousands of years ago.  A tribal community lived together, farming, hunting, trading, and taking care of each other.  There was plenty of land to support the community and, as long as there were no strong forces driving them to move, they stayed where they were, content.  As far as they knew, “all that there is” was just that community and the land that was required to sustain it.  We might call this the Tribal Era.

Continental Era

But, at some point, for whatever reason – drought, restlessness, desire for a change of scenery – another tribe moved into the first tribe’s territory.  For the first time, that tribe realized that the world was bigger than their little community.  In fact, upon a little further exploration, they realized that the boundaries of “all that there is” had just expanded to the continent on which they lived, and there was a plethora of tribes in this new greater community.  The horizon of their reality had reached a new boundary, and their community was now a thousandfold larger than before.

Planetary Era

According to researchers, the first evidence of cross-oceanic exploration dates to about 9,000 years ago.  Now, suddenly, this human community may have been subject to an invasion by an entirely different race of people, with different languages, coming from a place that was previously thought not to exist.  Again, the horizon expands and “all that there is” reaches a new level, one that consists of the entire planet.

Solar Era

The Ancient Greek philosophers and astronomers recognized the existence of other planets.  Gods were thought to have come from the sun or elsewhere in the heavens, which consisted of a celestial sphere that wasn’t too far above the surface of our planet.

Imaginations ran wild as horizons expanded once again.

Galactic Era

In 1610, Galileo looked through his telescope and suddenly humanity’s horizon expanded by another level.  Not only did the other planets resemble ours, but it was clear that the sun was the center of the known universe, stars were extremely far away, there were strange distant nebulae that were more than nearby clouds of debris, and the Milky Way consisted of distant stars.  In other words, “all that there is” became our galaxy.

Universal Era

A few centuries later, in 1923, it was time to expand our reality horizon once again, as the 100-inch telescope at Mount Wilson revealed that some of those fuzzy nebulae were actually other galaxies.  The concept of deep space and the “Universe” was born, and new measurement techniques courtesy of Edwin Hubble showed that “all that there is” was actually billions of times larger than previously thought.

Multiversal Era

These expansions of “all that there is” are happening so rapidly now that we are still debating the details of one worldview while exploring the next and being introduced to yet another.  Throughout the latter half of the 20th century, a variety of ideas were put forth that expanded our reality horizon to the concept of many (some said infinite) parallel universes.  The standard inflationary big bang theory allows for multiple Hubble volumes that theoretically share our physical space but are unobservable due to the limitations of the speed of light.  Bubble universes, the many-worlds interpretation (MWI), and many other theories exist but lack any supporting evidence.  In 2003, Max Tegmark framed all of these nicely in his concept of four levels of multiverse.

I sense one of those feelings of acceleration with respect to the entire concept of expanding horizons, as if our understanding of “all that there is” is growing exponentially.  I was curious to see how exponential it actually was, so I took the liberty of plotting each discrete step in our evolution of awareness of “all that there is” on a logarithmic plot and guess what?

Almost perfectly exponential! (see below)
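The claim is easy to sanity-check with a quick straight-line fit on the log scale.  The length scales below are my own rough stand-ins for “all that there is” in each era (not the figures from the original chart), so treat this as an illustration of the method rather than a reproduction of the plot:

```python
import math

# Rough, assumed length scales (in km) for "all that there is" in each era.
eras = [("Tribal", 1e1), ("Continental", 5e3), ("Planetary", 1.3e4),
        ("Solar", 1e10), ("Galactic", 1e18), ("Universal", 1e24)]

xs = list(range(len(eras)))
ys = [math.log10(r) for _, r in eras]

# Least-squares line through (era index, log10 scale)
n = len(xs)
mx, my = sum(xs) / n, sum(ys) / n
cov  = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
varx = sum((x - mx) ** 2 for x in xs)
vary = sum((y - my) ** 2 for y in ys)

slope = cov / varx                  # decades of scale gained per era
r = cov / math.sqrt(varx * vary)    # how straight the log plot is

print(f"~{slope:.1f} decades per era, correlation r = {r:.2f}")
```

A straight line on a logarithmic plot is precisely exponential growth in scale, and even with these crude stand-in numbers the correlation comes out well above 0.9.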


Dramatically, the trend points to a new expansion of our horizons in the past 10 years or so.  Could there really be something beyond a multiverse of infinitely parallel universes?  And has such a concept recently been put forth?

Indeed there is, and it has.  And, strangely, it isn’t even something new.  For millennia, the spiritual side of humanity has explored non-physical realities: shamanism, Heaven, Nirvana, mystical experiences, astral travel.  Our Western scientific mentality that “nothing can exist that cannot be consistently and reliably reproduced in a lab” has prevented many of us from accepting these notions.  However, there is a new school of thought that is based on logic, scientific studies, and real data (if your mind is open), as well as personal knowledge and experience.  Call it digital physics (Fredkin), digital philosophy, simulation theory (Bostrom), programmed reality (yours truly), or My Big TOE (Campbell).  Tom Campbell and others have taken the step of incorporating into this philosophy the idea of non-material realms, which is, in fact, a new expansion of “all that there is.”  While I don’t particularly like the term “dimensional,” I’m not sure that we have a better descriptor.

Interdimensional Era

Or maybe we should just call it “All That There Is.”

At least until a few years from now.

Alien Hunters Still Thinking Inside The Box (or Dyson Sphere)

As those who are familiar with my writing already know, I have long thought that the SETI program was highly illogical, for a number of reasons, some of which are outlined here and here.

To summarize, it is the height of anthropomorphic and unimaginative thinking to assume that ET will evolve just like we did and develop radio technology at all.  Even if they did, and followed a technology evolution similar to our own, the era of high-powered radio broadcasts should be insignificant in relation to the duration of their evolutionary history.  In our own case even, that era is almost over, as we are moving to highly networked and low-powered data communication (e.g. Wi-Fi), which is barely detectable a few blocks away, let alone light years.  And even if we happened to overlap a 100-year radio broadcast era of a civilization in our galactic neighborhood, they would still never hear us, and vice versa, because the signal level required to reliably communicate around the world becomes lost in the noise of the cosmic microwave background radiation before it even leaves the solar system.
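The point about broadcasts fading into the noise can be checked with a back-of-the-envelope link budget.  The numbers below are assumed, illustrative values (a 1 MW omnidirectional transmitter at 1 GHz, heard from the ~4.2 light-year distance of the nearest star); real SETI searches use enormous high-gain dishes and very narrow bandwidths, which improve the budget considerably, but the figures show why an ordinary broadcast is hopeless:

```python
import math

C = 3.0e8            # speed of light, m/s
K_BOLTZ = 1.38e-23   # Boltzmann constant, J/K

def fspl_db(distance_m: float, freq_hz: float) -> float:
    """Free-space path loss in dB between isotropic antennas."""
    return 20 * math.log10(4 * math.pi * distance_m * freq_hz / C)

d = 4.2 * 9.46e15                        # ~4.2 light years in meters
tx_dbm = 10 * math.log10(1e6 * 1000)     # 1 MW transmitter expressed in dBm

rx_dbm = tx_dbm - fspl_db(d, 1e9)
# Thermal noise floor (kTB) at 290 K in a 1 Hz bandwidth, in dBm
noise_dbm = 10 * math.log10(K_BOLTZ * 290 * 1) + 30

print(f"Received: {rx_dbm:.0f} dBm vs. 1 Hz noise floor: {noise_dbm:.0f} dBm")
```

Even in a 1 Hz listening bandwidth, the received signal sits roughly 100 dB below the thermal noise floor, which is why only deliberately beamed, high-gain transmissions stand any chance of crossing interstellar distances.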

So, no, SETI is not the way to uncover extraterrestrial intelligences.

Dyson Sphere

Some astronomers are getting a bit more creative and are beginning to explore different ways of detecting ET.  One such technique hinges on the concept of a Dyson Sphere.  Physicist Freeman Dyson postulated the idea in 1960, theorizing that advanced civilizations will continuously increase their demand for energy, to the point where they need to capture all of the energy of the star that they orbit.  A possible mechanism for doing so could be a swarm of satellites surrounding the star and collecting its entire energy output.  Theoretically, the signature of a distant Dyson Sphere would be a region of space emitting no visible light but generating high levels of infrared radiation as waste heat.  Some astronomers have mapped the sky over the years searching for such signatures, but to no avail.
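The infrared signature follows from simple blackbody arithmetic.  Assuming, for illustration, a shell at 1 AU re-radiating the sun's full luminosity in equilibrium:

```python
import math

SIGMA = 5.67e-8     # Stefan-Boltzmann constant, W/m^2/K^4
WIEN  = 2.898e-3    # Wien displacement constant, m*K
L_SUN = 3.8e26      # solar luminosity, W
AU    = 1.5e11      # shell radius, m (assumed: 1 astronomical unit)

# In equilibrium the shell re-radiates everything it absorbs:
#   L = 4*pi*R^2 * sigma * T^4   =>   T = (L / (4*pi*R^2*sigma))^(1/4)
area = 4 * math.pi * AU**2
T = (L_SUN / (SIGMA * area)) ** 0.25
peak_wavelength = WIEN / T

print(f"Shell temperature ~{T:.0f} K, "
      f"emission peak ~{peak_wavelength*1e6:.1f} microns (mid-infrared)")
```

A shell near room temperature glowing at a few microns is exactly the "no visible light, strong infrared" signature the surveys look for; a larger shell radius would simply shift the peak further into the infrared.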

Today, a team at Penn State is resuming the search via data from the infrared observatories WISE and Spitzer.  Another group, from Princeton, has also joined the search but is using a different technique, looking for dimming patterns in the data.

I applaud these scientists who are expanding the experimental boundaries a bit.  But I doubt that Dyson Spheres are the answer.  There are at least two flaws with this idea.

First, the assumption that we will continuously need more energy is false.  Part of the reason for this is the fact that once a nation has achieved a particular level of industrialization and technology, there is little to drive further demand.  The figure below, taken from The Atlantic article “A Short History of 200 Years of Global Energy Use” demonstrates this clearly.


In addition, technological advances make it cheaper to obtain the same general benefit over time.  For example, in terms of computing, capacity per watt has increased by a factor of over one trillion in the past 50 years.  Dyson was unaware of this trend because Moore’s Law wasn’t postulated until 1965.  Even in the highly corrupt oil industry, with its collusion, lobbying, and artificial scarcity, performance per gallon of gas has steadily increased over the years.
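A factor of a trillion over 50 years implies a strikingly short doubling time, which is easy to check using those round numbers:

```python
import math

improvement = 1e12   # performance-per-watt gain cited over the period
years = 50

doublings = math.log2(improvement)      # how many doublings make a trillion
doubling_time = years / doublings

print(f"{doublings:.1f} doublings -> one doubling every {doubling_time:.2f} years")
```

That works out to a doubling roughly every 15 months, in the same neighborhood as the empirical trend later dubbed Koomey's law for computations per unit of energy.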

The second flaw with the Dyson Sphere argument is the more interesting one – the assumptions around how humans will evolve.  I am sure that in the booming 1960s, it seemed logical that we would be driven by the need to consume more and more, controlling more and more powerful tools as time went on.  But, all evidence actually points to the contrary.

We are in the beginning stages of a new facet of evolution as a species.  Not a physical one, but a consciousness-oriented one.  Quantum Mechanics has shown us that objective reality doesn’t exist.  Scientists are so frightened by the implications of this that they are for the most part in complete denial.  But the construct of reality is looking more and more like it is simply data.  And the evidence is overwhelming that consciousness is controlling the body and not emerging from it.  As individuals are beginning to understand this, they are beginning to recognize that they are not trapped by their bodies, nor this apparent physical reality.

Think about this from the perspective of the evolution of humanity.  If this trend continues, why will we even need the body?

Robert Monroe experienced a potential future (1,000 years hence) which may be very much in line with the mega-trends that I have been discussing: “No sound, it was NVC [non-vocal communication]! We made it! Humans did it! We made the quantum jump from monkey chatter and all it implied.” (“Far Journeys”)

We may continue to use the (virtual) physical reality as a “learning lab”, but since we won’t really need it, neither will we need the full energy of the virtual star.  And we can let virtual earth get back to the beautiful virtual place it once was.

THIS is why astronomers are not finding any sign of intelligent life in outer space, no matter what tools they use.  A sufficiently advanced civilization does not communicate using monkey chatter, nor any technological carrier like radio waves.

They use consciousness.

So will we, some day.

Things We Can’t Feel – The Mystery Deepens

In my last blog “Things We Can’t See”, we explored the many different ways that our eyes, brains, and/or technology can fool us into seeing something that isn’t there or not seeing something that is.

So apparently, our sense of sight is not necessarily the most reliable sense in terms of identifying what is and isn’t in our objective reality.  We would probably suspect that our sense of touch is fairly foolproof; that is, if an object is “there”, we can “feel” it, right?

Not so fast.

First of all, we have a lot of the same problems with the brain as we did with the sense of sight.  The brain processes all of that sensory data from our nerve endings.  How do we know what the brain really does with that information?  Research shows that sometimes your brain can think that you are touching something that you aren’t or vice versa.  People who have lost limbs still have sensations in their missing extremities.  Hypnosis has been shown to have a significant effect in terms of pain control, which seems to indicate the mind’s capacity to override one’s tactile senses.  And virtual reality experiments have demonstrated the ability for the mind to be fooled into feeling something that isn’t there.

In addition, technology can be made to create havoc with our sense of touch, although the most dramatic of such effects are still decades into the future.  Let me explain…

Computer Scientist J. Storrs Hall developed the concept of a “Utility Fog.”  Imagine a “nanoscopic” object called a Foglet, which is an intelligent nanobot, capable of communicating with its peers and having arms that can hook together to form larger structures.  Trillions of these Foglets could conceivably fill a room and not be at all noticeable as long as they were in “invisible mode.”  In fact, not only might they be programmed to appear transparent to the sight, but they may be imperceptible to the touch.  This is not hard to imagine, if you allow that they could have sensors that detect your presence.  For example, if you punch your fist into a swarm of nanobots programmed to be imperceptible, they would sense your motion and move aside as you swung your fist through the air.  But at any point, they could conspire to form a structure – an impenetrable wall, for example.  And then your fist would be well aware of their existence.  In this way, technology may be able to have a dramatic effect on our complete ability to determine what is really “there.”


But even now, long before nanobot swarms are possible, the mystery really begins, as we have to dive deeply into what is meant by “feeling” something.

Feeling is the result of a part of our body coming in contact with another object.  That contact is “felt” by the interaction between the molecules of the body and the molecules of the object.

Even solid objects are mostly empty space.  If subatomic particles, such as neutrons, are made of solid mass, like little billiard balls, then 99.999999999999% of normal matter would still be empty space.  That is, of course, unless those particles themselves are not really solid matter, in which case, even more of space is truly empty, more about which in a bit.
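That string of nines can be recovered from rough atomic dimensions.  The radii below are typical textbook orders of magnitude, not precise values:

```python
# Typical order-of-magnitude radii
r_nucleus = 5e-15   # meters (~a few femtometers)
r_atom    = 1e-10   # meters (~one angstrom)

# Volume scales with the cube of the radius, so the nucleus occupies
# only (r_nucleus / r_atom)^3 of the atom's volume.
filled = (r_nucleus / r_atom) ** 3
empty = 1 - filled

print(f"Nucleus fills {filled:.2e} of the atom; empty fraction: {empty:.14f}")
```

With these numbers the nucleus fills about one part in 10^13 of the atom, so "99.999999999999% empty" is the right order of magnitude even before asking what the nucleus itself is made of.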

So why don’t solid objects like your fist slide right through other solid objects like bricks?  Because of the repulsive electromagnetic force that the electrons in the fist exert against the electrons in the brick.

But what about that neutron?  What is it made of?  Is it solid?  Is it made of the same stuff as all other subatomic particles?

The leading theories of matter do not favor the idea that subatomic particles are like little billiard balls of differing masses.  For example, string theorists speculate that all particles are made of the same stuff; namely, vibrating bits of string.  Except that they each vibrate at different frequencies.  Problem is, string theory is purely theoretical and really falls more in the mathematical domain than the scientific domain, inasmuch as there is no supporting evidence for the theory.  If it does turn out to be true, even the neutron is mostly empty space because the string is supposedly one-dimensional, with a theoretical cross section of a Planck length.

Here’s where it gets really interesting…

Neutrinos are an extremely common yet extremely elusive particle of matter.  About 100 trillion neutrinos generated in the sun pass through our bodies every second.  Yet they barely interact at all with ordinary matter.  Neutrino capture experiments consist of configurations such as a huge underground tank containing 100,000 gallons of tetrachloroethylene buried nearly a mile below the surface of the earth.  100 billion neutrinos strike every square centimeter of the tank per second.  Yet, any particular molecule of tetrachloroethylene is likely to interact with a neutrino only once every 10^36 seconds (which is 10 billion billion times the age of the universe).
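Those numbers sound hopeless, but the tank as a whole contains so many molecules that captures still happen.  A sketch using the figures above, plus standard (assumed textbook) properties of tetrachloroethylene, density ~1.62 kg/L and molar mass ~165.8 g/mol:

```python
AVOGADRO = 6.022e23

gallons = 100_000
liters = gallons * 3.785
mass_g = liters * 1.62 * 1000          # density of C2Cl4 ~1.62 kg/L
moles = mass_g / 165.8                 # molar mass of C2Cl4 ~165.8 g/mol
molecules = moles * AVOGADRO

per_molecule_rate = 1 / 1e36           # interactions per second, per the figure above
tank_rate = molecules * per_molecule_rate

print(f"~{molecules:.1e} molecules -> ~{tank_rate * 86400:.2f} interactions per day")
```

A tank of over 10^30 molecules yields captures only every few days, which is why these experiments have to run for years to accumulate meaningful statistics.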

The argument usually given for the neutrino’s elusiveness is that they are nearly massless (and therefore not easily captured by a nucleus) and charge-less (and therefore not subject to the electromagnetic force).  Then again, photons are massless and charge-less and are easily captured, to which anyone who has spent too much time in the sun can attest.  The difference is that neutrinos respond only to the weak nuclear force, whose extremely short range gives them a vanishingly small chance of interacting with ordinary matter.

And then there is dark matter.  This concept is the current favorite explanation for anomalies in the orbital speeds of stars in galaxies.  Gravity acting on the visible matter can’t explain the anomalies, so dark matter is inferred.  If it really exists, it represents about 83% of the mass in the universe, but it doesn’t interact with any of the known forces except gravity.  This means that dark matter is all around us; we just can’t see it or feel it.

So it seems that modern physics allows for all sorts of types of matter that we can’t see or feel.  When you get down to it, the reason for this is that we don’t understand what matter is at all.  According to the standard model of physics, particles should have no mass, unless there is a special quantum field that pervades the universe and gives rise to mass upon interacting with those particles.  Unfortunately, for that to have any credibility, the signature particle, the Higgs boson, would have to exist.  Thus far, it seems to be eluding even the most powerful of particle colliders.  One alternative theory of matter has it being an emergent property of particle fluctuations in the quantum vacuum.

For a variety of reasons, some of which are outlined in “The Universe – Solved!” and many others which have come to light since I wrote that book, I suspect that ultimately matter is simply a property of an entity that is described purely by data and a set of rules, driven by a complex computational mechanism.  Our attempt to discover the nature of matter is synonymous with our attempt to discover those rules and associated fundamental constants (data).

In terms of other things that we can’t perceive, new age enthusiasts might call out ghosts, spirits, auras, and all sorts of other mysterious invisible and tenuous entities.


Given that we know that things exist that we can’t perceive, one has to wonder if it might be possible for macroscopic objects, or even macroscopic entities that are driven by similar energies as humans, to be made from stuff that we can only tenuously detect, not unlike neutrinos or dark matter.  Scientists speculate about multiple dimensions and parallel universes via Hilbert Space and other such constructs.  If such things exist (and wouldn’t it be hypocritical of anyone to speculate or work out the math for such things if it weren’t possible for them to exist?), the rules that govern our interaction with them, across the dimensions, are clearly not at all understood.  That doesn’t mean that they aren’t possible.

In fact, the scientific world is filled with trends leading toward the implication of an information-based reality.

In which almost anything is possible.

Is Cosmology Heading for a Date with a Creator?

According to a recent article in New Scientist magazine, physicists “can’t avoid a creation event.”  (Sorry, you have to be a subscriber to read the full article.)  It boils down to whether the universe could have been eternal into the past.  If it could not, then there needs to be a creation event.  Even uber-atheist Stephen Hawking acknowledges that a beginning to the universe would be “a point of creation… where science broke down. One would have to appeal to religion and the hand of God.”

Apparently, there are three established theories for how to get around the idea of a creator of the big bang.  But cosmologist Alexander Vilenkin demonstrated last week how all of those theories now necessitate a beginning:

1. The leading idea has been the possibility that the universe has been eternally expanding (inflating).  Recent analysis, however, shows that inflation has a lower limit preventing it from being eternal in the past.

2. Another possibility was the cyclic model, but Vilenkin has shot a hole in that one as well, courtesy of the second law of thermodynamics.  Either every cycle would have to be more disordered than the last, in which case after an infinite number of cycles our current cycle should have reached heat death (it hasn’t), or the universe would have to get bigger with each cycle, implying a creation event at some cycle in the past.

3. The final hope for the atheistic point of view was a lesser known proposal called the cosmic egg.  But Vilenkin showed last year that this could not have existed eternally due to quantum instabilities.

Is science slowly coming to terms with the idea of an intelligent designer of the universe?  The evidence is overwhelming and Occam’s Razor points to a designer, yet science clings to the anti-ID point of view as if it is a religion.


Yesterday’s Sci-Fi is Tomorrow’s Technology

It is the end of 2011 and it has been an exciting year for science and technology.  Announcements about artificial life, earthlike worlds, faster-than-light particles, clones, teleportation, memory implants, and tractor beams have captured our imagination.  Most of these things would have been unthinkable just 30 years ago.

So, what better way to close out the year than to take stock of yesterday’s science fiction in light of today’s reality and tomorrow’s technology.  Here is my take:


Time to Revise Relativity?: Part 2

In “Time to Revise Relativity: Part 1”, I explored the idea that Faster than Light Travel (FTL) might be permitted by Special Relativity without necessitating the violation of causality, a concept not held by most mainstream physicists.

The reason this idea is not well supported has to do with the fact that Einstein’s postulate that light travels the same speed in all reference frames gave rise to all sorts of conclusions about reality, such as the idea that it is all described by a space-time that has fundamental limits to its structure.  The Lorentz factor is a consequence of this view of reality, and so its use is limited to subluminal effects; it is undefined when used to calculate relativistic distortions past c.

The Lorentz factor: γ = 1 / √(1 − v²/c²)

So then, what exactly is the roadblock to exceeding the speed of light?

Yes, there may be a natural speed limit to the transmission of known forces in a vacuum, such as the electromagnetic force.  And there may certainly be a natural limit to the speed of an object at which we can make observations utilizing known forces.  But, could there be unknown forces that are not governed by the laws of Relativity?

The current model of physics, called the Standard Model, incorporates the idea that all known forces are carried by corresponding particles, which travel at the speed of light if massless (like photons and gluons) or less than the speed of light if they have mass (like the W and Z bosons), all consistent with, or derived from, the assumptions of relativity.  Problem is, there is all sorts of “unfinished business” and inconsistencies with the Standard Model.  Gravitons have yet to be discovered, Higgs bosons don’t seem to exist, gravity and quantum mechanics are incompatible, and many things just don’t have a place in the Standard Model, such as neutrino oscillations, dark energy, and dark matter.  Some scientists even speculate that dark matter is due to a flaw in the theory of gravity.  So, given the incompleteness of that model, how can anyone say for certain that all forces have been discovered and that Einstein’s postulates are sacrosanct?

Given that barely 100 years ago we didn’t know any of this stuff, imagine what changes to our understanding of reality might happen in the next 100 years.  Such as these Wikipedia entries from the year 2200…

–       The ultimate constituent of matter is nothing more than data

–       A subset of particles and corresponding forces that are limited in speed to c represent what used to be considered the core of the so-called Standard Model and are consistent with Einstein’s view of space-time, the motion of which is well described by the Special Theory of Relativity.

–       Since then, we have realized that Einsteinian space-time is an approximation to the truer reality that encompasses FTL particles and forces, including neutrinos and the force of entanglement.  The beginning of this shift in thinking occurred due to the first superluminal neutrinos found at CERN in 2011.

So, with that in mind, let’s really explore a little about the possibilities of actually cracking that apparent speed limit…

For purposes of our thought experiments, let’s define S as the “stationary” reference frame in which we are making measurements and R as the reference frame of the object undergoing relativistic motion with respect to S.  If a mass m is traveling at c with respect to S, then measuring that mass in S (via whatever methods could be employed to measure it; energy, momentum, etc.) will give an infinite result.  However, in R, the mass doesn’t change.

What if m went faster than c, such as might be possible with a sci-fi concept like a “tachyonic afterburner”?  What would an observer at S see?

Going by our relativistic equations, m now becomes imaginary when measured from S because the argument in the square root of the mass correction factor is now negative.  But what if this asymptotic property really represents more of an event horizon than an impenetrable barrier?  A commonly used model for the event horizon is the point on a black hole at which gravity prevents light from escaping.  Anything falling past that point can no longer be observed from the outside.  Instead it would look as if that object froze on the horizon, because time stands still there.  Or so some cosmologists say.  This is an interesting model to apply to the idea of superluminality as mass m continues to accelerate past c.

From the standpoint of S, the apparent mass is now infinite, but that is ultimately based on the fact that we can’t perceive speeds past c.  Once something goes past c, one of two things might happen.  The object might disappear from view due to the fact that the light that it generated that would allow us to observe it can’t keep up with its speed.  Alternatively, invoking the postulate that light speed is the same in all reference frames, the object might behave like it does on the event horizon of the black hole – forever frozen, from the standpoint of S, with the properties that it had when it hit light speed.  From R, everything could be hunky dory.  Just cruising along at warp speed.  No need to say that it is impossible because mass can’t exceed infinity, because from S, the object froze at the event horizon.  Relativity made all of the correct predictions of properties, behavior, energy, and mass prior to light speed.  Yet, with this model, it doesn’t preclude superluminality.  It only precludes the ability to make measurements beyond the speed of light.
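The “imaginary mass” arithmetic in the passage above is easy to see directly.  A minimal sketch of the Lorentz factor, evaluated with complex math so that the v > c case produces an imaginary result instead of simply raising an error:

```python
import cmath

def lorentz_factor(beta: float) -> complex:
    """gamma = 1 / sqrt(1 - v^2/c^2), with beta = v/c.

    Real below c, divergent at c, purely imaginary above c.
    """
    return 1 / cmath.sqrt(1 - beta**2)

for beta in (0.5, 0.9, 0.99, 2.0):
    g = lorentz_factor(beta)
    kind = "real" if abs(g.imag) < 1e-12 else "imaginary"
    print(f"v = {beta}c -> gamma = {g:.3f} ({kind})")
```

Below c the factor is an ordinary real number that diverges as v approaches c; past c the math hands back a purely imaginary value, which is exactly the point at which the equations stop describing anything S can measure, consistent with the event-horizon picture.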

That is, of course, unless we can figure out how to make measurements utilizing a force or energy that travels at speeds greater than c.  If we could, those measurements would yield results with correction factors only at speeds relatively near THAT speed limit.

Let’s imagine an instantaneous communication method.  Could there be such a thing?

One possibility might be quantum entanglement.  The delayed-choice quantum eraser experiment, an extension of John Wheeler’s delayed-choice thought experiment, seems to imply non-causality and the ability to erase the past.  Integral to this experiment is the concept of entanglement.  So perhaps it is not a stretch to imagine that entanglement might embody a communication method that creates some strange effects when integrated with observational effects based on traditional light and sight methods.

What would the existence of that method do to relativity?   Nothing, according to the thought experiments above.

There are, however, some relativistic effects that seem to stick, even after everything has returned to the original reference frame.  This would seem to violate the idea that the existence of an instantaneous communication method invalidates the need for relativistic correction factors applied to anything that doesn’t involve light and sight.

For example, there is the very real effect that clocks once moving at high speeds (reference frame R) exhibit a loss of time once they return to the reference frame S, fully explained by time dilation effects.  It would seem that, using this effect as a basis for a thought experiment like the twin paradox, there might be a problem with the event horizon idea.  For example, let us imagine Alice and Bob, both aged 20.  After Alice travels at essentially speed c to a star 10 light years away and returns, her age should still be 20, while Bob is now 40.  If we were to allow superluminal travel, it would appear that Alice would have to get younger, or something.  But, recalling the twin paradox, it is all about the relative observations that were made by Bob in reference frame S, and Alice, in reference frame R, of each other.  Again, at superluminal speeds, Alice may appear to hit an event horizon according to Bob.  So, she will never reduce her original age.

But what about her?  From her perspective, her trip is instantaneous due to an infinite Lorentz contraction factor; hence she doesn’t age.  If she travels at 2c, her view of the universe might hit another event horizon, one that prevents her from experiencing any Lorentz contraction beyond c; hence, her trip will still appear instantaneous, no aging, no age reduction.
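The aging arithmetic from the Alice-and-Bob scenario is straightforward for the subluminal speeds where the standard formulas apply.  The distances and ages are the thought experiment's; the 0.99c and 0.9999c cases are stand-ins for "essentially c", since exactly c gives a degenerate zero of proper time:

```python
import math

def round_trip_ages(distance_ly: float, beta: float, start_age: float = 20):
    """Ages of the stay-at-home twin (Bob) and the traveler (Alice)
    after a round trip to a star distance_ly away at speed beta = v/c."""
    bob_elapsed = 2 * distance_ly / beta                    # coordinate time, years
    alice_elapsed = bob_elapsed * math.sqrt(1 - beta**2)    # proper time, years
    return start_age + bob_elapsed, start_age + alice_elapsed

for beta in (0.99, 0.9999):
    bob, alice = round_trip_ages(10, beta)
    print(f"At {beta}c: Bob is {bob:.1f}, Alice is {alice:.1f}")
```

As the speed approaches c, Bob converges on 40 while Alice's proper time for the trip shrinks toward zero, matching the scenario above; the formula says nothing at all about what happens past c, which is exactly the gap the event-horizon picture tries to fill.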

So why would an actual relativistic effect like reduced aging, occur in a universe where an infinite communication speed might be possible?  In other words, what would tie time to the speed of light instead of some other speed limit?

It may be simply because that’s the way it is.  It appears that relativistic equations may not necessarily impose a barrier to superluminal speeds, superluminal information transfer, nor even acceleration past the speed of light.  In fact, if we accept that relativity says nothing about what happens past the speed of light, we are free to suggest that the observable effects freeze at c. Perhaps traveling past c does nothing more than create unusual effects like disappearing objects or things freezing at event horizons until they slow back down to an “observable” speed.  We simply don’t have enough evidence yet to investigate further.

But perhaps CERN has provided us with our first data point.

Time Warp