RIP Kardashev Civilization Scale

In 1964, Soviet astronomer Nikolai Kardashev proposed a model for categorizing technological civilizations. His original paper defined three “Types”; a fourth, “Type 0,” was added later by others. Simplified:

Type 0 – Civilization that has not yet learned to utilize the full set of resources available to them on their home planet (e.g. oceans, tidal forces, geothermal forces, solar energy impinging upon the planet, etc.)

Type 1 – Civilization that fully harnesses, controls, and utilizes the resources of their planet.

Type 2 – Civilization that fully harnesses, controls, and utilizes the resources of their star system.

Type 3 – Civilization that fully harnesses, controls, and utilizes the resources of their galaxy.
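Kardashev’s paper did not define a continuous scale, but Carl Sagan later proposed the interpolation K = (log10 P − 6) / 10, where P is a civilization’s total power use in watts. A quick sketch (the ~2×10^13 W figure for present-day humanity is a rough illustrative value, not from Kardashev):

```python
import math

def kardashev_type(power_watts):
    """Carl Sagan's continuous interpolation of the Kardashev scale:
    K = (log10(P) - 6) / 10, where P is total power use in watts."""
    return (math.log10(power_watts) - 6) / 10

# Present-day humanity consumes very roughly 2e13 W (illustrative),
# which lands us at about Type 0.73.
print(kardashev_type(2e13))   # ~0.73
print(kardashev_type(1e16))   # 1.0 -- full planetary energy budget
print(kardashev_type(1e26))   # 2.0 -- full stellar output
```

On this scale, each Kardashev Type is ten orders of magnitude of power beyond the previous one.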


As with philosophical thought, literature, art, music, and other concepts and artifacts generated by humanity, technological and scientific pursuits reflect the culture of the time. In 1964, we were on the brink of nuclear war. The space race was in full swing, and within two years the TV show “Star Trek” would be triggering the imagination of laymen and scientists alike. We thought in terms of conquering people and ideas, and in terms of controlling resources. What countries are in the Soviet bloc? What countries are under US influence? Who has access to most of the oil? Who has the most gold, the most uranium?

The idea of dominating the world was evident in our news and our entertainment.  Games like Risk and Monopoly were unapologetically imperialistic.  Every Bond plot was about world domination.

Today, many of us find these ideas offensive.  To start with, imperialism is an outdated concept founded on the assumption of superiority of some cultures over others.  The idea of harnessing all planetary resources is an extension of imperialistic mentality, one that adds all other life forms to the entities that we need to dominate.  Controlling planetary resources for the sake of humanity is tantamount to stealing those same resources from other species that may need them.  Further, our attempt to control resources and technology can lead to some catastrophic outcomes.  Nuclear Armageddon, grey goo, overpopulation, global warming, planetary pollution, and (human-caused) mass extinctions are all examples of potentially disastrous consequences of attempts to dominate nature or technology without fully understanding what we are doing.

I argue in “Alien Hunters Still Thinking Inside The Box (or Dyson Sphere)” that attempting to fully harness all of the energy from the sun is increasingly unnecessary and unlikely to figure in our evolution as a species. Necessary energy consumption per capita is flattening for developing cultures and declining for mature ones. Technological advances allow us to get much more useful output from our devices as time goes forward. And humanity is beginning to de-emphasize raw size and power as desirable attributes (see, for example, right-sizing economic initiatives) and instead focus on the value of consciousness.

So the hallmarks of advanced civilizations are certainly not going to be anachronistic metrics of how much energy they can harness. What metrics, then, might be useful?

How about:  Have they gotten off their planet?  Have they gotten out of their solar system?  Have they gotten out of their galaxy?

Somehow, I feel that even this is misleading.  Entanglement shows that everything is interconnected.  The observer effect demonstrates that consciousness transcends matter.  So perhaps the truly advanced civilizations have learned that they do not need to physically travel, but rather mentally travel.

How about: How small a footprint do they leave on their planet?

The assumption here is that advanced civilizations follow a curve like the one below, whereby early in their journey they have a tendency to want to consume resources, but eventually evolve to have less and less of a need to consume or use energy.

[Figure: hypothesized curve of a civilization’s energy consumption over its evolution]

How about: What percentage of their effort is expended upon advancing the individual versus the society, the planetary system, or the galactic system?

or…

How about: Who cares?  Why do we need to assign a level to a civilization anyway?  Is there some value to having a master list of evolutionary stage of advanced life forms?  So that we know who to keep an eye on?  That sounds very imperialistic to me.

Of course, I am as guilty of musing about measuring the level of evolution of a species through a 2013 cultural lens as Kardashev was of doing so through a 1964 cultural lens. But still, it is 50 years hence, and time to either revise or retire an old idea.

Ever Expanding Horizons

Tribal Era

Imagine the human world tens of thousands of years ago. A tribal community lived together, farming, hunting, trading, and taking care of each other. There was plenty of land to support the community, and as long as there were no strong forces driving them to move, they stayed where they were, content. As far as they knew, “all that there is” was just that community and the land that was required to sustain it. We might call this the Tribal Era.

Continental Era

But, at some point, for whatever reason – drought, restlessness, desire for a change of scenery – another tribe moved into the first tribe’s territory. For the first time, that tribe realized that the world was bigger than their little community. In fact, upon a little further exploration, they realized that the boundaries of “all that there is” had just expanded to the continent on which they lived, and there was a plethora of tribes in this new greater community. The horizon of their reality had reached a new boundary, and their community was now a thousandfold larger than before.

Planetary Era

According to researchers, the first evidence of cross-oceanic exploration dates to about 9,000 years ago. Now, suddenly, this human community may have been subject to an invasion of an entirely different race of people, with different languages, coming from a place that was previously thought not to exist. Again, the horizon expands and “all that there is” reaches a new level, one that consists of the entire planet.

Solar Era

The Ancient Greek philosophers and astronomers recognized the existence of other planets. Gods were thought to have come from the sun or elsewhere in the heavens, which consisted of a celestial sphere that wasn’t too far from the surface of our planet.

Imaginations ran wild as horizons expanded once again.

Galactic Era

In 1610, Galileo looked through his telescope and suddenly humanity’s horizon expanded by another level. Not only did the other planets resemble ours, but it was clear that the sun was the center of the known universe, stars were extremely far away, strange distant nebulae were more than nearby clouds of debris, and the Milky Way consisted of distant stars. In other words, “all that there is” became our galaxy.

Universal Era

A few centuries later, in the early 1920s, it was time to expand our reality horizon once again, as the 100-inch telescope at Mount Wilson revealed that some of those fuzzy nebulae were actually other galaxies. The concept of deep space and “Universe” was born, and new measurement techniques courtesy of Edwin Hubble showed that “all that there is” was actually billions of times more than previously thought.

Multiversal Era

These expansions of “all that there is” are happening so rapidly now that we are still debating the details of one worldview while exploring the next and being introduced to yet another. Throughout the latter half of the 20th century, a variety of ideas were put forth that expanded our reality horizon to the concept of many (some said infinite) parallel universes. The standard inflationary big bang theory allowed for multiple Hubble volumes of universes that are theoretically within our same physical space, but unobservable due to the limitations of the speed of light. Bubble universes, MWI, and many other theories exist but lack any evidence. In 2003, Max Tegmark framed all of these nicely in his concept of four levels of multiverse.

I sense one of those feelings of acceleration with respect to the entire concept of expanding horizons, as if our understanding of “all that there is” is growing exponentially. I was curious to see how exponential it actually was, so I took the liberty of plotting each discrete step in our evolution of awareness of “all that there is” on a logarithmic plot, and guess what?

Almost perfectly exponential! (see below)

[Figure: logarithmic plot of successive “all that there is” horizon expansions]
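For a rough sense of the same exercise, here is a sketch using my own illustrative order-of-magnitude radii for each era’s horizon (these are assumptions for illustration, not the data behind the original plot):

```python
import math

# Rough order-of-magnitude radii (meters) for each successive
# "all that there is" horizon -- illustrative assumptions only.
horizons = [
    ("Tribal",      1e4),     # ~10 km of tribal territory
    ("Continental", 5e6),     # ~5,000 km continent
    ("Planetary",   1.3e7),   # Earth's diameter
    ("Solar",       1e13),    # outer solar system
    ("Galactic",    1e21),    # ~100,000 light-year Milky Way
    ("Universal",   4.4e26),  # observable-universe radius
]

# On a log scale, each era jumps the known world by many orders
# of magnitude, which is what makes the trend look exponential.
for name, r in horizons:
    print(f"{name:12s} log10(radius) = {math.log10(r):5.1f}")
```

The multiversal era doesn’t fit on any such axis at all, which is rather the point.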

Dramatically, the trend points to a new expansion of our horizons in the past 10 years or so.  Could there really be a something beyond a multiverse of infinitely parallel universes?  And has such a concept recently been put forth?

Indeed there is and it has. And, strangely, it isn’t even something new. For millennia, the spiritual side of humanity has explored non-physical realities: shamanism, heaven, nirvana, mystical experiences, astral travel. Our Western scientific mentality that “nothing can exist that cannot be consistently and reliably reproduced in a lab” has prevented many of us from accepting these notions. However, there is a new school of thought that is based on logic, scientific studies, and real data (if your mind is open), as well as personal knowledge and experience. Call it digital physics (Fredkin), digital philosophy, simulation theory (Bostrom), programmed reality (yours truly), or My Big TOE (Campbell). Tom Campbell and others have taken the step of incorporating into this philosophy the idea of non-material realms. Which is, in fact, a new expansion of “all that there is.” While I don’t particularly like the term “dimensional,” I’m not sure that we have a better descriptor.

Interdimensional Era

Or maybe we should just call it “All That There Is.”

At least until a few years from now.

Alien Hunters Still Thinking Inside The Box (or Dyson Sphere)

As those who are familiar with my writing already know, I have long thought that the SETI program is highly illogical, for a number of reasons, some of which are outlined here and here.

To summarize, it is the height of anthropomorphic and unimaginative thinking to assume that ET will evolve just like we did and develop radio technology at all. Even if they did, and followed a technology evolution similar to our own, the era of high-powered radio broadcasts should be insignificant in relation to the duration of their evolutionary history. Even in our own case, that era is almost over, as we are moving to highly networked and low-powered data communication (e.g. Wi-Fi), which is barely detectable a few blocks away, let alone light years. And even if we happened to overlap a 100-year radio broadcast era of a civilization in our galactic neighborhood, they would still never hear us, and vice versa, because the signal level required to reliably communicate around the world becomes lost in the noise of the cosmic microwave background radiation before it even leaves the solar system.
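The “lost in the noise” point can be sanity-checked with the inverse-square law. A sketch with illustrative numbers (a 1 MW isotropic broadcast received four light years away; both figures are my assumptions, not from the text):

```python
import math

LIGHT_YEAR_M = 9.46e15  # meters per light year

def received_flux(power_w, distance_ly):
    """Flux (W/m^2) from an isotropic transmitter, via the
    inverse-square law: S = P / (4 * pi * d^2)."""
    d = distance_ly * LIGHT_YEAR_M
    return power_w / (4 * math.pi * d ** 2)

# A 1 MW broadcast (illustrative) at the distance of the nearest stars
# arrives at roughly 6e-29 W/m^2 -- hopelessly faint.
flux = received_flux(1e6, 4.0)
print(flux)
```

Even before comparing against any noise floor, spreading a megawatt over a sphere eight light years across leaves essentially nothing per square meter of receiver.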

So, no, SETI is not the way to uncover extraterrestrial intelligences.

Dyson Sphere

Some astronomers are getting a bit more creative and are beginning to explore some different ways of detecting ET. One such technique hinges on the concept of a Dyson Sphere. Physicist Freeman Dyson postulated the idea in 1960, theorizing that advanced civilizations will continuously increase their demand for energy, to the point where they need to capture all of the energy of the star that they orbit. A possible mechanism for doing so could be a swarm of satellites surrounding the star and collecting its energy. Theoretically, the signature of a distant Dyson Sphere would be a region of space emitting no visible light but generating high levels of infrared radiation as waste heat. Some astronomers have mapped the sky over the years, searching for such signatures, but to no avail.
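The infrared signature follows from a simple blackbody energy balance: a shell at roughly Earth’s orbital distance absorbing the full solar output must re-radiate it as waste heat at a few hundred kelvin, which peaks in the mid-infrared. A sketch, assuming a perfect blackbody shell at 1 AU:

```python
import math

SIGMA = 5.67e-8    # Stefan-Boltzmann constant, W/m^2/K^4
L_SUN = 3.8e26     # solar luminosity, W
AU = 1.5e11        # meters
WIEN_B = 2.898e-3  # Wien's displacement constant, m*K

def shell_temperature(luminosity, radius):
    """Equilibrium temperature of a blackbody shell that must
    re-radiate the full stellar luminosity from its surface."""
    flux = luminosity / (4 * math.pi * radius ** 2)
    return (flux / SIGMA) ** 0.25

T = shell_temperature(L_SUN, AU)
print(T)            # ~390 K
print(WIEN_B / T)   # emission peak ~7e-6 m: mid-infrared
```

A star that has gone dark in visible light but glows at ~400 K in the infrared is exactly the anomaly these surveys look for.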

Today, a team at Penn State is resuming the search via data from infrared observatories WISE and Spitzer.  Another group from Princeton has also joined in the search, but are using a different technique by searching for dimming patterns in the data.

I applaud these scientists who are expanding the experimental boundaries a bit.  But I doubt that Dyson Spheres are the answer.  There are at least two flaws with this idea.

First, the assumption that we will continuously need more energy is false.  Part of the reason for this is the fact that once a nation has achieved a particular level of industrialization and technology, there is little to drive further demand.  The figure below, taken from The Atlantic article “A Short History of 200 Years of Global Energy Use” demonstrates this clearly.

[Figure: per-capita energy consumption over 200 years, from The Atlantic]

In addition, technological advances make it cheaper to obtain the same general benefit over time. For example, in terms of computing, capacity per watt has increased by a factor of over one trillion in the past 50 years. Dyson was unaware of this trend because Moore’s Law wasn’t postulated until 1965. Even in the highly corrupt oil industry, with its collusion, lobbying, and artificial scarcity, performance per gallon of gas has steadily increased over the years.
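Taking the quoted trillion-fold figure and timespan at face value, the improvement translates into a steady doubling cadence:

```python
import math

factor = 1e12  # quoted improvement in computing capacity per watt
years = 50     # over roughly 50 years

# A trillion-fold gain is about 40 doublings...
doublings = math.log2(factor)
print(doublings)           # ~39.9

# ...which works out to a doubling every ~1.25 years.
print(years / doublings)
```

That cadence is consistent with the Moore’s-Law-style exponential trends the paragraph describes.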

The second flaw with the Dyson Sphere argument is the more interesting one – the assumptions around how humans will evolve.  I am sure that in the booming 1960s, it seemed logical that we would be driven by the need to consume more and more, controlling more and more powerful tools as time went on.  But, all evidence actually points to the contrary.

We are in the beginning stages of a new facet of evolution as a species.  Not a physical one, but a consciousness-oriented one.  Quantum Mechanics has shown us that objective reality doesn’t exist.  Scientists are so frightened by the implications of this that they are for the most part in complete denial.  But the construct of reality is looking more and more like it is simply data.  And the evidence is overwhelming that consciousness is controlling the body and not emerging from it.  As individuals are beginning to understand this, they are beginning to recognize that they are not trapped by their bodies, nor this apparent physical reality.

Think about this from the perspective of the evolution of humanity.  If this trend continues, why will we even need the body?

Robert Monroe experienced a potential future (1000 years hence), which may be very much in line with the mega-trends that I have been discussing on theuniversesolved.com: “No sound, it was NVC [non-vocal communication]! We made it! Humans did it! We made the quantum jump from monkey chatter and all it implied.” (“Far Journeys“)

We may continue to use the (virtual) physical reality as a “learning lab”, but since we won’t really need it, neither will we need the full energy of the virtual star.  And we can let virtual earth get back to the beautiful virtual place it once was.

THIS is why astronomers are not finding any sign of intelligent life in outer space, no matter what tools they use.  A sufficiently advanced civilization does not communicate using monkey chatter, nor any technological carrier like radio waves.

They use consciousness.

So will we, some day.

Things We Can’t Feel – The Mystery Deepens

In my last blog “Things We Can’t See”, we explored the many different ways that our eyes, brains, and/or technology can fool us into seeing something that isn’t there or not seeing something that is.

So apparently, our sense of sight is not necessarily the most reliable sense in terms of identifying what is and isn’t in our objective reality.  We would probably suspect that our sense of touch is fairly foolproof; that is, if an object is “there”, we can “feel” it, right?

Not so fast.

First of all, we have a lot of the same problems with the brain as we did with the sense of sight.  The brain processes all of that sensory data from our nerve endings.  How do we know what the brain really does with that information?  Research shows that sometimes your brain can think that you are touching something that you aren’t or vice versa.  People who have lost limbs still have sensations in their missing extremities.  Hypnosis has been shown to have a significant effect in terms of pain control, which seems to indicate the mind’s capacity to override one’s tactile senses.  And virtual reality experiments have demonstrated the ability for the mind to be fooled into feeling something that isn’t there.

In addition, technology can be made to create havoc with our sense of touch, although the most dramatic of such effects are decades into the future.  Let me explain…

Computer Scientist J. Storrs Hall developed the concept of a “Utility Fog.”  Imagine a “nanoscopic” object called a Foglet, which is an intelligent nanobot, capable of communicating with its peers and having arms that can hook together to form larger structures.  Trillions of these Foglets could conceivably fill a room and not be at all noticeable as long as they were in “invisible mode.”  In fact, not only might they be programmed to appear transparent to the sight, but they may be imperceptible to the touch.  This is not hard to imagine, if you allow that they could have sensors that detect your presence.  For example, if you punch your fist into a swarm of nanobots programmed to be imperceptible, they would sense your motion and move aside as you swung your fist through the air.  But at any point, they could conspire to form a structure – an impenetrable wall, for example.  And then your fist would be well aware of their existence.  In this way, technology may be able to have a dramatic effect on our complete ability to determine what is really “there.”


But even now, long before nanobot swarms are possible, the mystery really begins, as we have to dive deeply into what is meant by “feeling” something.

Feeling is the result of a part of our body coming in contact with another object.  That contact is “felt” by the interaction between the molecules of the body and the molecules of the object.

Even solid objects are mostly empty space.  If subatomic particles, such as neutrons, are made of solid mass, like little billiard balls, then 99.999999999999% of normal matter would still be empty space.  That is, of course, unless those particles themselves are not really solid matter, in which case, even more of space is truly empty, more about which in a bit.
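That string of nines follows directly from the ratio of nuclear to atomic radii, since volume scales with the cube of radius. With typical textbook values (~10^-15 m for a nucleus, ~10^-10 m for an atom):

```python
r_nucleus = 1e-15  # meters, typical nuclear radius
r_atom = 1e-10     # meters, typical atomic radius

# Volume goes as radius cubed, so the "solid" fraction of an atom is
# (1e-5)^3 = 1e-15 of its volume.
solid_fraction = (r_nucleus / r_atom) ** 3
empty_percent = (1 - solid_fraction) * 100
print(empty_percent)  # ~99.9999999999999 percent empty space
```

Even granting the billiard-ball picture of the nucleus, matter is a quadrillion parts emptiness to one part “stuff.”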

So why don’t solid objects like your fist slide right through other solid objects like bricks?  Because of the repulsive effect that the electromagnetic force from the electrons in the fist apply against the electromagnetic force from the electrons in the brick.

But what about that neutron?  What is it made of?  Is it solid?  Is it made of the same stuff as all other subatomic particles?

The leading theories of matter do not favor the idea that subatomic particles are like little billiard balls of differing masses.  For example, string theorists speculate that all particles are made of the same stuff; namely, vibrating bits of string.  Except that they each vibrate at different frequencies.  Problem is, string theory is purely theoretical and really falls more in the mathematical domain than the scientific domain, inasmuch as there is no supporting evidence for the theory.  If it does turn out to be true, even the neutron is mostly empty space because the string is supposedly one-dimensional, with a theoretical cross section of a Planck length.

Here’s where it gets really interesting…

Neutrinos are an extremely common yet extremely elusive particle of matter.  About 100 trillion neutrinos generated in the sun pass through our bodies every second.  Yet they barely interact at all with ordinary matter.  Neutrino capture experiments consist of configurations such as a huge underground tank containing 100,000 gallons of tetrachloroethylene buried nearly a mile below the surface of the earth.  100 billion neutrinos strike every square centimeter of the tank per second.  Yet any particular molecule of tetrachloroethylene is likely to interact with a neutrino only once every 10^36 seconds (which is billions of billions of times the age of the universe).
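The parenthetical comparison checks out to order of magnitude, taking ~13.8 billion years for the age of the universe:

```python
SECONDS_PER_YEAR = 3.156e7
age_universe_s = 13.8e9 * SECONDS_PER_YEAR  # ~4.4e17 seconds

# Mean wait between interactions for one molecule, as quoted above.
interaction_interval_s = 1e36

# The wait is a few billion billion universe-lifetimes long.
ratio = interaction_interval_s / age_universe_s
print(ratio)  # ~2e18
```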

The argument usually given for the neutrino’s elusiveness is that they are nearly massless (and therefore not easily captured by a nucleus) and charge-less (and therefore not subject to the electromagnetic force).  Then again, photons are massless and charge-less and are easily captured, as anyone who has spent too much time in the sun can attest.  So there has to be some other reason that we can’t detect neutrinos.  Unfortunately, given the current understanding of particle physics, no good answer is forthcoming.

And then there is dark matter.  This concept is the current favorite explanation for some anomalies in the orbital speeds of galaxies.  Gravity alone can’t explain the anomalies, so dark matter is inferred.  If it really exists, it represents about 83% of the mass in the universe, but it doesn’t interact with any of the known forces except gravity.  This means that dark matter is all around us; we just can’t see it or feel it.

So it seems that modern physics allows for all sorts of types of matter that we can’t see or feel.  When you get down to it, the reason for this is that we don’t understand what matter is at all.  According to the standard model of physics, particles should have no mass, unless there is a special quantum field that pervades the universe and gives rise to mass upon interacting with those particles.  Unfortunately, for that to have any credibility, the signature particle, the Higgs boson, would have to exist.  Thus far, it seems to be eluding even the most powerful of particle colliders.  One alternative theory of matter has it being an emergent property of particle fluctuations in the quantum vacuum.

For a variety of reasons, some of which are outlined in “The Universe – Solved!” and many others which have come to light since I wrote that book, I suspect that ultimately matter is simply a property of an entity that is described purely by data and a set of rules, driven by a complex computational mechanism.  Our attempt to discover the nature of matter is synonymous with our attempt to discover those rules and associated fundamental constants (data).

In terms of other things that we can’t perceive, new age enthusiasts might call out ghosts, spirits, auras, and all sorts of other mysterious invisible and tenuous entities.


Given that we know that things exist that we can’t perceive, one has to wonder if it might be possible for macroscopic objects, or even macroscopic entities that are driven by similar energies as humans, to be made from stuff that we can only tenuously detect, not unlike neutrinos or dark matter.  Scientists speculate about multiple dimensions and parallel universes via Hilbert Space and other such constructs.  If such things exist (and wouldn’t it be hypocritical of anyone to speculate or work out the math for such things if it weren’t possible for them to exist?), the rules that govern our interaction with them, across the dimensions, are clearly not at all understood.  That doesn’t mean that they aren’t possible.

In fact, the scientific world is filled with trends leading toward the implication of an information-based reality.

In which almost anything is possible.

Is Cosmology Heading for a Date with a Creator?

According to a recent article in New Scientist magazine, physicists “can’t avoid a creation event.”  (Sorry, you have to be a subscriber to read the full article.)  It boils down to the need to show that the universe could have been eternal into the past.  If it was not eternal, then there must have been a creation event.  Even uber-atheist Stephen Hawking acknowledges that a beginning to the universe would be “a point of creation… where science broke down. One would have to appeal to religion and the hand of God.”

Apparently, there are three established theories for how to get around the idea of a creator of the big bang.  But cosmologist Alexander Vilenkin demonstrated last week how all of those theories now necessitate a beginning:

1. The leading idea has been the possibility that the universe has been eternally expanding (inflating).  Recent analysis, however, shows that inflation has a lower limit preventing it from being eternal in the past.

2. Another possibility was the cyclic model, but Vilenkin has shot a hole in that one as well, courtesy of the second law of thermodynamics.  Either every cycle would have to be more disordered, in which case after an infinite number of cycles our current cycle should have reached heat death (it hasn’t), or the universe would have to be getting bigger with each cycle, implying a creation event at some cycle in the past.

3. The final hope for the atheistic point of view was a lesser known proposal called the cosmic egg.  But Vilenkin showed last year that this could not have existed eternally due to quantum instabilities.

Is science slowly coming to terms with the idea of an intelligent designer of the universe?  The evidence is overwhelming and Occam’s Razor points to a designer, yet science clings to the anti-ID point of view as if it is a religion.

Ironic.

Yesterday’s Sci-Fi is Tomorrow’s Technology

It is the end of 2011 and it has been an exciting year for science and technology.  Announcements about artificial life, earthlike worlds, faster-than-light particles, clones, teleportation, memory implants, and tractor beams have captured our imagination.  Most of these things would have been unthinkable just 30 years ago.

So, what better way to close out the year than to take stock of yesterday’s science fiction in light of today’s reality and tomorrow’s technology.  Here is my take:

[Table: yesterday’s sci-fi vs. today’s reality and tomorrow’s technology]

Time to Revise Relativity?: Part 2

In “Time to Revise Relativity: Part 1”, I explored the idea that faster-than-light (FTL) travel might be permitted by Special Relativity without necessitating a violation of causality, a position not held by most mainstream physicists.

The reason this idea is not well supported is that Einstein’s postulate that light travels at the same speed in all reference frames gave rise to all sorts of conclusions about reality, such as the idea that it is all described by a space-time that has fundamental limits to its structure.  The Lorentz factor is a consequence of this view of reality, so its use is limited to subluminal effects, and it is undefined for calculating relativistic distortions past c.

Lorentz factor: γ = 1 / √(1 − v²/c²)
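The behavior of the Lorentz factor at and beyond c can be seen by evaluating it numerically, using complex arithmetic so the expression can be probed past c, where the square root of a negative number makes the result formally imaginary:

```python
import cmath

def lorentz_factor(v, c=1.0):
    """gamma = 1 / sqrt(1 - v^2/c^2), with v in units of c.
    cmath lets us evaluate the formula for v > c, where the
    argument of the square root goes negative and gamma
    becomes imaginary."""
    return 1 / cmath.sqrt(1 - (v / c) ** 2)

print(lorentz_factor(0.6))   # ~1.25: modest time dilation
print(lorentz_factor(0.99))  # ~7.09: extreme dilation near c
print(lorentz_factor(2.0))   # purely imaginary: no real-valued measurement
```

This is exactly the “undefined past c” point: the formula doesn’t blow up beyond c so much as stop returning real numbers.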

So then, what exactly is the roadblock to exceeding the speed of light?

Yes, there may be a natural speed limit to the transmission of known forces in a vacuum, such as the electromagnetic force.  And there may certainly be a natural limit to the speed of an object at which we can make observations utilizing known forces.  But, could there be unknown forces that are not governed by the laws of Relativity?

The current model of physics, called the Standard Model, incorporates the idea that all known forces are carried by corresponding particles, which travel at the speed of light if massless (like photons and gluons) or less than the speed of light if they have mass (like the W and Z bosons), all consistent with, or derived from, the assumptions of relativity.  Problem is, there are all sorts of “unfinished business” and inconsistencies within the Standard Model.  Gravitons have yet to be discovered, Higgs bosons don’t seem to exist, gravity and quantum mechanics are incompatible, and many things just don’t have a place in the Standard Model, such as neutrino oscillations, dark energy, and dark matter.  Some scientists even speculate that dark matter is due to a flaw in the theory of gravity.  So, given the incompleteness of that model, how can anyone say for certain that all forces have been discovered and that Einstein’s postulates are sacrosanct?

Given that barely 100 years ago we didn’t know any of this stuff, imagine what changes to our understanding of reality might happen in the next 100 years.  Such as these Wikipedia entries from the year 2200…

–       The ultimate constituent of matter is nothing more than data

–       A subset of particles and corresponding forces, limited in speed to c, represents what used to be considered the core of the so-called Standard Model; their motion through Einstein’s space-time is well described by the Special Theory of Relativity.

–       Since then, we have realized that Einsteinian space-time is an approximation to the truer reality that encompasses FTL particles and forces, including neutrinos and the force of entanglement.  The beginning of this shift in thinking occurred due to the first superluminal neutrinos found at CERN in 2011.

So, with that in mind, let’s really explore a little about the possibilities of actually cracking that apparent speed limit…

For purposes of our thought experiments, let’s define S as the “stationary” reference frame in which we are making measurements and R as the reference frame of the object undergoing relativistic motion with respect to S.  If a mass m is traveling at c with respect to S, then measuring that mass in S (via whatever methods could be employed to measure it; energy, momentum, etc.) will give an infinite result.  However, in R, the mass doesn’t change.

What if m went faster than c, such as might be possible with a sci-fi concept like a “tachyonic afterburner”?  What would an observer at S see?

Going by our relativistic equations, m now becomes imaginary when measured from S, because the argument in the square root of the mass correction factor is now negative.  But what if this asymptotic property really represents more of an event horizon than an impenetrable barrier?  A commonly used model for the event horizon is the surface around a black hole at which gravity prevents light from escaping.  Anything falling past that point can no longer be observed from the outside.  Instead, it would look as if that object froze on the horizon, because time stands still there.  Or so some cosmologists say.  This is an interesting model to apply to the idea of superluminality as mass m continues to accelerate past c.

From the standpoint of S, the apparent mass is now infinite, but that is ultimately based on the fact that we can’t perceive speeds past c.  Once something goes past c, one of two things might happen.  The object might disappear from view due to the fact that the light that it generated that would allow us to observe it can’t keep up with its speed.  Alternatively, invoking the postulate that light speed is the same in all reference frames, the object might behave like it does on the event horizon of the black hole – forever frozen, from the standpoint of S, with the properties that it had when it hit light speed.  From R, everything could be hunky dory.  Just cruising along at warp speed.  No need to say that it is impossible because mass can’t exceed infinity, because from S, the object froze at the event horizon.  Relativity made all of the correct predictions of properties, behavior, energy, and mass prior to light speed.  Yet, with this model, it doesn’t preclude superluminality.  It only precludes the ability to make measurements beyond the speed of light.

That is, of course, unless we can figure out how to make measurements utilizing a force or energy that travels at speeds greater than c.  If we could, those measurements would yield results with correction factors only at speeds relatively near THAT speed limit.

Let’s imagine an instantaneous communication method.  Could there be such a thing?

One possibility might be quantum entanglement.  The Delayed Choice Quantum Eraser experiment (an extension of John Wheeler’s delayed-choice thought experiment) seems to imply non-causality and the ability to erase the past.  Integral to this experiment is the concept of entanglement.  So perhaps it is not a stretch to imagine that entanglement might embody a communication method that creates some strange effects when integrated with observational effects based on traditional light-and-sight methods.

What would the existence of that method do to relativity?   Nothing, according to the thought experiments above.

There are, however, some relativistic effects that seem to stick, even after everything has returned to the original reference frame.  This would seem to violate the idea that the existence of an instantaneous communication method invalidates the need for relativistic correction factors applied to anything that doesn’t involve light and sight.

For example, there is the very real effect that clocks that have been moving at high speed (reference frame R) exhibit a loss of time once they return to reference frame S, fully explained by time dilation.  Using this effect as the basis for a thought experiment like the twin paradox, there might seem to be a problem with the event horizon idea.  Imagine Alice and Bob, both aged 20.  After Alice travels at speed c to a star 10 light years away and returns, her age should still be 20, while Bob is now 40.  If we were to allow superluminal travel, it would appear that Alice would have to get younger, or something.  But, recalling the twin paradox, it is all about the relative observations that Bob, in reference frame S, and Alice, in reference frame R, made of each other.  Again, at superluminal speeds, Alice may appear to hit an event horizon according to Bob.  So she will never drop below her original age.
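The sub-light side of this thought experiment is easy to check numerically.  A minimal sketch (working in units where c = 1, with the 10-light-year distance from the example) shows Bob’s elapsed time approaching 20 years while Alice’s proper time approaches zero as her speed approaches c:

```python
import math

C = 1.0          # work in units where c = 1 (distances in light years, times in years)
DISTANCE = 10.0  # one-way distance to the star, in light years

def round_trip_times(v):
    """Return (Bob's elapsed time, Alice's proper time) for a round trip at speed v < c."""
    bob_time = 2 * DISTANCE / v              # coordinate time in Bob's frame S
    gamma = 1 / math.sqrt(1 - v**2 / C**2)   # Lorentz factor (imaginary if v > c)
    alice_time = bob_time / gamma            # proper time aboard the ship
    return bob_time, alice_time

# As v -> c, Bob ages ~20 years while Alice's proper time shrinks toward 0,
# matching the ages 40 and 20 in the example above.
for v in (0.9, 0.99, 0.9999):
    bob, alice = round_trip_times(v)
    print(f"v = {v}c: Bob ages {bob:.2f} yr, Alice ages {alice:.2f} yr")
```

Note that `math.sqrt` raises an error for v > c, which is the numerical version of the “event horizon” this section is describing.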

But what about her?  From her perspective, her trip is instantaneous due to an infinite Lorentz contraction factor; hence she doesn’t age.  If she travels at 2c, her view of the universe might hit another event horizon, one that prevents her from experiencing any Lorentz contraction beyond c; hence, her trip will still appear instantaneous, no aging, no age reduction.

So why would an actual relativistic effect like reduced aging occur in a universe where an infinite communication speed might be possible?  In other words, what ties time to the speed of light instead of some other speed limit?

It may be simply because that’s the way it is.  It appears that relativistic equations may not necessarily impose a barrier to superluminal speeds, superluminal information transfer, or even acceleration past the speed of light.  In fact, if we accept that relativity says nothing about what happens past the speed of light, we are free to suggest that the observable effects freeze at c.  Perhaps traveling past c does nothing more than create unusual effects, like disappearing objects or things freezing at event horizons, until they slow back down to an “observable” speed.  We certainly don’t yet have enough evidence to investigate further.

But perhaps CERN has provided us with our first data point.

Time Warp

Abiotic Oil or Panspermia – Take Your Pick

Astronomers from the University of Hong Kong investigated infrared emissions from deep space, and everywhere they looked they found signatures of complex organic matter.

You read that right.  Complex organic molecules; the kind that are the building blocks of life!

How they are created in the stellar infernos is a complete mystery.  The chemical structure of these molecules is similar to that of coal or oil, which, according to mainstream science, come from ancient biological material.

So, there seem to be only two explanations, each of which has astounding implications.

One possibility is that the molecules responsible for these spectral signatures are truly organic, in the biological “earth life” sense of the word.  I don’t think I have to point out the significance of that possibility.  It would certainly give new credence to the panspermia theory, suggesting that we are but distant relatives or descendants of life forms that permeate the universe.  ETs are our brothers.

The other possibility is that these molecules are organic but not of biological origin; instead, they are somehow created within the star itself.  Since they resemble the organic molecules in coal and oil, and since the earth was created from the same protoplanetary disk that formed our sun, this would suggest that oil and coal may likewise not come from biological organic material.

In other words, this discovery seems to lend a lot of support to the abiotic oil theory.

That or we have evidence that we are not alone.

Either way, a significant find.

Buried in the news.

Things We Can Never Comprehend

Have you ever wondered what we don’t know?  Or, to put it another way, how many mysteries of the universe are still to be discovered?

To take this thought a step further, have you ever considered that there may be things that we CAN’T understand, no matter how hard we try?

This idea may be shocking to some, especially to those scientists who believe that we are nearing the “Grand Unified Theory”, or “Theory of Everything” that will provide a simple and elegant solution to all forces, particles, and concepts in science.  Throughout history, the brightest of minds have been predicting the end of scientific inquiry.  In 1871, James Clerk Maxwell lamented the sentiment of the day which he represented by the statement “in a few years, all great physical constants will have been approximately estimated, and that the only occupation which will be left to men of science will be to carry these measurements to another place of decimals.”

Yet why does it always seem that the closer we get to the answers, the more monkey wrenches get thrown into the works?  In today’s world, these include strange particles that don’t fit the model.  And dark matter.  And unusual gravitational aberrations in distant galaxies.

Perhaps we need a dose of humility.  Perhaps the universe, or multiverse, or whatever term is being used these days to denote “everything that is out there,” is just too far beyond our intellectual capacity.  Before you call me out on this heretical thought, consider…

The UK’s Astronomer Royal, Sir Martin Rees, points out that “a chimpanzee can’t understand quantum mechanics.”  Richard Feynman famously claimed that nobody understands quantum mechanics, but, as Michael Brooks points out in his recent article “The limits of knowledge: Things we’ll never understand,” comprehension of something like quantum mechanics is simply beyond the capacity of certain species of animals, no matter how hard they might try.  Combine this realization with anthropologists’ estimate that the most recent common ancestor of humans and chimps (aka the CHLCA) lived about 6 million years ago, and we can draw a startling conclusion:

There are certainly things about our universe and reality that are completely beyond our ability to comprehend!

My reasoning is as follows. Chimps are certainly at least as intelligent as the CHLCA; otherwise evolution would be working in reverse.  As an upper bound, let’s say that the CHLCA and chimps are intellectually equivalent.  The CHLCA was certainly not able to comprehend QM (nor relativity, nor even Newtonian physics), but over the roughly 6 million years it took to evolve into humans, our new species became able to comprehend these things.  Those 6 million years represent about 0.04% of the entire age of the universe (according to what we think we know).  That means that for 99.96% of the total time that the universe and life were evolving up to the current point, the most advanced creature on earth was incapable of understanding the most rudimentary concepts about the workings of reality and the universe.  And yet, are we to suppose that in the last 0.04% of that time, a species has evolved that can understand everything?  I’m sure you see how unlikely that is.
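The arithmetic behind that percentage is a one-liner.  A quick sketch, using the commonly cited range of 6 to 8 million years for the human/chimp split and a 13.7-billion-year-old universe:

```python
# Back-of-the-envelope check: what fraction of cosmic history has passed
# since the human/chimp split?  (Commonly cited range: 6 to 8 million years.)
AGE_OF_UNIVERSE = 13.7e9  # years, per "what we think we know"

for split_years_ago in (6e6, 8e6):
    fraction = split_years_ago / AGE_OF_UNIVERSE
    print(f"{split_years_ago / 1e6:.0f} Myr ago -> {fraction:.2%} of cosmic history")
```

Either way, the species capable of quantum mechanics has existed for well under a tenth of a percent of the universe’s lifetime.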

What if our universe was intelligently designed?  The same argument would probably hold.  For some entity to be capable of creating a universe that continues to baffle us no matter how much we think we understand, that entity must be far beyond our intelligence, and therefore has utilized, in the design, concepts that we can’t hope to understand.

Our only chance of being supremely capable of understanding our world would lie in the programmed reality model.  If the creator of our simulation were us, or even an entity a little more advanced than us, it could lead us along a path of exploration and knowledge discovery that always seems to be just slightly beyond our grasp.  Doesn’t that idea feel familiar?


Why Worry about ET, Stephen Hawking?

Famous astrophysicist Stephen Hawking made the news recently when he called for us to stop attempting to contact ET.  No offense to Dr. Hawking and other scientists who hold similar points of view, but I find the whole argument about dangerous ETs, to use a Vulcan phrase, “highly illogical.”

First of all, there is the whole issue of the ability to contact ET at all.  As I showed in my post “Could Gliesians be Watching Baywatch”, it is virtually impossible to communicate with any extraterrestrial civilization beyond our solar system without significant power and antenna gain.  The world’s most powerful radio astronomy dish, at Arecibo, has a gain of 60 dB, which means that it could barely detect a 100 kilowatt non-directional signal generated from a planet 20 light years away, such as Gliese 581g, and only if it were pointed right at it.  More to the point, what are the odds that such a civilization would be at the right level of technology to be communicating with us, using a technique that overlaps what we know?
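To give a feel for why such a link is so marginal, here is a rough link-budget sketch for this scenario.  The 100 kW transmitter power, 20-light-year distance, and 60 dB gain come from the text; the 1.4 GHz observing frequency is an illustrative assumption, and no claim is made about the exact detection threshold, which depends on receiver noise, bandwidth, and integration time:

```python
import math

# Link budget: 100 kW isotropic transmitter, 20 light years away,
# received by a dish with 60 dB of gain (frequency assumed: 1.4 GHz).
P_TX = 100e3                  # transmitter power, watts
LIGHT_YEAR = 9.461e15         # meters
d = 20 * LIGHT_YEAR           # distance to a Gliese 581g-like planet
GAIN_DB = 60.0                # receive antenna gain, from the text
FREQ = 1.4e9                  # Hz (assumed)

wavelength = 3e8 / FREQ
gain = 10 ** (GAIN_DB / 10)                   # 60 dB -> factor of 1,000,000
a_eff = gain * wavelength**2 / (4 * math.pi)  # effective aperture, m^2
flux = P_TX / (4 * math.pi * d**2)            # power flux at the receiver, W/m^2
p_rx = flux * a_eff                           # total received power, watts

print(f"Flux at Earth:   {flux:.2e} W/m^2")
print(f"Received power:  {p_rx:.2e} W")
```

The received power works out to a vanishingly small fraction of a watt, which is why “barely detect” is the operative phrase.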

Using the famous Drake equation, N=R*·fp·ne·fl·fi·fc·L, with the following best estimates for parameters: R*= 10/year, fp= .5, ne= 2, fl= .5, fi= .001 (highly speculative), fc= .01, L=50 (duration in years of the radio transmitting period of a civilization), we get .0025 overlapping radio wave civilizations per galaxy.  But if you then factor in the (im)probabilities of reaching those star systems (I used a megawatt of power into an Arecibo-sized radio telescope), the likelihood of another “advanced technology” civilization even developing radio waves, the odds that we happen to be  pointing our radio telescope arrays at each other at the same time, and the odds that we are using the same frequency, we get a probability of 1.25E-22.  For those who don’t like scientific notation, how about .0000000000000000000000125.  (Details will be in a forthcoming paper that I will post on this site.  I’ll replace this text with the link once it is up)
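The .0025 figure can be reproduced directly from the quoted parameters.  A minimal sketch (the parameter values are the post’s estimates, not established figures):

```python
from functools import reduce

# Drake equation: N = R* . fp . ne . fl . fi . fc . L
params = {
    "R_star": 10,   # star formation rate, stars per year
    "fp": 0.5,      # fraction of stars with planets
    "ne": 2,        # habitable planets per star with planets
    "fl": 0.5,      # fraction of those on which life develops
    "fi": 0.001,    # fraction developing intelligence (highly speculative)
    "fc": 0.01,     # fraction developing detectable communication
    "L": 50,        # years a civilization transmits radio
}

N = reduce(lambda a, b: a * b, params.values())
print(f"N = {N:.4f} overlapping radio-wave civilizations per galaxy")
```

Multiplying the factors out gives N = 0.0025, as stated above; the further (im)probability factors in the text are not reproduced here.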

So why is Stephen Hawking worried about us sending a message that gets intercepted by ET?  Didn’t anyone do the math?

But there is a second science/sci-fi meme that I also find highly illogical: that malevolent ETs may want to mine our dear old earth for some mineral.  Really?  Are we to believe that ET has figured out how to transcend relativity, exceed the speed of light, and power a ship across the galaxy using technology far beyond our understanding, yet is still unable to master the control of the elements?  We have been transmuting elements for 70 years.  Even gold was artificially created by bombarding mercury atoms with neutrons as far back as 1941.  Gold could be created in an accelerator or nuclear reactor at any time, although making the process economically practical might take a few more years.  But if gold, or any particular element, were important enough to justify flying across the galaxy and subjugating another civilization, then economics should not be an issue.  Simple nuclear technology can create gold far more easily than it can power a spaceship through space at near light speed.

Even if our space-traveling friends needed something on Earth that couldn’t possibly be obtained through technology, would they really be so imperialistic as to invade and steal our resources?  Over the course of human evolution, as technology and knowledge have developed, so have our ethical sensibilities and social behavior.  Of course, there is still “Jersey Shore” and “Jackass,” but by and large we have advanced our ethical values along with our technological advances, and there is no reason to think that these wouldn’t also go hand in hand in any other civilization.

So while I get that science fiction needs to have a compelling rationale for ET invasion because it is a good story, I fail to understand the fear that some scientists have that extraterrestrials will actually get all Genghis Khan on us.