Time to Revise Relativity?: Part 2

In “Time to Revise Relativity: Part 1”, I explored the idea that faster-than-light (FTL) travel might be permitted by Special Relativity without necessarily violating causality, a position that most mainstream physicists reject.

The reason this idea is not well supported is that Einstein’s postulate that light travels at the same speed in all reference frames gave rise to all sorts of conclusions about reality, such as the idea that it is all described by a space-time that has fundamental limits to its structure.  The Lorentz factor is a consequence of this view of reality, and so its use is limited to subluminal effects; it is undefined when it comes to calculating relativistic distortions past c.

The Lorentz factor: γ = 1/√(1 − v²/c²)

So then, what exactly is the roadblock to exceeding the speed of light?

Yes, there may be a natural speed limit to the transmission of known forces in a vacuum, such as the electromagnetic force.  And there may certainly be a natural limit to the speed of an object at which we can make observations utilizing known forces.  But, could there be unknown forces that are not governed by the laws of Relativity?

The current model of physics, called the Standard Model, incorporates the idea that all known forces are carried by corresponding particles, which travel at the speed of light if massless (like photons and gluons) or slower than light if they have mass (like the W and Z bosons), all consistent with, or derived from, the assumptions of Relativity.  The problem is, there is all sorts of “unfinished business” and there are inconsistencies in the Standard Model.  Gravitons have yet to be discovered, the Higgs boson has yet to be observed, gravity and quantum mechanics are incompatible, and many things simply have no place in the Standard Model, such as neutrino oscillations, dark energy, and dark matter.  Some scientists even speculate that dark matter is due to a flaw in the theory of gravity.  So, given the incompleteness of that model, how can anyone say for certain that all forces have been discovered and that Einstein’s postulates are sacrosanct?

Given that barely 100 years ago we didn’t know any of this stuff, imagine what changes to our understanding of reality might happen in the next 100 years.  Such as these Wikipedia entries from the year 2200…

–       The ultimate constituent of matter is nothing more than data

–       A subset of particles and corresponding forces that are limited in speed to c represent what used to be considered the core of the so-called Standard Model and are consistent with Einstein’s view of space-time, the motion of which is well described by the Special Theory of Relativity.

–       Since then, we have realized that Einsteinian space-time is an approximation to the truer reality that encompasses FTL particles and forces, including neutrinos and the force of entanglement.  The beginning of this shift in thinking occurred due to the first superluminal neutrinos found at CERN in 2011.

So, with that in mind, let’s really explore a little about the possibilities of actually cracking that apparent speed limit…

For purposes of our thought experiments, let’s define S as the “stationary” reference frame in which we are making measurements and R as the reference frame of the object undergoing relativistic motion with respect to S.  If a mass m is traveling at c with respect to S, then measuring that mass in S (via whatever methods could be employed to measure it: energy, momentum, etc.) will give an infinite result.  However, in R, the mass doesn’t change.

What if m went faster than c, such as might be possible with a sci-fi concept like a “tachyonic afterburner”?  What would an observer at S see?

Going by our relativistic equations, m now becomes imaginary when measured from S, because the argument in the square root of the mass correction factor is now negative.  But what if this asymptotic property really represents more of an event horizon than an impenetrable barrier?  The event horizon of a black hole, for example, is the boundary within which gravity prevents light from escaping.  Anything falling past that point can no longer be observed from the outside.  Instead, it would look as if the object froze on the horizon, because time appears to stand still there.  Or so some cosmologists say.  This is an interesting model to apply to the idea of superluminality as mass m continues to accelerate past c.

From the standpoint of S, the apparent mass is now infinite, but that is ultimately based on the fact that we can’t perceive speeds past c.  Once something goes past c, one of two things might happen.  The object might disappear from view due to the fact that the light that it generated that would allow us to observe it can’t keep up with its speed.  Alternatively, invoking the postulate that light speed is the same in all reference frames, the object might behave like it does on the event horizon of the black hole – forever frozen, from the standpoint of S, with the properties that it had when it hit light speed.  From R, everything could be hunky dory.  Just cruising along at warp speed.  No need to say that it is impossible because mass can’t exceed infinity, because from S, the object froze at the event horizon.  Relativity made all of the correct predictions of properties, behavior, energy, and mass prior to light speed.  Yet, with this model, it doesn’t preclude superluminality.  It only precludes the ability to make measurements beyond the speed of light.

That is, of course, unless we can figure out how to make measurements utilizing a force or energy that travels at speeds greater than c.  If we could, those measurements would yield results with correction factors only at speeds relatively near THAT speed limit.
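This idea is easy to make concrete with a speculative little sketch: the standard Lorentz factor, but with the limiting speed as a parameter.  To be clear, any limiting speed other than c (the 10c below, for example) is pure invention in the spirit of this thought experiment, not established physics:

```python
import math

C = 299_792_458.0  # speed of light, in m/s

def gamma(v, speed_limit=C):
    """Lorentz-style correction factor, with the limiting speed as a parameter.
    Any speed_limit other than c is pure speculation, for illustration only."""
    return 1.0 / math.sqrt(1.0 - (v / speed_limit) ** 2)

# With light as the limit, v = 0.99c is already highly relativistic...
g_light = gamma(0.99 * C)                  # ~7.09
# ...but measured against a hypothetical 10c limit, the same speed
# would show almost no correction at all
g_hypothetical = gamma(0.99 * C, 10 * C)   # ~1.005
```

In other words, a force with a higher natural speed limit would carry its own, much gentler, correction factor at ordinary relativistic speeds.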

Let’s imagine an instantaneous communication method.  Could there be such a thing?

One possibility might be quantum entanglement.  John Wheeler’s Delayed Choice Quantum Eraser experiment seems to imply non-causality and the ability to erase the past.  Integral to this experiment is the concept of entanglement.  So perhaps it is not a stretch to imagine that entanglement might embody a communication method that creates some strange effects when integrated with observational effects based on traditional light and sight methods.

What would the existence of that method do to relativity?   Nothing, according to the thought experiments above.

There are, however, some relativistic effects that seem to stick, even after everything has returned to the original reference frame.  This would seem to violate the idea that the existence of an instantaneous communication method invalidates the need for relativistic correction factors applied to anything that doesn’t involve light and sight.

For example, there is the very real effect that clocks once moving at high speeds (reference frame R) exhibit a loss of time once they return to the reference frame S, fully explained by time dilation effects.  It would seem that, using this effect as a basis for a thought experiment like the twin paradox, there might be a problem with the event horizon idea.  For example, let us imagine Alice and Bob, both aged 20.  After Alice travels at speed c to a star 10 light years away and returns, her age should still be 20, while Bob is now 40.  If we were to allow superluminal travel, it would appear that Alice would have to get younger, or something.  But, recalling the twin paradox, it is all about the relative observations that were made by Bob in reference frame S, and Alice, in reference frame R, of each other.  Again, at superluminal speeds, Alice may appear to hit an event horizon according to Bob.  So, she will never reduce her original age.
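The arithmetic behind Bob’s and Alice’s ages is easy to check numerically.  Since the dilation factor is degenerate exactly at c, the sketch below uses v = 0.9999c as a stand-in; the starting ages and the 10-light-year distance come from the example above:

```python
import math

def round_trip_ages(distance_ly, v_over_c, start_age=20.0):
    """Ages after a there-and-back trip: stay-at-home twin vs. traveler."""
    elapsed_home = 2.0 * distance_ly / v_over_c                       # years in frame S
    elapsed_traveler = elapsed_home * math.sqrt(1.0 - v_over_c ** 2)  # proper time in R
    return start_age + elapsed_home, start_age + elapsed_traveler

# 10 light years each way at v = 0.9999c (a stand-in for v = c)
bob_age, alice_age = round_trip_ages(10, 0.9999)
# Bob is just over 40; Alice has aged only a few months
```

As v approaches c, Alice’s elapsed time shrinks toward zero, matching the “still 20” of the thought experiment.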

But what about her?  From her perspective, her trip is instantaneous due to an infinite Lorentz contraction factor; hence she doesn’t age.  If she travels at 2c, her view of the universe might hit another event horizon, one that prevents her from experiencing any Lorentz contraction beyond c; hence, her trip will still appear instantaneous, no aging, no age reduction.

So why would an actual relativistic effect like reduced aging occur in a universe where an infinite communication speed might be possible?  In other words, what would tie time to the speed of light instead of some other speed limit?

It may be simply because that’s the way it is.  It appears that relativistic equations may not necessarily impose a barrier to superluminal speeds, superluminal information transfer, or even acceleration past the speed of light.  In fact, if we accept that relativity says nothing about what happens past the speed of light, we are free to suggest that the observable effects freeze at c.  Perhaps traveling past c does nothing more than create unusual effects, like disappearing objects or things freezing at event horizons, until things slow back down to an “observable” speed.  We simply don’t yet have enough evidence to investigate further.

But perhaps CERN has provided us with our first data point.


Abiotic Oil or Panspermia – Take Your Pick

Astronomers from the University of Hong Kong investigated infrared emissions from deep space, and everywhere they looked, they found signatures of complex organic matter.

You read that right.  Complex organic molecules: the kind that are the building blocks of life!

How they are created in the stellar infernos is a complete mystery.  The chemical structure of these molecules is similar to that of coal or oil, which, according to mainstream science, come from ancient biological material.

So, there seem to be only two explanations, each of which has astounding implications.

One possibility is that the molecules responsible for these spectral signatures are truly organic, in the biological “earth life” sense of the word.  I don’t think I have to point out the significance of that possibility.  It would certainly give new credence to the panspermia theory, suggesting that we are but distant relatives or descendants of life forms that permeate the universe.  ETs are our brothers.

The other possibility is that these molecules are organic but not of biological origin.  Instead, they are somehow created within the star itself.  Given that they resemble the organic molecules in coal and oil, and that the earth was created from the same protoplanetary disk that formed our sun, it would seem that if such molecules can be generated non-biologically in stars, then oil and coal are probably also not created from biological organic material.

In other words, this discovery seems to lend a lot of support to the abiotic oil theory.

That or we have evidence that we are not alone.

Either way, a significant find.

Buried in the news.

Time to Revise Relativity?: Part 1

Special Relativity.

Causality.

Faster than light (FTL) travel.

Most physicists say that you can only hope for at most two of these three concepts to hold.

Special Relativity has the advantage of 100 years of supporting experimental evidence.

Causality has the advantage of 1000s of years of philosophic thought, and daily experience (at least until very recently – see Rewriting the Past).

Which seems to be bad news for faster than light travel.  But we all so much want FTL travel to be true.  How else are we supposed to communicate with ET?

Well, Special Relativity may have received its first chink in the armor.  Particle physicists at CERN recently released a report on experimental evidence of FTL neutrinos.  The reported 6-sigma significance implies that the probability of these results being a mere statistical fluctuation is vanishingly small, meaning that they may need to be taken seriously.
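For a sense of what a 6-sigma result means in probability terms, here is a one-line calculation of the normal-distribution tail (a generic statistics sketch, not CERN’s own analysis):

```python
import math

def sigma_to_p(n_sigma):
    """One-sided tail probability of a normal fluctuation at n_sigma."""
    return 0.5 * math.erfc(n_sigma / math.sqrt(2))

# six sigma: roughly one chance in a billion that the result is a fluke
p6 = sigma_to_p(6)
```

For comparison, the 5-sigma threshold conventionally used to claim a particle-physics discovery works out to about 1 chance in 3.5 million.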

So, which concept falls by the wayside: Special Relativity (sorry, Albert)?    Or Causality (sorry, Aristotle)?  Alternatively, maybe the “2 outta 3” rule needs revision.

As usual, I have an opinion.

And it is…

1. Special Relativity holds for the moment.  But we need to stop using circular logic for relativistic effects.  We need to stop drawing FTL paths on Minkowski diagrams that are based on the assumption that FTL is impossible.  And, finally, we have to come to terms with the fact that Special Relativity has to do with subluminal speeds and is UNDEFINED at FTL.

2. Causality holds for the moment.  At least in the context of our conventional space-time.  Throw in inter-Hilbert Space travel or Programmed Reality and all bets are off for Causality. (again see Rewriting the Past for more on the latter)

3. Given the caveats in #1, maybe we can get 3 outta 3.

Here’s just one example where it seems to fit:

Imagine a supersonic jet travelling at twice the speed of sound (2S meters/second, where S is the speed of sound) in the land of the blind, flying straight toward a blind observer who stands 10*S meters away at t=0.  At t=0, an audible event (call it Event A, the cause) occurs on the jet, such as an explosion on board the plane.  The sound waves from Event A reach the observer in 10 seconds.  At t=1 second, the entire jet explodes as the gas tanks catch fire (Event B, the effect).  At t=1, the jet is 8*S meters from the observer, since it is traveling at 2S, so the observer hears Event B eight seconds later.  In other words, the observer hears Event B at t=9 and Event A at t=10.  Therefore, the observer observes the effect before the cause.
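The arrival-time arithmetic can be checked in a few lines (the numeric value of S is illustrative; only the ratios matter):

```python
S = 343.0                 # speed of sound, in m/s (illustrative value)
jet_speed = 2 * S         # Mach 2, flying straight toward the observer

d_A = 10 * S              # distance at t = 0, when Event A (the cause) occurs
t_A, t_B = 0.0, 1.0
d_B = d_A - jet_speed * (t_B - t_A)   # 8*S: distance at t = 1, when Event B occurs

hear_A = t_A + d_A / S    # sound from the cause arrives at t = 10
hear_B = t_B + d_B / S    # sound from the effect arrives at t = 9
# the observer hears the effect a full second before the cause
```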

But that doesn’t mean that the effect happened before the cause.  It only appeared to happen that way in the observer’s reference frame.  Similarly, anyone on the jet (who could actually hear things happening outside) would observe a full sequence of events happening backwards in time.  Is this time travel?  No.  No one is going back in time.  They are just experiencing a sequence of events in reverse chronological order happening in someone else’s reference frame.  Is there any reason to assume that the same arguments would not also hold in the domain of light?

In fact, the same thing might happen if you hopped aboard the tachyonic neutrino express.  First of all, I should note that there is some debate about this whole idea of time unfolding in reverse at superluminal speeds.  Much of it stems from the nature of the Lorentz factor:
γ = 1/√(1 − v²/c²)

This is the factor that gets applied to time and distance to calculate time dilation and Lorentz contraction effects at relativistic speeds.  It is also the factor applied to mass to give the relativistic mass in Special Relativity.  It can easily be seen that as the velocity approaches c (the speed of light), the quantity under the square root sign approaches zero, causing the Lorentz factor to approach infinity.  For this reason, time stands still, mass goes to infinity, and the apparent size of the rest of the universe shrinks to zero at the speed of light.  Or, more accurately, “apparent size” as you would SEE it.  But what happens if you go past the speed of light?  In that case, the quantity under the square root sign is negative, and no real number has a negative square.  Mathematics has a trick, however, which is to define an entity i that, by definition, is the square root of -1.  Numbers containing i are called “imaginary” or complex numbers.  In the real world, these numbers actually have a great deal of use in fields like electrical engineering, where they are used to determine the phase between periodic signals, or in physics, where they are used to determine the relative angle between field vectors.  But what they might mean to relativity is really anybody’s guess.  It is for this reason that many physicists claim that you can’t accelerate past light speed; doing so would necessitate mass exceeding infinity or becoming “imaginary”.  Thus, the entire idea of traveling back in time is just one interpretation of what happens when the Lorentz factor goes imaginary.
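The behavior of the Lorentz factor on both sides of c is easy to see numerically.  Here is a short sketch using Python’s complex square root (the sample speeds are illustrative):

```python
import cmath

C = 299_792_458.0  # speed of light, in m/s

def lorentz_factor(v):
    """gamma = 1 / sqrt(1 - v^2/c^2); cmath lets us peek past c."""
    return 1 / cmath.sqrt(1 - (v / C) ** 2)

g_half = lorentz_factor(0.5 * C)   # ~1.155, an ordinary real number
g_fast = lorentz_factor(0.99 * C)  # ~7.09, growing toward infinity as v -> c
g_tachyon = lorentz_factor(2 * C)  # purely imaginary: the real part vanishes
```

Below c the factor is real and blows up toward infinity; past c it comes out purely imaginary, which is exactly the mathematical puzzle the paragraph above describes.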

So, let’s go with that idea on our tachyonic neutrino express, for the moment.  If you had hurtled through space superluminally in 1804 toward Aaron Burr and Alexander Hamilton, you would watch Hamilton “fall up” into a standing position, the bullet flying out of his stomach and back into Aaron Burr’s gun.  The duel would still have taken place in their reference frame.  Once you arrived in Weehawken, NJ and got off the transport, your reference frame would have shifted back to theirs.

One might wonder what happens when you land.  Does the sequence of events go forward again, in which case you could predict the future?  No, that would truly violate causality.  What happens is that you have to decelerate to stop, and as you approach light speed, the backwards time effect slows down.  When you cross over into subluminal, it reverses and the events start forward again from whatever point in the “past” was hit at light speed.  Then, you get to watch the events unfold again in the normal temporal direction.  By the time you decelerate and land, you are at the same point in time as Burr’s reference frame, well ahead of the event that you just witnessed.  Hamilton would be dead, of course.  No time travel, no ability to interact with the past.  No grandfather paradox to solve.  All relativity equations still make sense, from the standpoint of the observations that we can make via known observational methods.  We would still experience time dilation and Lorentz contraction up until we hit light speed.  After that, what happens is anybody’s guess.  But I have a theory.

It’s just going to have to wait until Part 2.


Smart Phones as Transformative Devices

I live in Southern California, where, at any point in time, about 1 out of every 2 people is staring at their phone.  As a long-time iPhone owner, I have to admit that I also fall into that category.  Smart phones are simply so enticing – camera, stock ticker, weather forecast, stored music, videos, and photos, GPS, maps, email, texting, twitter, facebook, games, radio rebroadcasts, internet, newspapers, webcams, and so much more.  What’s not to love?

The internet is often hailed as a transformative invention, which it certainly was.  But it kind of pales in comparison to that Droid in your pocket.  After all, the smart phone puts the internet at your fingertips, which, by itself, is transformative in how people interact.  Instead of having to call your buddy after you get home and look up the factoid that you argued about at the bar, now you can settle it immediately.  But, as the web app is just one of the thousands of apps that can be stored on the phone, it stands to reason that the transformative nature of the smart phone can be much more than the web.

For one thing, there is the impact on existing products and services.  Who needs GPS anymore, when you have an iPhone?  Who needs to hear terrestrial radio stations in your car when you can stream Pandora channels tailored to your interests?  Pagers? – a thing of the past.  With all of the market data available at your fingertips and mobile trading easily accessible, do we need the financial section of the newspaper any more?  Or stockbrokers?  While consulting at a large toy manufacturer recently, I observed that people use smart phones to comparison shop on the fly.  You’re standing in front of a camera at an electronics superstore, and in seconds you can determine if a competitor sells it cheaper.  Macy’s doesn’t have your size of that perfect shirt you found in the store?  Check online and find out who does.  I’m less inclined to stay at home to watch a game when I know I can keep track of my team at any time.  I don’t need to carry a pen to write anything down when I can take notes on my phone.  Shazam has saved me tons of time trying to figure out what that song was that I just heard on the radio.

But it’s not all good.

How many deaths are attributed to texting and driving?  Reuters estimates over 2000 per year and growing.  Celebrity plastic surgeon Dr. Frank Ryan drove off the Pacific Coast Highway while texting about his dog last year.

Still, these are all relatively small impacts to our society.  The real transformation is in terms of socialization.  At a glance, you can determine which of your friends are near the place where you are dining or drinking, potentially enabling slightly higher socialization.  But, to come back to my original point, what about all of those people staring at their phones all day?  If you are at a restaurant with your family or friends, but are obsessed with twittering, you aren’t really getting much out of the social outing.  When was the last time you made eye contact with someone walking down the street?  It’s kind of difficult if one or both people are staring at the device in their hand.  Maybe you just walked past the person that could become the love of your life.  You’ll never know it.  Maybe you just passed a former colleague who knows of the perfect new job for you.  Opportunity missed.  I even think that people are losing the ability to think.  Some of the best daydreaming, the best brainstorms, occur when you are out and about and simply thinking.  That doesn’t happen much anymore.  Standing at the curb waiting for the walk sign?  Might as well check email.  Waiting for an elevator?  Might as well see what’s going on on Facebook.  Sitting at a stoplight?  Might as well see if anyone responded to my last tweet.

We are doing less reading, more microblogging.  Less thinking, more context switching.  One has to assume that this will impact ideas, innovation, creativity.

Don’t get me wrong.  The last thing I am is a Luddite.  I embrace technology, I love technology.  For $10 I can download a Groovebox app for my iPad, the equivalent of which used to cost $600 and take up rack space.  I can’t wait to “goggle in” in “Snow Crash” parlance, and experience other realities.  But I also can’t help but wonder what we have lost whenever I watch two people crossing a street collide mid-intersection because they are both texting.

Oops, got a text message, gotta run…


Things We Can Never Comprehend

Have you ever wondered what we don’t know?  Or, to put it another way, how many mysteries of the universe are still to be discovered?

To take this thought a step further, have you ever considered that there may be things that we CAN’T understand, no matter how hard we try?

This idea may be shocking to some, especially to those scientists who believe that we are nearing the “Grand Unified Theory”, or “Theory of Everything”, that will provide a simple and elegant solution to all forces, particles, and concepts in science.  Throughout history, the brightest of minds have been predicting the end of scientific inquiry.  In 1871, James Clerk Maxwell lamented the sentiment of the day, which he characterized by the statement: “in a few years, all great physical constants will have been approximately estimated, and that the only occupation which will be left to men of science will be to carry these measurements to another place of decimals.”

Yet, why does it always seem like the closer we get to the answers, the more monkey wrenches get thrown in the way?  In today’s world, these include strange particles that don’t fit the model.  And dark matter.  And unusual gravitational aberrations in distant galaxies.

Perhaps we need a dose of humility.  Perhaps the universe, or multiverse, or whatever term is being used these days to denote “everything that is out there” is just too far beyond our intellectual capacity.  Before you call me out on this heretical thought, consider…

The UK’s Astronomer Royal Sir Martin Rees points out that “a chimpanzee can’t understand quantum mechanics.”  Richard Feynman famously claimed that nobody understands quantum mechanics, but, as Michael Brooks points out in his recent article “The limits of knowledge: Things we’ll never understand”, the point stands: no matter how hard it might try, the comprehension of something like Quantum Mechanics is simply beyond the capacity of certain species of animals.  Faced with this realization, and with anthropologists’ estimate that the most recent common ancestor of humans and chimps (aka the CHLCA) lived roughly 6 to 8 million years ago, we can draw a startling conclusion:

There are certainly things about our universe and reality that are completely beyond our ability to comprehend!

My reasoning is as follows.  Chimps are certainly at least as intelligent as the CHLCA; otherwise evolution would have been working in reverse.  As an upper bound on its intelligence, let’s say that the CHLCA and chimps are equivalent.  Then the CHLCA was certainly not able to comprehend QM (nor relativity, nor even Newtonian physics), but upon evolving into humans over roughly 8 million years, our new species was able to comprehend these things.  8 million years represents about 0.06% of the entire age of the universe (according to what we think we know).  That means that for 99.94% of the total time that the universe and life were evolving up to the current point in time, the most advanced creature on earth was incapable of understanding the most rudimentary concepts about the workings of reality and the universe.  And yet, are we to suppose that in the last 0.06% of that time, a species has evolved that can understand everything?  I’m sure you see how unlikely that is.
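The percentage in that argument is just this back-of-the-envelope division, using the figures quoted above:

```python
# the back-of-the-envelope figures from the paragraph above
age_of_universe_yr = 13.7e9   # commonly cited age of the universe, in years
chlca_to_human_yr = 8e6       # CHLCA to modern humans, in years

fraction = chlca_to_human_yr / age_of_universe_yr
print(f"{fraction:.2%}")      # prints 0.06%
```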

What if our universe was intelligently designed?  The same argument would probably hold.  For some entity to be capable of creating a universe that continues to baffle us no matter how much we think we understand, that entity must be far beyond our intelligence, and therefore has utilized, in the design, concepts that we can’t hope to understand.

Our only chance of being supremely capable of understanding our world would lie in the programmed reality model.  If the creator of our simulation were us, or even an entity a little more advanced than us, it could lead us along a path of exploration and knowledge discovery that always seems to be just slightly beyond our grasp.  Doesn’t that idea feel familiar?


Is LIDA, the Software Bot, Really Conscious?

Researchers from the Cognitive Computing Research Group (CCRG) at the University of Memphis are developing a software bot known as LIDA (Learning Intelligent Distribution Agent) with what they believe to be cognition or conscious processes.  That belief rests on the idea that LIDA is modeled on a software architecture that mirrors what some believe to be the process of consciousness, called GWT, or Global Workspace Theory.  For example, LIDA follows a repetitive looping process that consists of taking in sensory input, writing it to memory, kicking off a process that scans this data store for recognizable events or artifacts, and, if something is recognized, it is broadcast to the global workspace of the system in a similar manner to the GWT model.  Timings are even tuned to more or less match human reaction times and processing delays.
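That cycle can be caricatured in a few lines.  To be clear, this is a toy sketch of a GWT-style loop under my own naming, not the actual LIDA architecture or code:

```python
from collections import deque

class ToyGlobalWorkspace:
    """Toy sketch of a GWT-style cognitive cycle (NOT the real LIDA):
    sense -> store -> scan for recognizable patterns -> broadcast winners."""

    def __init__(self, known_patterns):
        self.known = set(known_patterns)   # things the agent can "recognize"
        self.memory = deque(maxlen=100)    # short-term sensory store
        self.broadcasts = []               # the "global workspace" history

    def cycle(self, percept):
        self.memory.append(percept)        # 1. write sensory input to memory
        recognized = [p for p in self.memory if p in self.known]  # 2. scan
        if recognized:
            self.broadcasts.append(recognized[-1])  # 3. broadcast to workspace
        return self.broadcasts

agent = ToyGlobalWorkspace(known_patterns={"face", "bell"})
agent.cycle("noise")            # nothing recognized, nothing broadcast
result = agent.cycle("bell")    # "bell" is recognized and broadcast
```

Which rather makes the point of the next paragraph: the loop is trivially easy to imitate.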

I’m sorry guys, but just because you have designed a system to model the latest theory of how sensory processing works in the brain does not automatically make it conscious.  I could write an Excel macro with forced delays and process flows that resemble GWT.  Would that make my spreadsheet conscious?  I don’t THINK so.  Years ago I wrote a trading program that utilized the brain model du jour, known as neural networks.  Too bad it didn’t learn how to trade successfully, or I would be golfing tomorrow instead of going to work.  The fact is, it was entirely deterministic, as is LIDA, and there is no more reason to suspect that it was conscious than an assembly line at an automobile factory.

Then again, the standard scientific view (at least that held by most neuroscientists and biologists) is that our brain processing is also deterministic, meaning that, given the exact set of circumstances two different times (same state of memories in the brain, same set of external stimuli), the resulting thought process would also be exactly the same.  As such, so they would say, consciousness is nothing more than an artifact of the complexity of our brain.  An artifact?  I’m an ARTIFACT?

Following this reasoning from a logical standpoint, one would have to conclude that every living thing, including bacteria, has consciousness. In that view of the world, it simply doesn’t make sense to assert that there might be some threshold of nervous system complexity, above which an entity is conscious and below which it is not.  It is just a matter of degree and you can only argue about aspects of consciousness in a purely probabilistic sense; e.g. “most cats probably do not ponder their own existence.”  Taking this thought process a step further, one has to conclude that if consciousness is simply a by-product of neural complexity, then a computer that is equivalent to our brains in complexity must also be conscious.  Indeed, this is the position of many technologists who ponder artificial intelligence, and futurists, such as Ray Kurzweil.  And if this is the case, by logical extension, the simplest of electronic circuits is also conscious, in proportion to the degree in which bacteria is conscious in relation to human consciousness.  So, even an electronic circuit known as a flip-flop (or bi-stable multivibrator), which consists of a few transistors and stores a single bit of information, is conscious.  I wonder what it feels like to be a flip-flop?

Evidence abounds that there is more to consciousness than a complex system.  For one particularly well-researched data point, check out Pim van Lommel’s book “Consciousness Beyond Life.”  Or my book “The Universe – Solved!”

My guess is that consciousness consists of the combination of a soul and a processing component, like a brain, that allows that soul to experience the world.  This view is very consistent with that of many philosophers, mystics, and shamans throughout history and throughout the world (which confluence of consistent yet independent thought is in itself very striking).  If true, a soul may someday make a decision to occupy a machine of sufficient complexity and design to experience what it is like to be the “soul in a machine”.  When that happens, we can truly say that the bot is conscious.  But it does not make sense to consider consciousness a purely deterministic emergent property.


Cold Fusion Heats Up

People generally associate the idea of cold fusion with electrochemists Stanley Pons and Martin Fleischmann.  However, experiments similar to the ones that led to their momentous announcement and equally momentous downfall were reported as far back as the 1920s.  Austrian scientists Friedrich Paneth and Kurt Peters reported the fusion of hydrogen into helium via a palladium mesh.  Around the same time, Swedish scientist J. Tandberg announced the same results from an electrolysis experiment using hydrogen and palladium.

Apparently, everyone forgot about those experiments when, in 1989, Stanley Pons and Martin Fleischmann from the University of Utah astonished the world with their announcement of a cold fusion experimental result.  Prior to this, it was considered impossible to generate a nuclear fusion reaction at anything less than the temperatures found at the core of the sun.  Standard nuclear reaction equations required temperatures in the millions of degrees to generate the energy needed to fuse light atomic nuclei into heavier elements, in the process releasing more energy than went into the reaction.  Pons and Fleischmann, however, claimed to generate nuclear reactions at room temperature, producing excess energy from an electrolysis reaction with heavy water (deuterium oxide) and palladium, similar to the experiments of the 1920s.

When subsequent experiments initially failed to reproduce their results, they were ridiculed by the scientific community, even to the point of being driven to leave their jobs and their country and continue their research in France.  But, since then, despite the fact that the cultish skeptic community declared that no one was able to repeat their experiment, nearly 15,000 similar experiments have been conducted, most of which have replicated cold fusion, including those done by scientists from Oak Ridge National Laboratory and the Russian Academy of Science.

According to a 50-page report on the recent state of cold fusion by Steven Krivit and Nadine Winocur, the effect has been reproduced at a rate of 83%.  “Experimenters in Japan, Romania, the United States, and Russia have reported a reproducibility rate of 100 percent.” (Plotkin, Marc J. “Cold Fusion Heating Up — Pending Review by U.S. Department of Energy.” Pure Energy Systems News Service, 27 March, 2004.)  In 2005, tabletop cold fusion was reported at UCLA utilizing crystals and deuterium, and it was confirmed by Rensselaer Polytechnic Institute in 2006.  In 2007, a conference at MIT concluded that with 3,000+ published studies from around the world, “the question of whether Cold Fusion is real is not the issue.  Now the question is whether or not it can be made commercially viable, and for that, some serious funding is needed.” (Wired; Aug. 22, 2007)  Still, the mainstream scientific community covers its ears, shuts its eyes, and shakes its head.

So now we have the latest demonstration of cold fusion, courtesy of Italian scientists Andrea Rossi and Sergio Focardi from the University of Bologna, who announced last month that they developed a cold fusion device capable of producing 12,400 W of heat power with an input of just 400 W.
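
As a quick back-of-the-envelope check on those claimed figures (using only the 12,400 W and 400 W numbers quoted above), here is the ratio of output to input that makes the announcement so extraordinary:

```python
# Ratio of claimed heat output to electrical input for the Rossi/Focardi
# device.  A ratio far above 1 is the whole point of the claim: ordinary
# resistive heating can never return more energy than was put in.
output_w = 12_400   # claimed heat output, watts
input_w = 400       # claimed electrical input, watts

gain = output_w / input_w
print(gain)  # 31.0
```

In other words, the device would be returning 31 times the energy fed into it, which is why the result, if verified, would be so momentous.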

The scientific basis for a cold fusion reaction will be discovered.  The only question is when.


Explaining Daryl Bem’s Precognition

Dr. Daryl Bem, Professor Emeritus of Psychology at Cornell University, recently published an astounding paper in the Journal of Personality and Social Psychology called “Feeling the Future: Experimental Evidence for Anomalous Retroactive Influences on Cognition and Affect.”  In plain English, he draws on the results of eight years of scientific research to prove that precognition exists.  His research techniques utilized proven scientific methods, such as double-blind studies.  According to New Scientist magazine, in each case he reversed the sequence of well-studied psychological phenomena, so that “the event generally interpreted as the cause happened after the tested behaviour rather than before it.”  Across all of the studies, the probability of these results occurring by chance and not due to a real precognitive effect was calculated to be about 1 in 100 billion.

This little scientific tidbit went viral quickly with the Twitterverse and Reddit communities posting and blogging prolifically about it.  We have to commend the courage that Dr. Bem had in submitting such an article and that the APA (American Psychological Association) had in accepting it for publication.  Tenures, grants, and jobs have been lost for far less of an offense to the often closed-minded scientific/academic community.  Hopefully, this will open doors to a greater acceptance of Dean Radin’s work on other so-called “paranormal” effects as well as Pim van Lommel’s research on Near Death Experiences.

More to the point, though, this has many scientists scratching their heads.  What could it mean about our reality?  Quantum physicists say that reality doesn’t really exist anyway, but most scientists from other fields have compartmentalized such ideas to a tiny corner of their awareness labelled “quantum effects that do not apply to the macroscopic world.”  Guess what?  There isn’t a line demarcating quantum and macroscopic, so we need to face the facts.  The world isn’t as it seems, and Daryl Bem’s research is probably just the tip of the iceberg.

OK, what could explain this?

Conventional wisdom would have to conclude that we do not have free will.  Let’s take a particular experiment to see why:

“In one experiment, students were shown a list of words and then asked to recall words from it, after which they were told to type words that were randomly selected from the same list. Spookily, the students were better at recalling words that they would later type.”

Therefore, if students could recall words better before the causative event even happened, then that seems to imply that they are not really in control of their choices, and hence have no free will.

However, our old friend Programmed Reality again comes to the rescue and offers not one, not two, but three different explanations for these results.  Imagine that our reality is generated by a computational mechanism, as shown in the figure below.

[Figure: reality generated by a computational mechanism (“Programmed Reality”)]

Part of what constitutes our reality would also be our bodies and our brain stuff – neurons, etc.  In addition, assume that the “Computer” reads our consciousness as its input and makes decisions based both on the current state of reality and on the state of our consciousnesses.  In that case, consider these three possible explanations:

1. Evidence is rewritten after the fact.  In other words, after the students are told the words to type, the Program goes back and rewrites all records of the students’ guesses, so as to create the precognitive anomaly.  Those records consist of the students’ and the experimenters’ memories, as well as any written or recorded artifacts.  Since the Program is in control of all of these items, the complete record of the past can be changed, and no one would ever know.

2. The Program selects the randomly typed words to match the results, so as to generate the precognitive anomaly.

3. We live in an Observer-created reality and the entire sequence of events is either planned out or influenced by intent, and then just played out by the experimenter and students.

Mystery solved, Programmed Reality style.

 


There is no “Now.” But there will be.

One of our longtime Forum members posted an excellent question: “Is there really a ‘now’?”  The mystics tell us that there is only NOW.  But I suspect they are referring to a state of reality or a state of consciousness that one only reaches when one dies, or after sitting on top of a mountain contemplating one’s navel for a dozen or so years and getting really lucky.

Back in the reality that we all know and love, I got to thinking about the reality that we all know and love.  And came to the conclusion that there is no NOW.  Here’s why:

Our interpretation of the present is really based on our short term memory, which lasts some 30 seconds or so. If we had no short term memory, we would not be able to think, plan, procreate, remember to eat, etc. In short, we would perish.

However, what is in short term memory is not NOW; it is the past. Now can only be defined as an instant. Or, in mathematical terms, it is t=0, or the limit as “delta t” approaches zero at t=0. As an absolute, or an infinite concept, it could only exist in an infinite universe, which also must be continuous. As I “tend” to believe that our universe is not infinite and is bound by the attributes of the Program (see “The Universe – Solved!”), the smallest unit of time around the concept of NOW would be a clock cycle of the Program. If it is the Planck time, then it is about 10^-43 seconds (although it could be other resolutions). In any case, it has a duration, so it can’t be instantaneous or absolute. Therefore, there is no NOW, only our PERCEPTION of now, which is our very short term memory.
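
The distinction above can be put in symbols: the idealized “now” is a zero-duration limit, while a Program ticking at the Planck time has a smallest “now” of finite duration.

```latex
\underbrace{\lim_{\Delta t \to 0}\,[\,t,\ t+\Delta t\,] = \{t\}}_{\text{idealized ``now'': zero duration}}
\qquad\text{vs.}\qquad
\underbrace{\Delta t_{\min} = t_P = \sqrt{\hbar G / c^5} \approx 5.4 \times 10^{-44}\ \text{s} \sim 10^{-43}\ \text{s}}_{\text{smallest clock cycle of a discrete Program}}
```

Any finite Δt, however small, is an interval rather than an instant, which is the whole argument in one line.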

That said, in the other realm, where consciousness “probably” goes after death, everything is NOW, as the mystics say. That is because there is no physical stuff, no brain, no short term memory, and therefore no need for time as a dimension. Hence, everything could only be NOW.

If so, no need to even fear the “five-point-palm-exploding-heart technique.”


WikiLeaks, Denial of Service Attacks, and Nanobot Clouds

The recent firestorm surrounding WikiLeaks reminds me of one of Neal Stephenson’s visions of the future, “Diamond Age,” written back in 1995.  The web was only in its infancy, but Stephenson had already envisioned massive clouds of networked nanobots, some under control of the government, some under control of other entities.  Such nanobot swarms, also known as Utility Fogs, could be made to do pretty much anything: form a sphere of protection, gather information, inspect people and report back to a central server, or be commanded to attack each other.  One swarm under control of one organization may be at war with another swarm under the control of another organization.  That is our future.  Nanoterrorism.

A distributed denial of service attack (DDoS) is a network attack on a particular server or internet node.  It is often carried out by having thousands of computers saturate the target machine with packet requests, making it impossible for the machine to respond to normal HTTP requests, effectively bringing it to its knees, inaccessible on the internet.  The attacks are often coordinated by a central source that takes advantage of networks of already compromised computers (aka zombie computers, usually unknown to their owners) via malware infections.  On command, these botnets initiate their attack with clever techniques called Smurf attacks, Ping floods, SYN floods, and other scary sounding events.  An entire underground industry has built up around botnets, some of which can number in the millions.  Botnets can be leased by anyone who knows how to access them and has a few hundred dollars.  As a result, an indignant group can launch an attack on, say, the WikiLeaks site.  And, in response, a WikiLeaks support group can launch a counterattack on its enemies, like MasterCard, Visa, and PayPal, for their plans to terminate service for WikiLeaks.  That is our present.  Cyberterrorism.
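
The saturation mechanism described above can be shown with a toy model (purely illustrative numbers, not a description of any real attack tool): a server that can process a fixed number of requests per second serves a rapidly shrinking fraction of legitimate traffic as the flood grows.

```python
import random

def served_fraction(capacity, legit_rate, flood_rate, seed=0):
    """Toy DDoS model: in one second, `legit_rate` legitimate and
    `flood_rate` junk requests arrive in random order, and the server
    can only process the first `capacity` of them.  Returns the
    fraction of legitimate requests that get through."""
    rng = random.Random(seed)
    arrivals = ["legit"] * legit_rate + ["flood"] * flood_rate
    rng.shuffle(arrivals)
    processed = arrivals[:capacity]
    return processed.count("legit") / legit_rate

# A server sized for 1,000 requests/s handles 100 legitimate users easily...
print(served_fraction(1000, 100, 0))        # 1.0
# ...but a botnet firing 100,000 junk requests/s crowds almost all of them out.
print(served_fraction(1000, 100, 100_000))
```

The legitimate users are not blocked by anything clever; they are simply outnumbered in the queue, which is why raw botnet size matters so much.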

Doesn’t it sound a lot like the nanoterrorism envisioned by Stephenson?  Except it is still grounded in the hardware.  As I see it, the equation of the future is:

Nanoterrorism = Cyberterrorism + Microrobotics + Moore’s Law + 20 years.

Can’t wait!
