Materialism BS


I have never before used my blog to rant about someone else’s writing.  But I came across a rather humorous attempt at scientific reporting that is unfortunately all too common in its tone, inaccuracies, and presumptive style, and I just can’t resist.

The article appeared in Gizmodo’s supposedly edgy spinoff blog SPLOID and purports to reveal an amazing new discovery that for the first time explains scientifically how out of body experiences (OBEs) are produced by the brain.

Here is a partial list of logical flaws in this report:

1. “This is the very first time that this type of experience has been analyzed and documented scientifically” – Researcher Celia Green must be having a good chuckle at this considering that she analyzed and documented hundreds of OBE accounts over 45 years ago.

2. “this may be the first documented case of someone who can get into this state at will” – Robert Monroe must be guffawing from one of the remote rings, given that he and William Buhlman each had hundreds of experiences and were able to predictably initiate OBEs decades ago.

3. “This is not an astral trip, like those described by mystics. There’s no paranormal activity of any kind.” – This is where the article really crosses over into fiction.  Really?  No paranormal activity of any kind?  You’re sure about that?  Let’s consider an analogy.  The argument that the author gives for this claim is that since the fMRI (functional Magnetic Resonance Imaging) showed brain activity in regions “associated with kinesthetic imagery,” the experience must come from the brain.  First of all, “associated with” is hardly the kind of phrase that would warrant a definitive conclusion.  Second, science is not about definitive conclusions.  Science is about evidence and theories, not conclusions, facts, or proofs.  The most definitive thing that science can provide is falsifiability, when an observation negates a particular hypothesis.  In this case, however, it is the opposite – the University of Ottawa study is simply generating evidence that one person’s OBE correlates with some activity in a particular region of the brain – certainly not the stuff of facts, proofs, or even much of a theory.  The referenced paper is appropriately restrained in its conclusions, unlike the Gizmodo article, which takes silly leaps of logic.

So anyway, back to that analogy.  Let’s say that we break open my cell phone and attach some test equipment – an oscilloscope or logic analyzer – to some contact point in the circuitry.  My friend sends me a text message and, lo and behold, the test equipment activates.  Oooh, that must mean that the text was initiated from that part of the cell phone circuitry, rather than from the mind of my BFF.  NOT!

4. “The fact is…scientists believe that these out-of-body experiences are a type of hallucination triggered by some neurological mechanism.”  Sorry, Jordan, not clear where you get this “fact.”  You have made a sweeping generalization of the beliefs of all scientists.  Have you checked with all of the scientists?  Or did you mean to say “some scientists?”  Because most scientists with open minds would argue to the contrary.



Nature, Nurture, Neither?

Most of us are aware of (or may be part of) a family where siblings are radically different from each other, their personalities, interests, and value systems sometimes seeming to be completely opposite.  It is difficult to chalk this up to either nature or nurture, because the siblings could hardly have more in common on either front.  Having been raised in the same household for their entire lives, and being biologically from the same sets of DNA, what could possibly cause such stark differences?

Psychologists and biologists have attempted to tease out the influential factors by studying criminal records, IQ, personality traits, and sexual preferences of identical twins raised together, identical twins raised apart, adoptive siblings raised together, fraternal twins, siblings, and random pairs of strangers.  Criminality appears to have influences from both nature and nurture, while IQ seems largely hereditary.  Some studies support the conclusion that personality traits are mostly hereditary, while others lean toward environment.  Sexual preference appears to be unrelated to DNA, yet is also hard to explain by environment alone, given the results of identical twin studies.  However, even in studies of these traits, where correlations are observed, the correlations tend to be small, leaving a large portion of the reason for such traits up in the air.

Mathematically, nature plus nurture doesn’t appear to explain why we are the way we are.  However, if instead we adopt the well-supported and researched view that we are not our bodies, then our consciousness exists independent of our bodies.  As such, it is reasonable to expect that this consciousness learns, adapts, and evolves across multiple lifetimes, and perhaps non-physical experiences.  And this would certainly provide an excellent explanation for the anomalies listed above.  It would make sense, for example, that IQ, perhaps being related to the function of the brain, would be largely influenced by genetics.  However, it would not make sense for value systems to be genetic, and the influence from family environment would only extend back to childhood; hence personality traits should show some nurture correlation from “this life”, with the majority of the influence coming from past lives (and therefore remaining a mystery to those who don’t understand or accept this paradigm).

Research supports this view and notes that ‘many aspects of the child’s present personality have carried forward intact from the past life: behavior, emotions, phobias, talents, knowledge, the quality of relationships, and even physical symptoms.’  Sadly, such research is as heretical to scientific orthodoxy as heliocentrism was 500 years ago, although referring to it as epigenetics may be a safe way for scientists to dip their toes into the water without getting scalded.

So, the next time someone gets on your case for not living up to the family ideal, smile knowingly, and be proud of your karmic heritage.


Quantum Zeno Effect Solved

Lurking amidst the mass chaos of information that exists in our reality is a little gem of a concept called the Quantum Zeno Effect.  It is partially named after ancient Greek philosopher Zeno of Elea, who dreamed up a number of paradoxes about the fluidity of motion and change.  For example, the “Arrow Paradox” explores the idea that if you break down time into “instants” of zero duration, motion cannot be observed.  Thus, since time is composed of a set of instants, motion doesn’t truly exist.  We might consider Zeno to have been far ahead of his time as he appeared to be thinking about discrete systems and challenging the continuity of space and time a couple thousand years before Alan Turing resurrected the idea in relation to quantum mechanics: “It is easy to show using standard theory that if a system starts in an eigenstate of some observable, and measurements are made of that observable N times a second, then, even if the state is not a stationary one, the probability that the system will be in the same state after, say, one second, tends to one as N tends to infinity; that is, that continual observations will prevent motion …”.  The term “Quantum Zeno Effect” was first used by physicists George Sudarshan and Baidyanath Misra in 1977 to describe just such a system – one that does not change state because it is continuously observed.
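Turing’s limit can be checked numerically.  The sketch below is a minimal model of my own devising (a two-level system rotating at an arbitrary Rabi frequency – not anything from the referenced papers): it computes the probability of still finding the system in its initial state after N equally spaced projective measurements.

```python
import math

def survival_probability(n_measurements, total_time=1.0, rabi_freq=math.pi):
    """Probability a two-level system is still found in its initial state
    after n equally spaced projective measurements over total_time.
    Between measurements the state rotates freely; each measurement
    projects it back onto the initial state with probability
    cos^2(omega * dt / 2), and the outcomes are independent."""
    dt = total_time / n_measurements
    p_single = math.cos(rabi_freq * dt / 2) ** 2
    return p_single ** n_measurements

for n in (1, 10, 100, 1000):
    print(n, survival_probability(n))
```

With a single measurement after one second the system has fully evolved away; with a thousand measurements the survival probability exceeds 0.99 and tends to one as N grows – exactly Turing’s “continual observations will prevent motion.”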

The challenge with this theory has been in devising experiments that can verify or falsify it.  However, technology has caught up to philosophy and, over the last 25 years, a number of experiments have been performed which seem to validate the effect.  In 2001, for example, physicist Mark Raizen and a team at the University of Texas showed that the effect is indeed real and the transition of states in a system can be either slowed down or sped up simply by taking measurements of the system.

I have enjoyed making a hobby of fully explaining quantum mechanics anomalies with the programmed reality theory.   Admittedly, I don’t always fully grasp some of the deep complexities and nuances of the issues that I am tackling, due partly to the fact that I have a full-time job that has naught to do with this stuff, and partly to the fact that my math skills are a bit rusty, but thus far, it doesn’t seem to make a difference.  The more I dig into each issue, the more I find things that simply support the idea that we live in a digital (and programmed) reality.

The quantum Zeno effect might not be observed in every case.  It only works for non-memoryless processes.  Exponential decay, for instance, is an example of a memoryless system.  Frequent observation of a particle undergoing radioactive decay would not affect the result.  [As an aside, I find it very interesting that a “memoryless system” invokes the idea of a programmatic construct.  Perhaps with good reason…]
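The memorylessness claim is easy to verify: for exponential decay, the survival law satisfies P(T > s+t | T > s) = P(T > t), so an observation at time s tells you nothing that changes the statistics going forward.  A quick check (the half-life value is arbitrary):

```python
import math

def survival(t, half_life=1.0):
    """Fraction of undecayed nuclei remaining at time t for exponential decay."""
    lam = math.log(2) / half_life
    return math.exp(-lam * t)

# Memoryless property: having survived to time s, the probability of
# surviving a further t is identical to surviving t from scratch.
s, t = 0.7, 1.3
conditional = survival(s + t) / survival(s)
print(conditional, survival(t))  # the two values are identical
```

Because the conditional survival law is the same as the unconditioned one, "resetting the clock" at every observation is unobservable for such a process.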

A system with memory, or “state”, however, is, in theory, subject to the quantum Zeno effect.  It will manifest itself by appearing to reset the experiment clock every time an observation is made of the state of the system.  The system under test will have a characteristic set of changes that vary over time.  In the case of the University of Texas experiment, trapped ions tended to remain in their initial state for a brief interval before beginning to change state via quantum tunneling, according to some probability function.  For the sake of developing a clear illustration, let’s imagine a process whereby a particle remains in its initial quantum state (let’s call it State A) for 2 seconds before probabilistically decaying to its final state (B) according to a linear function over the next second.  Figure A shows the probability of finding the particle in State A as a function of time.  For the first 2 seconds, of course, it has a 0% probability of changing state, and between 2 and 3 seconds it has an equal probability of moving to State B at any point in time.  A system with this behavior, left on its own and measured at any point after 3 seconds, will be in State B.


What happens, however, when you make a measurement of that system, to check and see if it changed state, at t=1 second?  Per the quantum Zeno effect, the experiment clock will effectively be reset and now the system will stay in State A from t=1 to t=3 and then move to state B at some point between t=3 and t=4.  If you make another measurement of the system at t=1, the clock will again reset, delaying the behavior by another second.  In fact, if you continue to measure the state of the system every second, it will never change state.  Note that this has absolutely nothing to do with the physical impact of the measurement itself; a 100% non-intrusive observation will have exactly the same result.
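The toy model above can be simulated directly.  This is a sketch of the described reset rule only, not of any real quantum dynamics: the transition time is redrawn whenever a measurement finds the system still in State A.

```python
import random

def decay_time():
    """Draw the (unobserved) transition time for the toy process:
    guaranteed to stay in State A for 2 s, then decays to B
    uniformly over the following second."""
    return 2.0 + random.random()

def state_with_measurements(measure_interval, horizon=10.0):
    """Per the quantum Zeno rule described above, each measurement that
    finds the system in State A resets the experiment clock."""
    clock = 0.0                 # time since last reset
    t = 0.0                     # wall-clock time
    pending = decay_time()
    while t < horizon:
        t += measure_interval
        clock += measure_interval
        if clock >= pending:
            return "B"          # measurement finds the decayed state
        clock = 0.0             # found in A: clock resets...
        pending = decay_time()  # ...and the decay is redrawn
    return "A"

random.seed(0)
print(state_with_measurements(3.5))  # measured once after 3.5 s: always "B"
print(state_with_measurements(1.0))  # measured every second: stays "A"
```

Measured once after more than 3 seconds, the system is always found in B; measured every second, the clock never reaches the 2-second threshold and the system never changes state, no matter how long the experiment runs.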

Also note that it isn’t that the clock doesn’t reset for a memoryless system, but rather that it doesn’t matter, because you cannot observe any difference.  One may argue that if you make observations at the Planck frequency (one per jiffy), even a memoryless system might never change state.  This actually approaches the true nature of Zeno’s arguments, but that is a topic for another essay, one that is much more philosophical than falsifiable.  In fact, “Quantum Zeno Effect” is a misnomer.  The non-memoryless system described above really has little to do with the ad infinitum inspection of Zeno’s paradoxes, but we are stuck with the name.  And I digress.

So why would this happen?

It appears to be related in some way to the observer effect and to entanglement:

  • Observer Effect – Once observed, the state of a system changes.
  • Entanglement – Once observed, the states of multiple particles (or, rather, the state of a system of multiple particles) are forever connected.
  • Quantum Zeno – Once observed, the state of a system is reset.

What is common to all three of these apparent quantum anomalies is the coupling of the act of observation with the concept of a state.  For the purposes of this discussion, it will be useful to invoke the computational concept of a finite state machine, which is a system that changes state according to a set of logic rules and some input criteria.
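As a concrete reference point, here is a minimal finite state machine of that kind.  The states, events, and the reset-on-observation rule are purely illustrative choices of mine, not anything from the physics literature:

```python
class StateMachine:
    """Minimal finite state machine: a transition table keyed on
    (current_state, event); unknown events leave the state unchanged."""
    def __init__(self, initial, transitions):
        self.state = initial
        self.transitions = transitions

    def feed(self, event):
        self.state = self.transitions.get((self.state, event), self.state)
        return self.state

# Illustrative two-state system: a "tick" input lets it evolve from A
# to B, while an "observe" input resets it to A (the Zeno-style rule).
fsm = StateMachine("A", {
    ("A", "tick"): "B",
    ("A", "observe"): "A",
    ("B", "observe"): "A",
})
fsm.feed("tick")      # evolves to "B"
fsm.feed("observe")   # reset back to "A"
print(fsm.state)
```

The point of the formalism is just that "state plus rules plus inputs" is all you need; observation, in this framing, is simply another input to the machine.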

I have explained the Observer effect and Entanglement as logical necessities of an efficient programmed reality system.  What about Quantum Zeno?  Why would it not be just as efficient to start the clock on a process and let it run, independent of observation?

A clue to the answer is that the act of observation appears to create something.

In the Observer effect, it creates the collapse of the probability wave functions and the establishment of definitive properties of certain aspects of the system under observation (e.g. position).  This is not so much a matter of efficiency as it is of necessity, because without probability, free will doesn’t exist and without free will, we can’t learn, and if the purpose of our system is to grow and evolve, then by necessity, observation must collapse probability.

In Entanglement, the act of observation may create the initiation of a state machine, which subsequently determines the behavior of the particles under test.  Those particles are just data, as I have shown, and the data elements are part of the same variable space of the state machine.  They both get updated simultaneously, regardless of the “virtual” distance between them.

So, in Quantum Zeno, the system under test is in probability space.  The act of observation “collapses” this initial probability function and kicks off the mathematical process by which future states are determined based on the programmed probability function.  But that is now a second level of probability function; call it probability function 2.  Observing this system a second time now must collapse the probability wave function 2.  But to do so means that the system would now have to calculate a modified probability function 3 going forward – one that takes into account the fact that some aspect of the state machine has already been determined (e.g. the system has or hasn’t started its decay).  For non-memoryless systems, this could be an arbitrarily complex function (3) since it may take a different shape for every time at which the observation occurs.  A third measurement complicates the function even further because even more states are ruled out.

On the other hand, simply resetting the probability function each time an observation is made avoids all of that recalculation – exactly what an efficient reality system would do.

The only drawback to this algorithm is the fact that smart scientists are starting to notice these little anomalies, although the assumption here is that the reality system “cares.”  It may not.  Or perhaps that is why most natural processes are exponential, or memoryless – it is a further efficiency of the system.  Man-made experiments, however, don’t follow the natural process and may be designed to be arbitrarily complex, which ironically serves to give us this tiny little glimpse into the true nature of reality.

What we are doing here is inferring deep truths about our reality that are in fundamental conflict with the standard materialist view.  This will be happening more and more as time goes forward and physicists and philosophers will soon have no choice but to consider programmed reality as their ToE.


RIP Kardashev Civilization Scale

In 1964, Soviet astronomer Nikolai Kardashev proposed a model for categorizing technological civilizations.  He identified three levels or “Types” (the “Type 0” below the scale was added later by others), simplified as follows:

Type 0 – Civilization that has not yet learned to utilize the full set of resources available to them on their home planet (e.g. oceans, tidal forces, geothermal forces, solar energy impinging upon the planet, etc.)

Type 1 – Civilization that fully harnesses, controls, and utilizes the resources of their planet.

Type 2 – Civilization that fully harnesses, controls, and utilizes the resources of their star system.

Type 3 – Civilization that fully harnesses, controls, and utilizes the resources of their galaxy.
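The scale as listed is coarse.  Carl Sagan later suggested a continuous interpolation, K = (log10 P − 6) / 10, with P the civilization’s power consumption in watts, which puts present-day humanity somewhere around Type 0.7:

```python
import math

def kardashev_level(power_watts):
    """Carl Sagan's continuous interpolation of the Kardashev scale:
    K = (log10(P) - 6) / 10, with P in watts.
    Type I ~ 1e16 W, Type II ~ 1e26 W, Type III ~ 1e36 W."""
    return (math.log10(power_watts) - 6) / 10

# Humanity's total power consumption is very roughly 2e13 W:
print(round(kardashev_level(2e13), 2))  # about 0.73
```

The 2e13 W figure is a rough order-of-magnitude estimate, not something from Kardashev’s paper.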


As with philosophical thought, literature, art, music, and other concepts and artifacts generated by humanity, technological and scientific pursuits reflect the culture of the time.  In 1964, we were on the brink of nuclear war.  The space race was in full swing and the TV show “Star Trek” was triggering the imagination of laymen and scientists alike.  We thought in terms of conquering people and ideas, and in terms of controlling resources.  What countries are in the Soviet bloc?  What countries are under US influence?  Who has access to most of the oil?  Who has the most gold, the most uranium?

The idea of dominating the world was evident in our news and our entertainment.  Games like Risk and Monopoly were unapologetically imperialistic.  Every Bond plot was about world domination.

Today, many of us find these ideas offensive.  To start with, imperialism is an outdated concept founded on the assumption of superiority of some cultures over others.  The idea of harnessing all planetary resources is an extension of imperialistic mentality, one that adds all other life forms to the entities that we need to dominate.  Controlling planetary resources for the sake of humanity is tantamount to stealing those same resources from other species that may need them.  Further, our attempt to control resources and technology can lead to some catastrophic outcomes.  Nuclear Armageddon, grey goo, overpopulation, global warming, planetary pollution, and (human-caused) mass extinctions are all examples of potentially disastrous consequences of attempts to dominate nature or technology without fully understanding what we are doing.

I argue in “Alien Hunters Still Thinking Inside The Box (or Dyson Sphere)” that attempting to fully harness all of the energy from the sun is increasingly unnecessary, and unlikely to be part of our evolution as a species.  Necessary energy consumption per capita is flattening for developing cultures and declining for mature ones.  Technological advances allow us to get much more useful output from our devices as time goes forward.  And humanity is beginning to de-emphasize raw size and power as desirable attributes (for example, see right-sizing economic initiatives) and instead focus on the value of consciousness.

Certainly, then, the hallmarks of advanced civilizations are not going to be anachronistic metrics of how much energy they can harness.  So what metrics might be useful?

How about:  Have they gotten off their planet?  Have they gotten out of their solar system?  Have they gotten out of their galaxy?

Somehow, I feel that even this is misleading.  Entanglement shows that everything is interconnected.  The observer effect demonstrates that consciousness transcends matter.  So perhaps the truly advanced civilizations have learned that they do not need to physically travel, but rather mentally travel.

How about: How little of an impact footprint do they leave on their planet?

The assumption here is that advanced civilizations follow a curve like the one below, whereby early in their journey they have a tendency to want to consume resources, but eventually evolve to have less and less of a need to consume or use energy.


How about: What percentage of their effort is expended upon advancing the individual versus the society, the planetary system, or the galactic system?


How about: Who cares?  Why do we need to assign a level to a civilization anyway?  Is there some value to having a master list of evolutionary stage of advanced life forms?  So that we know who to keep an eye on?  That sounds very imperialistic to me.

Of course, I am as guilty of musing about the idea of measuring the level of evolution of a species through a 2013 cultural lens as Kardashev was doing so through a 1964 cultural lens.  But still, it is 50 years hence and time to either revise or retire an old idea.

Flexi Matter

Earlier this year, a team of scientists at the Max Planck Institute of Quantum Optics, led by Randolf Pohl, made a highly accurate measurement of the charge radius of the proton and, at 0.841 fm, it turned out to be 4% less than previously determined (0.877 fm).  Trouble is, the previous measurements were also highly accurate.  The significant difference between the two types of measurement was the choice of interaction particle: in the traditional case, electrons; in Pohl’s case, muons.
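The 4% figure is just the relative gap between the two radius values:

```python
electron_based = 0.877  # fm, from the electron-based measurements
muon_based = 0.841      # fm, from Pohl's muonic hydrogen measurements
discrepancy = (electron_based - muon_based) / electron_based
print(f"{discrepancy:.1%}")  # about 4.1%
```

Small as that sounds, both measurements carry uncertainties far tinier than the gap, which is why the discrepancy is such a problem.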

Figures have been checked and rechecked and both types of measurements are solid.  All sorts of crazy explanations have been offered up for the discrepancy, but one thing seems certain: we don’t really understand matter.

Ancient Greeks thought that atoms were indivisible (hence, the name), at least until Rutherford showed otherwise in the early 1900s.  Ancient 20th-century scientists thought that protons were indivisible, at least until Gell-Mann showed otherwise in the 1960s.

So why would it be such a surprise that the size of a proton varies with the type of lepton cloud that surrounds and passes through it?  Maybe the proton is flexible, like a sponge, and a muon, at roughly 200 times the mass of an electron, exerts a much higher contractive force on it – gravity, strong nuclear, Jedi, or what have you.  Just make the measurements and modify your theory, guys.  You’ll be .000001% closer to the truth, enough to warrant an even bigger publicly funded particle accelerator.

If particle sizes and masses aren’t invariant, who is to say that they don’t change over time?  Cosmologist Christof Wetterich of the University of Heidelberg thinks this might be possible.  In fact, says Wetterich, if particles are slowly increasing in size, the universe may not be expanding after all.  His recent paper suggests that spectral redshift – Hubble’s famous discovery at Mount Wilson that led to the most widely accepted theory of the universe, the big bang – may actually be due to particle sizes changing over time.  So far, no one has been able to shoot a hole in his theory.

Oops.  “Remember what we said about the big bang being a FACT?  Never mind.”

Flexi-particles.  Now there is both evidence and major philosophical repercussions.

And still, The Universe – Solved! predicts there is no stuff.

The ultimate in flexibility is pure data.


Ever Expanding Horizons

Tribal Era

Imagine the human world tens of thousands of years ago.  A tribal community lived together, farming, hunting, trading, and taking care of each other.  There was plenty of land to support the community and as long as there were no strong forces driving them to move, they stayed where they were, content.  As far as they knew, “all that there is” was just that community and the land that was required to sustain it.  We might call this the Tribal Era.

Continental Era

But, at some point, for whatever reason – drought, restlessness, desire for a change of scenery – another tribe moved into the first tribe’s territory.  For the first time, that tribe realized that the world was bigger than their little community.  In fact, upon a little further exploration, they realized that the boundaries of “all that there is” just expanded to the continent on which they lived, and there was a plethora of tribes in this new greater community.  The horizon of their reality just reached a new boundary and their community was now a thousand fold larger than before.

Planetary Era

According to researchers, the first evidence of cross-oceanic exploration was about 9000 years ago.  Now, suddenly, this human community may have been subject to an invasion of an entirely different race of people with different languages coming from a place that was previously thought to not exist.  Again, the horizon expands and “all that there is” reaches a new level, one that consists of the entire planet.

Solar Era

The Ancient Greek philosophers and astronomers recognized the existence of other planets.  Gods were thought to have come from the sun or elsewhere in the heavens, which consisted of a celestial sphere that wasn’t far above the surface of our planet.

Imaginations ran wild as horizons expanded once again.

Galactic Era

In 1610, Galileo looked through his telescope and suddenly humanity’s horizon expanded by another level.  Not only did the other planets resemble ours, but it was clear that the sun was the center of the known universe, stars were extremely far away, there were strange distant nebulae that were more than nearby clouds of debris, and the Milky Way consisted of distant stars.  In other words, “all that there is” became our galaxy.

Universal Era

A few centuries later, in 1923, it was time to expand our reality horizon once again, as the 100-inch telescope at Mount Wilson revealed that some of those fuzzy nebulae were actually other galaxies.  The concept of deep space and “Universe” was born and new measurement techniques courtesy of Edwin Hubble showed that “all that there is” was actually billions of times more than previously thought.

Multiversal Era

These expansions of “all that there is” are happening so rapidly now that we are still debating the details about one worldview, while exploring the next, and being introduced to yet another.  Throughout the latter half of the 20th century, a variety of ideas were put forth that expanded our reality horizon to the concept of many (some said infinite) parallel universes.  The standard inflationary big bang theory allowed for multiple Hubble volumes of universes that are theoretically within our same physical space, but unobservable due to the limitations of the speed of light.  Bubble universes, MWI, and many other theories exist but lack any evidence.  In 2003, Max Tegmark framed all of these nicely in his concept of 4 levels of Multiverse.

I sense one of those feelings of acceleration with respect to the entire concept of expanding horizons, as if our understanding of “all that there is” is growing exponentially.  I was curious to see how exponential it actually was, so I took the liberty of plotting each discrete step in our evolution of awareness of “all that there is” on a logarithmic plot and guess what?

Almost perfectly exponential! (see below)
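The original plot is not reproduced here, but the trend can be rough-checked with approximate era onsets taken from the essay (the specific year values are my own rough assumptions, not data from the plot):

```python
import math

# Very rough onset of each era, in years before present.
eras = [
    ("Tribal", 50000), ("Planetary", 9000), ("Solar", 2500),
    ("Galactic", 400), ("Universal", 90), ("Multiversal", 10),
]
for name, years_ago in eras:
    print(f"{name:12s} log10(years ago) = {math.log10(years_ago):.1f}")
```

If each era’s log10(years ago) falls roughly linearly with its position in the sequence, as it does here, then the spacing between horizon expansions is shrinking exponentially.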


Dramatically, the trend points to a new expansion of our horizons in the past 10 years or so.  Could there really be something beyond a multiverse of infinitely parallel universes?  And has such a concept recently been put forth?

Indeed there is and it has.  And, strangely, it isn’t even something new.  For millennia, the spiritual side of humanity has explored non-physical realities; Shamanism, Heaven, Nirvana, Mystical Experiences, Astral Travel.  Our Western scientific mentality that “nothing can exist that cannot be consistently and reliably reproduced in a lab” has prevented many of us from accepting these notions.  However, there is a new school of thought that is based on logic, scientific studies, and real data (if your mind is open), as well as personal knowledge and experience.  Call it digital physics (Fredkin), digital philosophy, simulation theory (Bostrom), programmed reality (yours truly), or My Big TOE (Campbell).  Tom Campbell and others have taken the step of incorporating into this philosophy the idea of non-material realms.  Which is, in fact, a new expansion of “all that there is.”  While I don’t particularly like the term “dimensional”, I’m not sure that we have a better descriptor.

Interdimensional Era

Or maybe we should just call it “All That There Is.”

At least until a few years from now.

Grand Unified Humanity Theory

OK, maybe this post is going to be a little silly – apologies in advance.  I’m in that kind of mood.

Physicists recently created a fascinating concoction – a Bose-Einstein condensate (BEC) that was stable at a temperature 50% higher than critical.  Check out this article with the deets.  In this bizarre state of matter, all particles act in unison, entangled, as if they were collectively a single particle.  Back in Einstein’s day, BECs were envisioned to be composed of elementary bosons.  Later, theory predicted and experiments demonstrated condensates of whole atoms and, ultimately, of paired fermions.

A comparison is made to the analogous process of getting highly purified water to remain liquid at temperatures above its boiling point.  It seems that phase transitions of various types can be pushed beyond their normal critical point if the underlying material is “special” in some way – pure, balanced, coherent.

Superfluids.  Laser light.

It reminds me of the continuous advances in achieving superlative or “perfect” conditions, like superconductivity (zero resistance) at temperatures closer and closer to room temperature.  I then think of a characteristic that new agers ascribe to physical matter – “vibrational levels.”

Always connecting dots, sometimes finding connections that shouldn’t exist.

Given the trend of increasing purity, alignment, and coherence in conditions closer and closer to “normal” transitions and scales, might we someday see entangled complex molecules, like proteins?  BECs of DNA strands?

Why stop there?  Could I eventually be my own BEC?  A completely coherent vibrationally-aligned entity?  Cool.  I’ll bet I would be transparent and could walk through doors.

And what if science could figure out how to create a BEC out of all living things?  Nirvana.  Reconnecting with the cosmic consciousness.

Grand Unified Humanity Theory.