New Hints to How our Reality is Created

There is something fascinating going on in the world, hidden deep beneath the noise of Trump, soccer matches, and Game of Thrones. It is an exploration into the nature of reality – what is making the world tick?

To cut to the chase, it appears that our reality is being dynamically generated by an ultra-sophisticated algorithm that takes into account not just the usual cause-and-effect context (as materialists believe) and conscious observation and intent (as idealists believe), but also a complex array of reality-configuration probabilities, so as to be optimally efficient.

Wait, what?

This philosophical journey has its origins in the well-known double-slit experiment, originally performed by Thomas Young in 1801 to demonstrate that light had wavelike properties. In 1961, the experiment was performed with electrons, which also showed wavelike properties. The setup involved shooting electrons through a screen containing two thin vertical slits; the wave nature of the particles manifested as an interference pattern on a screen placed on the other side of the double-slit screen. It was a curious result, but it confirmed quantum theory. In 1974, the experiment was performed one electron at a time, with the same resulting interference pattern, which showed that the electrons were not interfering with each other; rather, the pattern on the screen followed a probabilistic spatial distribution function. Quantum theory predicted that if a detector were placed at each slit to determine which slit each electron went through, the interference pattern would disappear, leaving just two vertical bands, due to the quantum complementarity principle. This was difficult to set up in the lab, but experiments in the 1980s confirmed expectations – the “which way did the particle go” measurement killed the interference pattern. The mystery was that the mere act of observation seemed to change the results of the experiment.

So, at this point, people who were interested in how the universe works effectively split into two camps, representing two fundamental philosophies that set the foundation for thinking, analysis, hypothesis, and theorizing:

  1. Objective Materialism
  2. Subjective Idealism

A zillion web pages can be found for each category.

The problem is that most scientists, and probably at least 99% of all outspoken science trolls, believe in Materialism.  And “believe” is the operative word, because there is ZERO proof that Materialism is correct.  Nor is there proof that Idealism is correct.  So “believe” is all that can be done.  Although, as the massive amount of evidence leans in favor of Idealism, it is fair to say that those believers at least have the scientific method behind them, whereas materialists just have “well gosh, it sure seems like we live in a deterministic world.” What is interesting is that Materialism can be falsified, but I’m not sure that Idealism can be.  The Materialist camp had plenty of theories to explain the paradox of the double-slit experiments – alternative interpretations of quantum mechanics, local hidden variables, non-local hidden variables, a variety of loopholes, or simply the notion that the detector took energy from the particles and impacted the results of the experiment (as has been said, when you put a thermometer in a glass of water, you aren’t measuring the temperature of the water; you are measuring the temperature of the water with a thermometer in it).

Over the years, the double-slit experiment has been progressively refined to the point where most of the materialistic arguments have been eliminated. For example, there is now the delayed-choice quantum eraser experiment, which puts the “which way” detectors after the interference screen, making it impossible for the detector to physically interfere with the outcome of the experiment. And, one by one, the hidden-variable possibilities and loopholes have been disproven. In 2015, several experiments were performed independently that closed all loopholes simultaneously, with both photons and electrons. Since these experimental tests have, given the experimenters’ choices, ruled out local realism, the only remaining escape is what John Bell called super-determinism: a universe completely devoid of free will, running like clockwork, playing out a fully predetermined script of events. If true, this would bring about the extremely odd result that the universe is set up to ensure that the outcomes of these experiments imply the opposite of how the universe really works. But I digress…

The net result is that Materialism-based theories on reality are being chipped away experiment by experiment.  Those that believe in Materialist dogma are finding themselves being painted into an ever-shrinking philosophical corner. But Idealism-based theories are huge with possibilities, very few of which have been falsified experimentally.

Physicist and fellow digital philosopher Tom Campbell has boldly suggested a number of double-slit experiments that could probe the nature of reality a little deeper. Tom, like me, believes that consciousness plays a key role in the nature and creation of our reality. So much so that he believes the outcome of the double-slit experiments is due strictly to the conscious observation of the which-way detector data. In other words, if no human (or “sufficiently conscious” entity) observes the data, the interference pattern should remain. Theoretically, one could save the data to a file, store the file on a disk, hide the disk in a box, and the interference pattern would remain on the screen. Open the box a day later and the interference pattern should automatically disappear, effectively rewriting history with the knowledge of the paths of the particles. His ideas have incurred the wrath of the physics trolls, who are quick to point out that regardless of whether humans ever read the data, the interference pattern is gone if the detectors record the data. The data can be destroyed, or never even written to a permanent medium, and the interference pattern would still be gone. If these claims are true, they do not prove Materialism at all. But they do imply something very interesting.

From this and many, many other categories of evidence, it seems highly likely that our reality is being dynamically generated. Quantum entanglement, the quantum Zeno effect, and the observer effect all look very much like artifacts of an efficient system that dynamically creates reality as needed. It is the “as needed” part of this assertion that is most interesting. I shall refer to that which creates reality as “the system.”

Entanglement happens because, when a two-particle-generating event occurs, it is efficient to create the two particles using the same instance of a finite state machine; therefore, when it becomes necessary to determine the properties of one, the properties of the other are automatically known, as detailed in my blog post on entanglement. The quantum Zeno effect happens because it is more efficient to reset the probability function each time an observation is made, as detailed in my blog post on the quantum Zeno effect. And so what about the double-slit mystery? To illuminate, see the diagram below.

If the physicists are right, reality comes into existence at point 4 in the diagram. Why would that be? The paths of the particles are apparently not needed for the experience of the conscious observer, but rather to satisfy the consistency of the experiment. The fact that the detector registers the data is enough to create the reality. Perhaps the system “realizes” that it is less efficient to leave hanging experiments all over the place until a human “opens the envelope” than it is to instantiate real electron paths despite the unlikely possibility of data deletion. Makes logical sense to me. But it also indicates a sophisticated awareness of all of the probabilities of how the reality can play out vis-à-vis potential human interactions.

The system is really smart.
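The entanglement bookkeeping described above (two particles created from a single instance of a finite state machine, so that resolving one automatically resolves the other) can be sketched as a toy model. This is purely illustrative; the class and method names are my own invention, and the sketch is a metaphor for lazy evaluation, not real physics:

```python
import random

class EntangledPair:
    """Toy model: one finite-state-machine instance shared by two particles.

    The spin is left unresolved ("probability space") until either particle
    is measured; the first measurement resolves the shared state once, and
    both particles' outcomes are fixed from that point on.
    """

    def __init__(self):
        self._spin = None  # unresolved until the first measurement

    def measure(self, particle):
        """Measure particle 0 or particle 1; returns 'up' or 'down'."""
        if self._spin is None:
            # Lazily resolve the shared state exactly once.
            self._spin = random.choice(("up", "down"))
        if particle == 0:
            return self._spin
        # The partner particle always reports the complementary value.
        return "down" if self._spin == "up" else "up"

pair = EntangledPair()
a = pair.measure(0)
b = pair.measure(1)
assert {a, b} == {"up", "down"}  # always anti-correlated
```

The point of the sketch is the laziness: no definite spin exists anywhere until the first `measure()` call forces the shared state machine to resolve, at which point both particles’ outcomes are fixed at once, with no “communication” between them required.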

Comments on the Possibilist Transactional Interpretation of Quantum Mechanics, aka Models vs. Reality

Reality is what it is. Everything else is just a model.

From Plato to Einstein to random humans like myself, we are all trying to figure out what makes this world tick. Sometimes I think I get it pretty well, but I know that I am still a product of my times, and therefore my view of reality is seen through the lens of today’s technology and state of scientific advancement. As such, I would be a fool to think that I have it all figured out. As should everyone else.

At one point in our recent past, human scientific endeavor wasn’t so humble. Just a couple hundred years ago, we thought that atoms were the ultimate building blocks of reality and everything could be ultimately described by equations of mechanics. How naïve that was, as 20th century physics made abundantly clear. But even then, the atom-centric view of physics was not reality. It was simply a model. So is every single theory and equation that we use today, regardless of whether it is called a theory or a law: Relativistic motion, Schrodinger’s equation, String Theory, the 2nd Law of Thermodynamics – all models of some aspect of reality.

We seek to understand our world and derive experiments that push forward that knowledge. As a result of the experiments, we define models to best fit the data.

One of the latest comes from quantum physicist Ruth Kastner in the form of a model that better explains the anomalies of quantum mechanics. She calls the model the Possibilist Transactional Interpretation of Quantum Mechanics (PTI), an updated version of John Cramer’s Transactional Interpretation of Quantum Mechanics (TIQM, or TI for short) proposed in 1986. The transactional nature of the theory comes from the idea that the wavefunction collapse behaves like a transaction in that there is an “offer” from an “emitter” and a “confirmation” from an “absorber.” In the PTI enhancement, the offers and confirmations are considered to be outside of normal spacetime and therefore the wavefunction collapse creates spacetime rather than occurs within it. Apparently, this helps to explain some existing anomalies, like uncertainty and entanglement.

This is all cool and seems to enhance our understanding of how QM works. However, it is STILL just a model, and a fairly high-level one at that. And all models are approximations – descriptions of reality that most closely match the experimental evidence.

Underneath all models exist deeper models (e.g. string theory), many as yet to be supported by real evidence. Underneath those models may exist even deeper models. Consider this layering…


Every layer contains models that may be considered to be progressively closer to reality. Each layer can explain the layer above it. But it isn’t until you get to the bottom layer that you can say you’ve hit reality. I’ve identified that layer as “digital consciousness”, the working title for my next book. It may also turn out to be a model, but it feels like it is distinctly different from the other layers in that, by itself, it is no longer an approximation of reality, but rather a complete and comprehensive yet elegantly simple framework that can be used to describe every single aspect of reality.

For example, in Digital Consciousness, everything is information. The “offer” is then “the need to collapse the wave function based on the logic that there is now an existing conscious observer who depends on it.” The “confirmation” is the collapse – the decision made from probability space that defines positions, spins, etc. This could also be seen as the next state of the state machine that defines such behavior. The emitter and absorber are both parts of the “system”, the global consciousness that is “all that there is.” So, if experimental evidence ultimately demonstrates that PTI is a more accurate interpretation of QM, it will nonetheless still be a model and an approximation. The bottom layer is where the truth is.

Elvidge’s Postulate of Countable Interpretations of QM…

The number of interpretations of Quantum Mechanics always exceeds the number of physicists.

Let’s count the various “interpretations” of quantum mechanics:

  • Bohm (aka Causal, or Pilot-wave)
  • Copenhagen
  • Cosmological
  • Ensemble
  • Ghirardi-Rimini-Weber
  • Hidden measurements
  • Many-minds
  • Many-worlds (aka Everett)
  • Penrose
  • Possibilist Transactional (PTI)
  • Relational (RQM)
  • Stochastic
  • Transactional (TIQM)
  • Von Neumann-Wigner
  • Digital Consciousness (DCI, aka Elvidge)

Unfortunately you won’t find the last one in Wikipedia. Give it about 30 years.


Which came first, the digital chicken, or the digital philosophy egg?

Many scientists, mathematicians, futurists, and philosophers are embracing the idea that our reality is digital these days. In fact, it would be perfectly understandable to wonder if digital philosophy itself is tainted due to the tendency of humans to view ideas through the lens of their times. We live in a digital age, surrounded by computers, the Internet, and smart phones, and so might we not be guilty of imagining that the world behaves just as a multi-player video game does? We probably wouldn’t have had such ideas 50 years ago, when, at a macroscopic level at least, everything with which we interacted appeared analog and continuous. Which came first, the digital chicken, or the digital philosophy egg?

Actually, the concepts of binary and digital are not at all new. The I Ching is an ancient Chinese text that dates to 1150 BCE. In it are 64 combinations of 8 trigrams (aka the Bagua), each of which clearly encodes three bits of a binary code.

Many other cultures, including the Mangareva in Polynesia (1450) and India (5th to 2nd century BCE), have used binary encodings for communication for thousands of years. African tribes developed a binary divination system called Odu Ifa that is said to date back over 12,000 years.

German mathematician and philosopher Gottfried Leibniz is generally credited as developing the modern binary number system in 1679, based on zeros and ones. Naturally, all of these other cultures are ignored so that we can maintain the illusion that all great philosophical and mathematical thought originated in Europe. Regardless of Eurocentric biases, it is clear that binary encoding is not a new concept. But what about applying it to the fundamental construct of reality?

It turns out that while modern digital physics and digital philosophy references are replete with sources dating only to the mid-20th century, some ancient Greeks (notably the atomists, Leucippus and Democritus) believed that reality was discrete: atoms were considered to be the discrete, fundamental components of reality.

A quick clarification of the terms “discrete”, “digital”, “binary”, “analog”, and “continuous” is probably in order:

Discrete – Having distinct points of measurement in the time domain

Digital – Having properties that can be encoded into bits

Binary – Encoding that is done with only two digits, zeros and ones

Analog – Having continuously variable properties

Continuous – The time domain is continuous

So, for example, if we encode the value of some property (e.g. length or voltage) digitally using 3 values (0, 1, 2), that would be digital, but not binary (rather, ternary). If we say that between any two points in time there is an infinitely divisible time element, but for each point the value of the measurement being performed on some property is represented by bits, then we would have a continuous yet digital system. Conversely, if time can be broken into chunks such that, at a fine enough temporal granularity, there is no concept of time between two adjacent points in time, but at each of these time points the value of the measurement being performed is continuously variable, then we would have a discrete yet analog system.
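The digital-versus-binary distinction is easy to make concrete in code. Here is a minimal sketch (the `encode` helper is my own name, not a standard function) that renders the same value in base 2 and base 3; both encodings are digital, but only the first is binary:

```python
def encode(value, base):
    """Encode a non-negative integer as a list of digits in the given base."""
    if value == 0:
        return [0]
    digits = []
    while value:
        digits.append(value % base)
        value //= base
    return digits[::-1]  # most significant digit first

print(encode(11, 2))  # [1, 0, 1, 1] -> binary (and therefore digital)
print(encode(11, 3))  # [1, 0, 2]    -> ternary: digital, but not binary
```

Any finite set of digit values qualifies as digital; binary is simply the special case where that set has exactly two members.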

In the realm of consciousness-driven digital philosophy, it is my contention that the evidence strongly supports reality being discrete and digital; that is, time moves on in “chunks” and at each discrete point in time, every property of everything can be perfectly represented digitally. There are no infinities.

I believe that this is a logical and fundamental conclusion, regardless of the fact that we live in a digital age. There are many reasons for this, but for the purposes of this particular blog post, I shall only concentrate on a couple. Let’s break down the possibilities of our reality, in terms of origin and behavior:

  1. Type 1 – Our reality was created by some conscious entity and has been following the original rules established by that entity. Of course, we could spend a lifetime defining “conscious” or “entity,” but let’s try to keep it simple. This scenario could include traditional religious origin theories (e.g. God created the heavens and the earth). It could also include the common simulation scenarios, a la Nick Bostrom’s “Simulation Argument.”
  2. Type 2 – Our reality was originally created by some conscious entity and has been evolving according to some sort of fundamental evolutionary law ever since.
  3. Type 3 – Our reality was not created by some conscious entity, and its existence sprang out of nothing and has been following primordial rules of physics ever since. To explain the fact that our universe is incredibly finely tuned for matter and life, materialist cosmologists dreamt up the idea that we must exist in an infinite set of parallel universes, and, via the anthropic principle, the one we live in only appears finely tuned because it has to in order for us to be in it. Occam would be turning over in his grave.
  4. Type 4 – Our reality was not created by some particular conscious entity, but rather has been evolving according to some sort of fundamental evolutionary law from the very beginning.

I would argue that in the first two cases, reality would have to be digital. For, if a conscious entity is going to create a world for us to live in and experience, that conscious entity is clearly highly evolved compared to us. And, being so evolved, it would certainly make use of the most efficient means to create a reality. A continuous reality is not only inefficient, it is theoretically impossible to create because it involves infinities in the temporal domain as well as any spatial domain or property.

I would also argue that in the fourth case, reality would have to be digital for similar reasons. Even without a conscious entity as a creator, the fundamental evolutionary law would certainly favor a perfectly functional reality that doesn’t require infinite resources.

Only in the third case above, would there be any possibility of a continuous analog reality. Even then, it is not required. As MIT cosmologist and mathematician Max Tegmark succinctly put it, “We’ve never measured anything in physics to more than about sixteen significant digits, and no experiment has been carried out whose outcome depends on the hypothesis that a true continuum exists, or hinges on nature computing something uncomputable.” Hence there is no reason to assume, a priori, that the world is continuous. In fact, the evidence points to the contrary:

  • Infinite resolution would imply that matter implodes into black holes at sub-Planck scales and we don’t observe that.
  • Infinite resolution implies that relativity and quantum mechanics can’t coexist, at least with the best physics that we have today. Our favorite contenders for reconciling relativity and quantum mechanics are string theory and loop quantum gravity, and they only work with minimal-length (aka discrete) scales.
  • We actually observe discrete behavior in quantum mechanics. For example, a particle’s spin value is always quantized; there are no intermediate states. This is anomalous in continuous space-time.

For many other reasons, as are probably clear from the evidence compiled on this site, I tend to favor reality Type 4. No other type of reality structure and origin can be shown to be anywhere near as consistent with all of the evidence (philosophical, cosmological, mathematical, metaphysical, and experimental). And it has nothing to do with MMORPGs or the smart phone in my pocket.

Objective vs. Subjective Reality

Today’s blog is one part rehash of an ancient dilemma that has puzzled and divided philosophers and scientists for millennia and two parts The Universe – Solved!

First a couple definitions…

Objective Reality – a reality that completely exists independent of any conscious entity to observe it.

Subjective Reality – what we perceive.

As is well known, subjective reality is “subject” to an elaborate set of filters, any one of which can modify a perception of that reality: sensory apparatus (e.g. the rods and cones in our eyes), sensory processing (e.g. the visual cortex), higher-level brain function, and psychological factors (e.g. expectations). As such, what one person experiences is always different from what any other person experiences, though usually in subtle ways.

Fundamentally, one cannot prove the existence of an objective reality. We can only infer its properties through observations, which of course, are subjective. However, it may be possible to prove that objective reality doesn’t exist, if, for example, it can be shown that the properties inferred via a particular observer fundamentally contradict properties inferred via another observer. But even then those inferences may be hopelessly subjective. Suppose person A sees a car as red and person B sees the same car as green. We can’t conclude that there is no objective reality because person B could simply have an unusual filter somewhere between the car and the seat of their consciousness.

What if we can use some sort of high-precision, reproducible measurement apparatus to make observations on reality and find that, under certain controlled circumstances, reality changes depending on some parameter that appears to be disconnected from the reality itself? There are a lot of qualifiers and imperfections in that question – like “high (vs. infinite) precision” and “appears” – but what comes to mind is the well-known double-slit experiment. In 1998, researchers at the Weizmann Institute of Science demonstrated that reality shifts depending on the amount of observation, even if the “observer” is a completely non-intrusive device. IQOQI upped the ante in terms of precision in 2008 by showing that objective reality doesn’t exist to a certainty of 80 orders of magnitude (probability of being false due to error or chance = 1E-80). That’s good enough for me. And, in 2012, Dr. Dean Radin conducted what appear to be well-designed and rigorous scientific experiments showing, to a high probability, that conscious intent can directly alter the results of the double-slit experiment. Just as it only takes one white crow to prove that not all crows are black, it only takes one experiment demonstrating the non-existence of objective reality to prove that objective reality is an illusion.

So that debate is over. Let’s get past it and move on to the next interesting questions.

What is this reality that we all perceive to be “almost” solid and consistent?

I believe it is a digital consciousness-influenced high-consensus reality for reasons outlined here. It has to have a high degree of consensus because, in order to learn and evolve our consciousness, we have to believe in a well-grounded cause and effect.

What does “almost” mean?

We could define “almost” as 1 minus the degree to which apparent objective reality is inconsistent, either between separate observers or in experiments that have different outcomes depending on the state of the observer. For now, I’ll have to punt on the estimates because I haven’t found any supporting research, but I suspect it is between 99.999% and 100%.

How does “almost” work?

Subjective reality does not mean that you can call the shots and become a millionaire just due to intent. The world would be insane if that were the case. Because of the “consensus” requirement, the effects are much more subtle than that. For you to see a passing car and make it turn red just because you want to, would violate the color consensus that must be maintained for the other 1000 people that see that car drive by. In fact, there is nothing to say that the aggregate of conscious intents from all conscious entities fully shape the subjective reality. Most of it may be driven by the rules of the system (that aspect of digital global consciousness that drives the projection of the physical reality). See the figure below. In the digital global consciousness system (see my “The Universe-Solved!” or Tom Campbell’s “My Big TOE” for more in depth explanations of this view of the nature of reality), Brandon and I are just individuated segments of the greater whole. (Note: This is how we are all connected. The small cloud borders are not impervious to communication, either from other individuated consciousnesses (aka telepathy) or from the system as a whole (aka spiritual enlightenment)).

[Figure: the digital global consciousness system, with individuated consciousnesses as segments of the whole]

Brandon’s reality projection may have three components. First, it is generated by the system, based on whatever rules the system has for creating our digital reality. Second, it may be influenced by the aggregate of the intent of all conscious entities, which is also known by the system. Finally, his projection may be slightly influenced by his own consciousness. The same applies to my own projection. Hence, our realities are slightly different, but not enough to notice on a day-to-day basis. Only now that our scientific instrumentation has become sensitive enough, are we starting to be able to realize (but not yet quantify) this. Perhaps 5% of reality is shaped by the aggregate consensus and 95% by the system itself. Or 1% and 99%. Or .00001% and 99.99999%. All are possible, but none are objective.

Embracing Virtuality

In 2009, a Japanese man married a woman named Nene Anegasaki on the island of Guam.  The curious thing was that Nene was a virtual character in the Nintendo videogame LovePlus.


In 2013, Spike Jonze directed the highly acclaimed (and Academy Award nominated) film “Her”, in which the protagonist falls in love with an OS (operating system) AI (artificial intelligence).


Outrageous you say?

Consider that for centuries people have been falling in love sight unseen via snail mail.  Today, with online dating, this is even more prevalent.  Philosophy professor Aaron Ben-Ze’ev notes that online technology “enables having a connection that is faster and more direct.”

So it got me thinking that these types of relationships aren’t that different from the virtual ones that are depicted in “Her” and are going to occur with increasing frequency as AI progresses.  The interactions are exactly the same; it is just that the entity at the end of the communication channel is either real or artificial.

But wait, what is artificial and what is real?  As Morpheus said in “The Matrix,” “What is real? How do you define ‘real’? If you’re talking about what you can feel, what you can smell, what you can taste and see, then ‘real’ is simply electrical signals interpreted by your brain.”  This is not just philosophy; this is as factual as you can get.

As a growing number of researchers, physicists, and philosophers come to terms with the supporting evidence that we already live in a virtual reality, we realize that there is no distinction between a virtual entity that we think is virtual (such as a game character) and a virtual entity that we think is real (such as the person you are in a relationship with).  Your consciousness does not emerge from your brain; its seat is elsewhere.  Your lover’s consciousness therefore is also elsewhere.  You are interacting with it via the transfer of data and your emotions are part of your core consciousness.  Does it matter whether that data transfer is between two conscious entities outside of physical reality or between a conscious entity and another somewhat less conscious entity?

As technology progresses, AI advances, and gaming and simulations become more immersive, falling in love or having any other kind of emotional experience will be occurring more and more frequently with what we today think of as virtual entities.

Now, it seems shocking.  Tomorrow it will be curious.  Eventually it will be the norm.

The Consensus Reality Spectrum

I have recently been on a quest to learn more about the greater “landscape” of realities and have actually had some rewarding successes.  I call them all realities because the definition of the word “real” is entirely arbitrary and subjective; hence, everything may be considered a reality.  During a recent lucid dream, I had a revelation.  In retrospect, it doesn’t seem as substantial an idea now as it did then, but here is the gist of it:

The only significant difference between a dream state and what we think of as our “normal physical reality” is the level of consensus that is applied to it.

When we dream or fantasize, our minds are fully in control of creating the reality that we take part in.  In our physical world, however, this is clearly not the case.  We can’t just make the sky red, fly, or defy the laws of physics.  However, there is incontrovertible evidence that we can mold our reality, as demonstrated by the observer-effect and conscious-intent experiments discussed earlier on this site.

And, as if to put the final nail in the materialistic determinism coffin, scientists at the prestigious IQOQI institute in Vienna, demonstrated to a certainty of 1 part in 1E80 that objective reality does not exist.

So why does physical reality seem so real?  It is because it is designed that way.  We are much more likely to learn when we believe in well-grounded cause and effect.  Seriously, when was the last time you actually consciously learned something from a dream? (Subconsciously, that is a different story.)  In order for us to get something useful out of this physical-matter-reality learning lab, we must believe it is somehow more real than what we can conjure up in our minds.  But, again, all that means is that our experience is relatively consistent with that of our free-willed friends and colleagues.  She sees a blue car, you see a blue car, you both describe it the same way, it therefore seems real and objective.  Others have referred to this as a consensus reality, a descriptor that fits well.

It is not unlike a large-scale computer game.  In a FPS (first person shooter), only you are experiencing the sim.  In an MMORPG (massively multiplayer online role playing game), everyone experiences the same sim.  However, if you think about it, there is no reason why the game can’t present different aspects of the sim to different players based on their attributes or skills.  In fact, this is exactly what some games do.

So, one can imagine a spectrum of “consensus influence,” with various realities placed somewhere on that spectrum.  At the far left is solipsism – realities that belong to a singular conscious entity.  We may give this a consensus factor of 0, since there is none.  At the other end of the spectrum is our physical matter reality, what most of us call “the real world.”  We can’t give it a consensus factor of 100, because of the observer effect; 100 would have to be reserved for the concept of a fully deterministic reality, a concept which, like the concept of infinity, only exists in theory.  So our physical matter reality (PMR) is 99.99-something.

Everything else falls in between.

[Figure: the consensus reality spectrum]

Many researchers have experienced realities at various points on this spectrum.  Individual OBEs that have closely locked into PMR are at the high-consensus end of the scale.  OBEs that are more fluid are somewhere in the middle.  Mutual lucid dreaming can be considered a consensus of two and is therefore somewhere toward the low-consensus side of the spectrum.

I believe that this may be a useful model for those psychonauts, astral travelers, and quantum physicists among us.

Creating Souls is like Boiling the Ocean

Let’s say you want to boil the ocean.  Or, to make a slightly less violent example, let’s say you want to raise the temperature of the ocean by one degree (note: various sources indicate that this requires 3.4E25 joules of energy).  How might you go about doing it?

One possibility is to find a corner of the ocean and attempt to heat it through some means.  I can imagine buying a heat lamp at Home Depot, getting a really long extension cord, plugging it in, leaving it on the sand in Playa del Rey, and waiting for the ocean to warm up.

Clearly, a highly ineffective strategy.

Assuming zero radiative cooling to the atmosphere, 100% heat transfer from the heat lamp to the water (both invalid assumptions), and a convection heating process whose losses are insignificant, it would take a 1000-watt heat lamp about 78,000 times the age of the universe to accomplish that task.  The biggest part of the problem is that we are applying a relatively tiny amount of energy to the problem.

But what if we were able to distribute the energy source and hover a 1000-watt heat lamp over every square meter of ocean water?  Now the problem becomes a combination of source power and convection process (how long it takes for the heating at the surface to make its way to the bottom of the ocean).  In this case, we would be applying 3.6E14 times the power, which should reduce the duration of heating to only about 3 years.  However, now we are bound by the slowness of the convection process, which would take 200 million years, again assuming no radiative cooling.  Still highly ineffective, but for a different reason.

Now, what if we were able to apply a 1000-watt heating source to every cubic meter of water in the ocean?  Disregarding convection, it would take only a matter of hours (about seven, using the figures above) to supply enough energy to raise the ocean temperature by one degree.  Convection inefficiencies could be resolved by further subdividing the ocean (e.g., have a 1 watt heating source per liter of water).
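The three scenarios above are easy to check with a back-of-the-envelope script.  The 3.4E25 joule figure is the one quoted earlier; the ocean surface area (~3.6E14 m²), ocean volume (~1.34E18 m³), and age of the universe (~13.8 billion years) are commonly cited round figures, not values from the original text, and all losses are ignored.

```python
# Back-of-the-envelope check of the three ocean-heating scenarios.
ENERGY_J = 3.4e25            # energy to warm the ocean 1 degree (per the text)
OCEAN_AREA_M2 = 3.6e14       # approximate ocean surface area (assumed)
OCEAN_VOLUME_M3 = 1.34e18    # approximate ocean volume (assumed)
SECONDS_PER_YEAR = 3.156e7
AGE_OF_UNIVERSE_S = 13.8e9 * SECONDS_PER_YEAR

def heating_time_s(total_watts):
    """Seconds needed to deliver ENERGY_J at the given total power, no losses."""
    return ENERGY_J / total_watts

# Scenario 1: a single 1000 W heat lamp on the beach
t1 = heating_time_s(1000)
print(f"single lamp: {t1 / AGE_OF_UNIVERSE_S:,.0f} ages of the universe")

# Scenario 2: one 1000 W lamp per square meter of ocean surface
t2 = heating_time_s(1000 * OCEAN_AREA_M2)
print(f"per square meter: {t2 / SECONDS_PER_YEAR:.1f} years")

# Scenario 3: one 1000 W source per cubic meter of ocean water
t3 = heating_time_s(1000 * OCEAN_VOLUME_M3)
print(f"per cubic meter: {t3 / 3600:.1f} hours")
```

With these assumed figures, scenario 1 comes out to roughly 78,000 ages of the universe and scenario 2 to about 3 years, matching the text, while scenario 3 lands at around seven hours.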

What is interesting about this is the simple observation that distributing a process recursively can be hugely more efficient than injecting energy at a single point, or even distributing it across a single surface.

There are all sorts of situations for which this metaphor can be useful.  For example, let’s say you want to start a movement, like OWS.  If your method of distribution is to stand on a street corner with a megaphone, it will take a very long time for your message to reach the rest of the 300 million people in the country.  However, if you are able to recruit 1000 lieutenants, each of whom is armed with the same energy and message, and send them out to 1000 population epicenters, the movement will grow much faster; perhaps even 1000 times faster.  But that may still not be the fastest possible way to achieve the end result, because each lieutenant still has to reach 300 thousand people.  If each of the 1000 lieutenants then recruits 1000 sergeants, each sergeant only has to reach 300 people.  Any further levels of distribution would probably only result in overlapping audiences and thus not achieve any incremental effectiveness.  I cannot think of a more efficient way to achieve the desired result than this recursive distribution process.
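The fan-out arithmetic above can be sketched in a few lines: with a branching factor of 1000 and a population of 300 million, each added level of recruiting divides the audience per front-line messenger by 1000.

```python
# Sketch of the recursive fan-out argument: each level of recruiting
# divides the per-messenger audience by the branching factor.
POPULATION = 300_000_000
BRANCHING = 1000

def audience_per_messenger(levels):
    """People each front-line messenger must reach after `levels` of recruiting."""
    return POPULATION / (BRANCHING ** levels)

print(audience_per_messenger(0))  # one person with a megaphone
print(audience_per_messenger(1))  # 1000 lieutenants: 300 thousand each
print(audience_per_messenger(2))  # a million sergeants: 300 each
```

A third level would give 1000 messengers per 300 people, which is where the audience overlap the text mentions sets in.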

Let’s apply this idea to the ultimate metaphysical scenario, whereby the grand purpose behind “all that there is” is to increase the quality of the universal consciousness.  How might a universal consciousness self-organize in such a way as to optimize the rate of growth of consciousness quality?

The answer is to follow the recursive model outlined above for boiling the ocean.  Break the universal consciousness into chunks and ask each chunk to optimize its quality level through some sort of consistent organizing principle.  Each chunk can in turn break itself into even smaller chunks and make the same request, until the chunks are so small that they start to overlap their function.  Those smallest practical chunks are our individual consciousnesses.  The goal of each individual consciousness would be to raise its quality level.  How?  Perhaps via experiences obtained from this learning lab virtual reality we call “physical reality.”  Think “All You Need is Love” by The Beatles.
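The chunking scheme described above, splitting until further division would only produce overlap, is essentially a recursive base case.  A minimal sketch, with an entirely illustrative branching factor and minimum chunk size:

```python
# Sketch of recursive subdivision: split a "chunk" until the next split
# would fall below a minimum useful size, then count the resulting leaves.
# The branching factor and minimum size are illustrative assumptions.
def subdivide(size, min_size, branching=2):
    """Return the number of smallest practical chunks produced."""
    if size / branching < min_size:
        return 1  # too small to split further without overlap of function
    return sum(subdivide(size / branching, min_size, branching)
               for _ in range(branching))

# Splitting a whole of size 1.0 down to chunks no smaller than 1/8
print(subdivide(1.0, 0.125))  # -> 8
```

The base case is the whole point of the metaphor: the recursion bottoms out at the smallest chunks that still do useful, non-overlapping work, which the text identifies with individual consciousnesses.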

These ideas of individuated consciousnesses increasing their quality level, thereby contributing to the quality of the whole, are well documented by Tom Campbell (“My Big TOE”) and Steven Kaufman (“Unified Reality Theory”).  I am merely providing an ocean boiling metaphor as a means to relate to the idea of optimizing the efficiency of a change process via recursive distribution.

Perhaps this is why we see fractal patterns all over the universe – similar structures at different scales imply an underlying recursive process at work.

And, after all, wouldn’t we expect the universal consciousness to be pretty efficient after all these years?

[Image: flame fractal]