You Are Not Your Body

The debate rages on, but those of us who have done the research know which side is true.

We are NOT our bodies.

I am posting this as a reference to all of the excellent scientific research that has been done around this topic so that I can easily refer to it during future blog posts.  For example…

Gary Schwartz, Harvard-educated professor of psychology, medicine, neurology, psychiatry, and surgery at the University of Arizona, has published extensive research in peer-reviewed journals and several books, such as “The Afterlife Experiments: Breakthrough Scientific Evidence of Life After Death”, where he states: “consciousness exists independently of the brain. It does not depend upon the brain for its survival. Mind is first, the brain is second. The brain is not the creator of mind; it is a powerful tool of the mind. The brain is an antenna/receiver for the mind, like a sophisticated television or cell phone.”

– Here are 290 Scientific papers on NDEs, such as: K. Ring and M. Lawrence, Further evidence for veridical perception during near-death experiences, Journal of Near-Death Studies, 11 (1993), pp. 223-229, which provide evidence and support for the theory that consciousness is separate from the brain.

– Research by the University of Virginia School of Medicine Division of Perceptual Studies includes a compilation of 12 books on reincarnation, 39 articles and research papers, 3 books on NDEs, and 71 articles and research papers, all supporting the evidence that we are not our bodies.

– “Irreducible Mind: Toward a Psychology for the 21st Century” was written by six interdisciplinary scientists who present years of evidence and thought leading to the conclusion that the mind is “an entity independent of the brain or body.”

– Cardiologist Pim van Lommel presents 20 years of research and supporting scientific data on Near Death Experiences in his book “Consciousness Beyond Life: The Science of the Near-Death Experience,” in which he concludes: “Ultimately, we cannot avoid the conclusion that endless consciousness has always been and always will be, independent of the body.”

– Harvard-educated neurosurgeon Eben Alexander explains in this article about his new book that “the brain itself doesn’t produce consciousness” and that “it is, instead, a kind of reducing valve or filter, shifting the larger, nonphysical consciousness that we possess in the nonphysical worlds down into a more limited capacity for the duration of our mortal lives.”

– Kenneth Ring is a Professor Emeritus of Psychology at the University of Connecticut.  In his new book, “Mindsight: Near-Death and Out-of-Body Experiences in the Blind”, he documents 31 cases of blind people who had OBEs and NDEs and not only gained “knowledge of facts they could only have learned through a faculty like vision”, but whose accounts were also corroborated by relevant eyewitnesses.

There is much more – this barely scratches the surface.  Don’t take my word for it, do your own research.  If you maintain an open mind, you will find that there is a boatload of supporting evidence for a separate brain and consciousness.  And pretty much no evidence to the contrary.

And yet, the idea is heretical in scientific circles.  Because it is not understood, it is scary to the closed-minded.

The Power of Intuition in the Age of Uncertainty

Have you ever considered why it is that you decide some of the things that you do?

Like how to divide your time across the multiple projects that you have at work, when to discipline your kids, what to do on vacation, who to marry, what college to attend, which car to buy?

The ridiculously slow way to figure these things out is to do an exhaustive analysis on all of the options, potential outcomes and probabilities.  This can be extremely difficult when the parameters of the analysis are constantly changing, as is often the case.  Such analysis is making use of your conscious mind.

The other option is to use your subconscious mind and make a quick intuitive decision.

We who have been educated in the West, and especially those of us who received our training in engineering or the sciences, are conditioned to believe that “analysis” represents rigorous logical scientific thinking and “intuition” represents new age claptrap or occasional maternal wisdom.  Analysis good, intuition silly.

This view is quite inaccurate.

According to Gary Klein, ex-Marine, psychologist, and author of the book “The Power of Intuition: How to Use Your Gut Feelings to Make Better Decisions at Work,” 90% of the critical decisions that we make are made by intuition anyway.  Intuition can actually be a far more accurate, and certainly faster, way to make an important decision.  Here’s why…

Consider the mind to be composed of two parts – conscious and subconscious.  Admittedly, this division may be somewhat arbitrary, but it is also realistic.

The conscious mind is that part of the mind that deals with your current awareness (sensations, perceptions, memories, feelings, fantasies, etc.).  Research shows that the information processing rate of the conscious mind is actually very low.  Tor Nørretranders, author of “The User Illusion”, estimates the rate at only 16 bits per second.  Dr. Timothy Wilson from the University of Virginia estimates the conscious mind’s processing capacity to be a little higher, at 40 bits per second.  In terms of the number of items that can be retained at one time by the conscious mind, estimates vary from 4 to 7, with the lower number coming from a 2008 study published by the National Academy of Sciences.

Contrast that with the subconscious mind, which is responsible for all sorts of things: autonomous functions, subliminal perceptions (all of that data streaming into your five sensory interfaces that you barely notice), implicit thought, implicit learning, automatic skills, association, implicit memory, and automatic processing.  Much of this can be combined into what we consider “intuition.”  Estimates for the information processing capacity and storage capacity of the subconscious mind vary widely, but they are all orders of magnitude larger than their conscious counterparts.  Dr. Bruce Lipton, in “The Biology of Belief,” notes that the processing rate is at least 20 Mbits/sec and maybe as high as 400 Gbits/sec.  Estimates for storage capacity run as high as 2.5 petabytes, or 2,500,000,000,000,000 bytes.
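To see just how large that gap is, we can divide the figures quoted above directly (a quick sketch using only the cited estimates, which are of course rough):

```python
# Ratio between the cited subconscious and conscious processing rates.
# These are the estimates quoted above (Wilson, Lipton), not measurements.

conscious_bps = 40          # conscious mind, bits per second (Wilson)
subconscious_low = 20e6     # subconscious lower bound, 20 Mbit/s (Lipton)
subconscious_high = 400e9   # subconscious upper bound, 400 Gbit/s (Lipton)

print(subconscious_low / conscious_bps)   # 500000.0 — half a million times
print(subconscious_high / conscious_bps)  # 10000000000.0 — ten billion times
```

Even at the conservative end, the subconscious estimate dwarfs the conscious one by five to ten orders of magnitude.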

Isn’t it interesting that the rigorous analysis that we are so proud of is effectively done on a processing system that is excruciatingly slow and has little memory capacity?

Whereas, intuition is effectively done on a processing system that is blazingly fast and contains an unimaginable amount of data.  (Note: as an aside, I might mention that there is actually significant evidence that the subconscious mind connects with powerful data and processing elements outside of the brain, which only serves to underscore the message of this post.)

Kind of gives you a little more respect for intuition, doesn’t it?

In fact, that’s what intuition is – the same analysis that you might consider doing consciously, but doing it instead with access to far more data, such as your entire wealth of experience, and the entire set of knowledge to which you have ever been exposed.

Sounds great, right?  It might be a skill that could be very useful to hone, if possible.

But the importance of intuition only grows exponentially as time goes on.  Here’s why…

Eddie Obeng is a professor at the School of Entrepreneurship and Innovation at Henley Business School in the UK.  He gave a TED talk which nicely captured the essence of our times in terms of information overload.  The following chart from that talk demonstrates what we all know and feel is happening to us:

[Chart from Obeng’s TED talk: information rate vs. time]

The horizontal axis is time, with “now” being all the way to the right.  The vertical axis depicts information rate.

The green curve represents the rate at which we humans can absorb information, aka “learn.”  It doesn’t change much over time, because our biology stays pretty much the same.

The red curve represents the rate at which information is coming at us.

Clearly, there was a time in the past when we had the luxury of being able to take the necessary time to absorb all of the information needed to understand the task or project at hand.  If you are over 40, you probably remember working in such an environment.  At some point, however, the incoming data rate exceeded our capacity to absorb it.  TV news with two or three rolling tickers, tabloids, zillions of web sites to scan, Facebook posts, tweets, texts, blogs, social networks, information repositories, big data, etc.  For some of us, it happened a while ago; for others, more recently.  I’m sure there are still some folks who live simpler lives on farms in rural areas that haven’t passed the threshold yet.  But they aren’t reading this blog.  As for the rest of us…

It is easy to see that as time goes on, the ratio of unprocessed incoming information to human learning capacity grows exponentially.  What this means is that there is increasingly more uncertainty in our world, because we just don’t have the ability to absorb the information needed to be “certain”, like we used to.  Some call it “The Age of Uncertainty.”  Some refer to the need to be “comfortable with ambiguity.”

This is a true paradigm shift.  A “megatrend.”   It demands entirely new ways of doing business, of structuring companies, of planning, of living.  In my “day job”, I help companies come to terms with these changes by implementing agile and lean processes, structures, and frameworks in order for them to be more adaptable to the constantly changing environment.  But this affects all of us, not just companies.  How do we cope?

One part to the answer is to embrace intuition.  We don’t have time to use the limited conscious mind apparatus to do rigorous analysis to solve our problems anymore.  As time goes on, that method becomes less and less effective.  But perhaps we can make better use of that powerful subconscious mind apparatus by paying more attention to our intuition.  It seems to be what some of our most successful scientists, entrepreneurs, and financial wizards are doing:

George Soros said: “My [trading] decisions are really made using a combination of theory and instinct. If you like, you may call it intuition.”

Albert Einstein said: “The intellect has little to do on the road to discovery. There comes a leap in consciousness, call it intuition or what you will, and the solution comes to you, and you don’t know how or why.”  He also said: “The only real valuable thing is intuition.”

Steve Jobs said: “Don’t let the noise of others’ opinions drown out your own inner voice. And most important, have the courage to follow your heart and intuition.”

So how do the rest of us start paying more attention to our intuition?  Here are some ideas:

  • Have positive intent and an open mind
  • Go with the first thing that comes to mind
  • Notice impressions, connections, coincidences (a journal or buddy may help)
  • Put yourself in situations where you gain more experience about the desired subject(s)
  • 2-column exercises
  • Meditate / develop point-focus
  • Visualize success
  • Follow your path

I am doing much of this and finding it very valuable.

Complexity from Simplicity – More Support for a Digital Reality

Simple rules can generate complex patterns or behavior.

For example, consider the following simple rules that, when programmed into a computer, can result in beautiful complex patterns akin to a flock of birds:

1. Steer to avoid crowding local flockmates (separation)
2. Steer towards the average heading of local flockmates (alignment)
3. Steer to move toward the average position (center of mass) of local flockmates (cohesion)

The pseudocode here demonstrates the simplicity of the algorithm.  There is also a YouTube video demonstrating “Boids”, a flocking behavior simulator developed by Craig Reynolds.
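To make the three rules concrete, here is a minimal Boids-style update step in Python.  The weights, perception radius, and speed cap are arbitrary illustrative choices, not Reynolds’ published parameters:

```python
import math
import random

class Boid:
    """A bird-like agent with a position and a velocity."""
    def __init__(self):
        self.x, self.y = random.uniform(0, 100), random.uniform(0, 100)
        self.vx, self.vy = random.uniform(-1, 1), random.uniform(-1, 1)

def neighbors(boid, flock, radius=20.0):
    """Flockmates within the local perception radius."""
    return [o for o in flock if o is not boid
            and (o.x - boid.x) ** 2 + (o.y - boid.y) ** 2 < radius ** 2]

def step(flock):
    for b in flock:
        near = neighbors(b, flock)
        if not near:
            continue
        n = len(near)
        # 1. Separation: steer away from crowding local flockmates.
        sep_x = sum(b.x - o.x for o in near)
        sep_y = sum(b.y - o.y for o in near)
        # 2. Alignment: steer toward the average heading of flockmates.
        ali_x = sum(o.vx for o in near) / n - b.vx
        ali_y = sum(o.vy for o in near) / n - b.vy
        # 3. Cohesion: steer toward the flockmates' center of mass.
        coh_x = sum(o.x for o in near) / n - b.x
        coh_y = sum(o.y for o in near) / n - b.y
        b.vx += 0.05 * sep_x + 0.05 * ali_x + 0.01 * coh_x
        b.vy += 0.05 * sep_y + 0.05 * ali_y + 0.01 * coh_y
        # Cap the speed so the simulation stays stable.
        speed = math.hypot(b.vx, b.vy)
        if speed > 2.0:
            b.vx, b.vy = 2.0 * b.vx / speed, 2.0 * b.vy / speed
    for b in flock:
        b.x += b.vx
        b.y += b.vy

flock = [Boid() for _ in range(30)]
for _ in range(100):
    step(flock)
```

Each boid follows only these three local rules, yet the flock as a whole drifts and regroups in a recognizably bird-like way; the complexity is emergent, not programmed in.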

Or consider fractals.  The popular Mandelbrot set can be generated with some simple rules, as demonstrated here in 13 lines of pseudocode, resulting in beautiful pictures like this:

https://i0.wp.com/upload.wikimedia.org/wikipedia/commons/thumb/a/a4/Mandel_zoom_11_satellite_double_spiral.jpg/800px-Mandel_zoom_11_satellite_double_spiral.jpg
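For illustration, here is roughly that algorithm in Python: the standard escape-time method, which iterates z → z² + c and shades each point by how quickly it escapes.  The grid bounds, iteration cap, and ASCII character ramp are arbitrary choices for a terminal rendering:

```python
def mandelbrot(c, max_iter=40):
    """Return how many iterations z -> z*z + c stays bounded (|z| <= 2)."""
    z = 0j
    for n in range(max_iter):
        z = z * z + c
        if abs(z) > 2:          # escaped: c is outside the Mandelbrot set
            return n
    return max_iter             # never escaped: c is (likely) in the set

# Render the set as ASCII art; denser characters mean slower escape.
ramp = " .:-=+*#%@"
for row in range(24):
    y = 1.2 - row * 0.1
    line = ""
    for col in range(78):
        x = -2.1 + col * 0.04
        n = mandelbrot(complex(x, y))
        line += ramp[n * (len(ramp) - 1) // 40]
    print(line)
```

The infinitely detailed boundary in the famous zoom images all comes from that one-line update rule, z → z² + c.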

Fractals can be used to generate artificial terrain for video games and computer art, such as this 3D mountain terrain generated by the software Terragen:

Terragen-generated mountain terrain

Conway’s Game of Life uses the idea of cellular automata to generate little 2D pixelated creatures that move, spawn, die, and generally exhibit crude lifelike behavior with 2 simple rules:

1. A live cell with fewer than 2 or more than 3 live neighbors dies.
2. A dead cell with exactly 3 live neighbors comes alive.

Depending on the starting conditions, there may be any number of recognizable resulting simulated organisms; some simple, such as gliders, pulsars, blinkers, glider guns, and wickstretchers, and some complex, such as puffer trains, rakes, spaceship guns, Corderships, and even objects that appear to travel faster than the maximum propagation speed of the game should allow.
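These dynamics can be sketched in a few lines of Python, using Conway’s standard rules (a live cell survives with two or three live neighbors; a dead cell with exactly three becomes alive) and a set of coordinates as an unbounded grid:

```python
def life_step(alive):
    """One generation of Conway's Game of Life on an unbounded grid."""
    counts = {}
    for (x, y) in alive:
        for dx in (-1, 0, 1):
            for dy in (-1, 0, 1):
                if dx or dy:
                    cell = (x + dx, y + dy)
                    counts[cell] = counts.get(cell, 0) + 1
    # Birth on exactly 3 neighbors; survival on 2 or 3.
    return {c for c, n in counts.items()
            if n == 3 or (n == 2 and c in alive)}

# A glider: after four generations it reappears shifted by (1, 1).
glider = {(1, 0), (2, 1), (0, 2), (1, 2), (2, 2)}
gen = glider
for _ in range(4):
    gen = life_step(gen)
print(gen == {(x + 1, y + 1) for (x, y) in glider})  # True
```

Five cells and two rules are enough to produce a pattern that “travels” across the grid indefinitely.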

Cellular automata can be extended to 3D space.  There is a video demonstration of a 3D “Amoeba” that looks eerily like a real blob of living protoplasm.

What is the point of all this?

Just that you can apply some of these ideas to the question of whether reality is continuous or digital (and thus based on bits and rules).  And end up with an interesting result.

Consider a hierarchy of complexity levels…

Imagine that each layer is 10 times “zoomed out” from the layer below.  If the root simplicity is at the bottom layer, one might ask how many layers up you have to go before the patterns appear to be natural, as opposed to artificial? [Note: As an aside, we are confusing ideas like natural and artificial.  Is there really a difference?]

The following image is an artificial computer-generated fractal image created by Softology’s “Visions of Chaos” software from a base set of simple rules, yet zoomed out from its base level by, perhaps, six orders of magnitude:

[Image: hybrid Mandelbulb fractal generated by Visions of Chaos]

In contrast, the following image is an electron microscope-generated image of a real HPV virus:

[Image: electron microscope image of the virus]

So, clearly, at six orders of magnitude out from a fundamental rule set, we start to lose the ability to discern “natural” from “artificial.”  Eight orders of magnitude should be sufficient to make natural indistinguishable from artificial.

And yet, our everyday sensory experience is about 36 orders of magnitude above the quantum level.

The deepest level that our instruments can currently image is about 7 levels (10,000,000x magnification) below everyday reality.  This means that if our reality is based on bits and simple rules like those described above, those rules may be operating 15 or more levels below everyday reality.  Given that the quantum level is 36 levels down, we have at least 21 orders of magnitude to play with.  In fact, it may very well be possible that the true granularity of reality is below the quantum level.

In any case, it should be clear that we are not even close to being equipped to visually discern the difference between living in a continuous world or a digital one consisting of bits and rules.

My Body, the Avatar

Have you ever wondered how much information the human brain can store?  A little analysis reveals some interesting data points…

The human brain contains an estimated 100 trillion synapses.  There doesn’t appear to be a finer level of structure to the neural cells, so this represents the maximum number of memory elements that a brain can hold.  Assume for a moment that each synapse can hold a single bit; then the brain’s capacity would be 100 trillion bits, or about 12.5 terabytes. There may be some argument that there is actually a distribution of brain function, or redundancy of data storage, which would reduce the memory capacity of the brain.  On the other hand, one might argue that synapses may not be binary and hence could hold somewhat more information.  So it seems that 12.5 TB is a fairly good and conservative estimate.
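The arithmetic behind that estimate, under the stated one-bit-per-synapse assumption, is simple enough to check:

```python
# Back-of-the-envelope brain capacity, assuming 1 bit per synapse.
synapses = 100e12             # ~100 trillion synapses
bits = synapses * 1           # stated assumption: 1 bit per synapse
terabytes = bits / 8 / 1e12   # 8 bits per byte, 10^12 bytes per TB (decimal)
print(terabytes)              # 12.5
```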

It has also been estimated (see “On the Information Processing Capabilities of the Brain: Shifting the Paradigm” by Simon Berkovich) that, in a human lifetime, the brain processes 3 million times that much data.  This all makes sense if we assume that most (99.99997%) of our memory data is discarded over time, due to lack of need.

But then, how would we explain the exceptional capabilities of autistic savants, or people with hyperthymesia or eidetic memory (total recall)?  It would have to be the case that the memories these individuals retrieve cannot all be stored in the brain at the same time.  In other words, memories, or the record of our experiences, are not solely stored in the brain.  Some may be, such as those most recently used, or frequently needed.

Those who are trained in Computer Science will recognize the similarities between these characteristics and the idea of a cache memory, a high speed storage device that stores the most recently used, or frequently needed, data for quick access.
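For readers without that background, here is a toy LRU (“least recently used”) cache: a small, fast store that keeps only the most recently touched items and evicts the rest, which is the behavior the analogy maps onto the brain.  The capacity and keys here are arbitrary illustrative choices:

```python
from collections import OrderedDict

class LRUCache:
    """A toy least-recently-used cache with a fixed capacity."""
    def __init__(self, capacity):
        self.capacity = capacity
        self.items = OrderedDict()

    def get(self, key):
        if key not in self.items:
            return None                    # a "forgotten" item
        self.items.move_to_end(key)        # mark as recently used
        return self.items[key]

    def put(self, key, value):
        if key in self.items:
            self.items.move_to_end(key)
        self.items[key] = value
        if len(self.items) > self.capacity:
            self.items.popitem(last=False)  # evict the least recently used

cache = LRUCache(2)
cache.put("breakfast", "toast")
cache.put("lunch", "soup")
cache.get("breakfast")           # touch it: "breakfast" is now recent
cache.put("dinner", "pasta")     # over capacity: evicts "lunch"
print(cache.get("lunch"))        # None
print(cache.get("breakfast"))    # toast
```

The least-used item quietly drops out while frequently touched items persist, which is exactly the retention pattern the post attributes to the brain.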

As cardiologist and science researcher Pim van Lommel said, “the computer does not produce the Internet any more than the brain produces consciousness.”

Why is this so hard to believe?

After all, there is no real proof that all memories are stored in the brain.  There is only research showing that some memories are stored in the brain and can be triggered by electrically stimulating certain portions of the cerebral cortex.  By the argument above, I would say that experimental evidence and logic are on the side of non-local memory storage.

In a similar manner, while there is zero evidence that consciousness is an artifact of brain function, Dr. van Lommel has shown that there is extremely strong evidence that consciousness is not a result of brain activity.  It is enabled by the brain, but not seated there.

These two arguments – the non-local seat of consciousness and the non-local seat of memories – are congruent and therefore all the more compelling for the case that our bodies are simply avatars.

Yesterday’s Sci-Fi is Tomorrow’s Technology

It is the end of 2011 and it has been an exciting year for science and technology.  Announcements about artificial life, earthlike worlds, faster-than-light particles, clones, teleportation, memory implants, and tractor beams have captured our imagination.  Most of these things would have been unthinkable just 30 years ago.

So, what better way to close out the year than to take stock of yesterday’s science fiction in light of today’s reality and tomorrow’s technology.  Here is my take:

[Table: yesterday’s sci-fi vs. today’s reality and tomorrow’s technology]

Abiotic Oil or Panspermia – Take Your Pick

Astronomers from the University of Hong Kong investigated infrared emissions from deep space and found signatures of complex organic matter everywhere they looked.

You read that right.  Complex organic molecules; the kind that are the building blocks of life!

How they are created in the stellar infernos is a complete mystery.  The chemical structure of these molecules is similar to that of coal or oil, which, according to mainstream science, come from ancient biological material.

So, there seem to be only two explanations, each of which has astounding implications.

One possibility is that the molecules responsible for these spectral signatures are truly organic, in the biological “earth life” sense of the word.  I don’t think I have to point out the significance of that possibility.  It would certainly give new credence to the panspermia theory, suggesting that we are but distant relatives or descendants of life forms that permeate the universe.  ETs are our brothers.

The other possibility is that these molecules are organic but not of biological origin; instead, they are somehow created within the star itself.  They resemble the organic molecules in coal and oil, so if such molecules can be generated non-biologically in stars, and the earth was created from the same protoplanetary disk that formed our sun, then oil and coal are probably also not created from biological organic material.

In other words, this discovery seems to lend a lot of support to the abiotic oil theory.

That or we have evidence that we are not alone.

Either way, a significant find.

Buried in the news.

There is no “Now.” But there will be.

One of our long-time Forum Members posted an excellent question: “Is there really a ‘now’?”  The mystics tell us that there is only NOW.  But I suspect they are referring to a state of reality or a state of consciousness that one only reaches when they die, or if they sit on top of a mountain contemplating their navel for a dozen or so years and get really lucky.

Back in the reality that we all know and love, I got to thinking about the reality that we all know and love.  And came to the conclusion that there is no NOW.  Here’s why:

Our interpretation of the present is really based on our short term memory, which lasts some 30 seconds or so. If we had no short term memory, we would not be able to think, plan, procreate, remember to eat, etc. In short, we would perish.

However, what is in short term memory is not NOW; it is the past.  Now can only be defined as an instant.  Or, in mathematical terms, it is t=0, or the limit as “delta t” approaches zero at t=0.  As an absolute, or an infinite concept, it could only exist in an infinite universe, which also must be continuous.  As I “tend” to believe that our universe is not infinite and is bound by the attributes of the Program (see “The Universe – Solved!”), the smallest unit of time around the concept of NOW would be a clock cycle of the Program.  If it is the Planck time, then it is 10^-43 seconds (although it could be other resolutions).  In any case, it has a duration, so it can’t be instantaneous or absolute.  Therefore, there is no NOW, only our PERCEPTION of now, which is our very short term memory.

That said, in the other realm, where consciousness “probably” goes after death, everything is NOW, as the mystics say. That is because there is no physical stuff, no brain, no short term memory, and therefore no need for time as a dimension. Hence, everything could only be NOW.

If so, no need to even fear the “five-point-palm-exploding-heart technique.”


Jim and Craig Venter Argue over Who is more Synthetic: Synthia or Us?

So Craig Venter created synthetic life.  How cool is that?  I mean, really, this has been sort of a biologist’s holy grail for as long as I can remember.  Of course, Dr. Venter’s detractors are quick to point out that Synthia, the name given to this synthetic organism, was not really built from scratch, but sort of assembled from sub-living components and injected into a cell where it could replicate.  Either way, it is a huge step in the direction of man-made life forms.  If I were to meet Dr. Venter, the conversation might go something like this:

Jim: So, Dr. Venter, help me understand how man-made your little creation really is.  I’ve read some articles that state that while your achievement is most impressive, the cytoplasm that the genome was transplanted to was not man made.

Craig: True dat, Jim.  But we all need an environment to live in, and a cell is no different.  The organism was certainly man made, even if its environment already existed.

Jim: But wait a minute.  Aren’t we all man-made?  Wasn’t that the message in those sex education classes I took in high school?

Craig: No, the difference is that this is effectively a new species, created synthetically.

Jim: So, how different is that from a clone?  Are they also created synthetically?

Craig: Sort of, but a clone isn’t a new species.

Jim: How about genetically modified organisms then?  New species created synthetically?

Craig: Yes, but they were a modification made to an existing living organism, not a synthetically created one.

Jim: What about that robot that cleans my floor?  Isn’t that a synthetically created organism?

Craig: Well, maybe, in some sense, but can it replicate itself?

Jim: Ah, but that is just a matter of programming.  Factory robots can build cars, why couldn’t they be programmed to build other factory robots?

Craig: That wouldn’t be biological replication, like cell division.

Jim: You mean, just because the robots are made of silicon instead of carbon?  Seems kind of arbitrary to me.

Craig: OK, you’re kind of getting on my nerves, robot-boy.  The point is that this is the first synthetically created biological organism.

Jim: Um, that’s really cool and all, but we can build all kinds of junk with nanotech, including synthetic meat, and little self-replicating machines.

Craig: Neither of which are alive.

Jim: Define alive.

Craig: Well, generally life is anything that exhibits growth, metabolism, motion, reproduction, and homeostasis.

Jim: So, a drone bee isn’t alive because it can’t reproduce?

Craig: Of course, there are exceptions.

Jim: What about fire, crystals, or the earth itself?  All of those exhibit your life-defining properties.  Are they alive?

Craig: Dude, we’re getting way off topic here.  Let’s get back to synthetic organisms.

Jim: OK, let’s take a different tack.  Physicist Paul Davies said that Google is smarter than any human on the planet.  Is Google alive?  What about computer networks that can reconfigure themselves intelligently?

Craig: Those items aren’t really alive because they have to be programmed.

Jim: Yeah, and what’s that little code in Synthia’s DNA?

Craig: Uhhh…

Jim: And how do you know that you aren’t synthetic?  Is it at all possible that your world and all of your perceptions could be completely under programmed control?

Craig: I suppose it could be possible.  But I highly doubt it.

Jim: Doubt based on what? All of your preconceived notions about reality?

Craig: OK, let’s say we are under programmed control.  So what?

Jim: Well, that implies a creator.  Which in turn implies that our bodies are a creation.  Which makes us just as synthetic as Synthia.  The only difference is that you created Synthia, while we might have been created by some highly advanced geek in another reality.

Craig: Been watching a few Wachowski Brothers movies, Jim?

Jim: Guilty as charged, Craig.


DNA: Evidence of Intelligent Design or Byproduct of Evolution?

DNA is a self-replicating nucleic acid that supposedly encodes the instructions for building and maintaining the cells of an organism.  With an ordered grouping of roughly three billion chemical base pairs which are identical for each cell in the organism, the unique DNA for a particular individual looks kind of like statements in a programming language.  This concept is not lost on Dr. Stephen Meyer (Ph.D., history and philosophy of science, Cambridge University), who posits that the source of information must be intelligent and therefore DNA, as information, is evidence of Intelligent Design.  He argues that all hypotheses that account for the development of this digital code, such as self-organization and RNA-first, have failed.  In a well-publicized debate with Dr. Peter Atkins (Ph.D., theoretical chemistry, University of Leicester), a well-known atheist and secular humanist, Atkins countered that information can come from natural mechanisms.  Sadly, Atkins resorted to insults and name calling, so the debate is kind of tainted, and he never got a chance to present his main argument in a methodical way because he let his anger get the best of him.  But it raised some very interesting questions, which I don’t think either side of the argument has really gotten to the bottom of.

ID’ers trot out the Second Law of Thermodynamics and state that the fact that simple molecules can’t self-replicate without violating that Law proves Intelligent Design.  But it doesn’t really.  The Second Law applies to the whole system, weighing the many instances of increased disorder against the fewer instances of increased order.  Net net, disorder tends to increase, but that doesn’t mean that there can’t be isolated examples of increased order in the universe.  That seems to leave the door open to the possibility that one such example might be the creation of self-replicating molecules.

Another point of contention is about the nature of information, such as DNA.  Meyer is wrong if he is making a blanket assertion that information can only come from intelligence.  I could argue that, given a long enough period of time, if you leave a typewriter outdoors, hailstones will ultimately hit the keys in an order that creates recognizable poetry.  So the question boils down to this – was there enough time and proper conditions for evolutionary processes to create the self-replicating DNA molecule from non-self replicating molecules necessary for creating the mechanism for life?

The math doesn’t look good for the atheists.  Dr. Robert L. Piccioni (Ph.D., physics, Stanford) says that the odds of 3 billion randomly arranged base-pairs matching human DNA are about the same as drawing the ace of spades one billion times in a row from randomly shuffled decks of cards.  Harold Morowitz, a renowned biophysicist from Yale University and author of Origin of Cellular Life (1993), declared that the odds for any kind of spontaneous generation of life from a combination of the standard life building blocks are one chance in 10^100,000,000,000 (you read that right: 1 followed by 100,000,000,000 zeros).  Famed British astronomer Sir Fred Hoyle proposed that such odds were one chance in 10^40,000, or roughly “the same as the probability that a tornado sweeping through a junkyard could assemble a 747.”  By the way, scientists generally set their “impossibility standard” at one chance in 10^50 (1 in 100,000 billion, billion, billion, billion, billion).  So the likelihood that life formed via combinatorial chemical evolution (the only theory that scientists really have) is, for all intents and purposes, zero.
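As a sanity check on the first comparison, we can put both probabilities on a log scale (a sketch assuming 4 equally likely choices per base-pair and a fair 52-card deck; the numbers themselves are far too small to print):

```python
import math

# log10 of: 3 billion random base-pairs (4 choices each) matching a
# given sequence, vs. drawing the ace of spades (1 in 52) a billion
# times in a row from reshuffled decks.
log10_dna = -3e9 * math.log10(4)      # about -1.8e9
log10_cards = -1e9 * math.log10(52)   # about -1.7e9
print(log10_dna, log10_cards)
```

The two exponents land within a few percent of each other, so the card-drawing analogy is at least the right order of magnitude (granted, neither calculation says anything about non-random evolutionary pathways).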

Atkins, Dawkins, and other secular humanists insist that materialism and naturalism are pre-supposed and that there is no argument for the introduction of the logic of intelligence into science.  That sounds to me to be pretty closed minded, and closes the door a priori on certain avenues of inquiry.  Imagine if that mentality were applied to string theory, a theory which has no experimental evidence to start with.  One has to wonder why science is so illogically selective with respect to the disciplines that it accepts into its closed little world.

My interest in this goes beyond this specific debate.  I have a hobby of collecting evidence that our reality is programmed.  I’m not sure whether DNA has a place in that collection yet.  It will definitely need a little more thought.

 


Entropy and Puppies, like a Hand and a Glove

Ah yes, the good old 2nd Law of Thermodynamics. The idea that the total disorder of a system, e.g. the universe, always increases.  Or that heat always flows from hot to cold.  It’s why coffee always gets cold, why money seems to dissipate at a casino, why time flows forward, why Murphy had a law, why cats and dogs don’t tend to clean up the house.

Ultimately, due to this rather depressing physical law, the universe will die by “heat death,” where it reaches a state of absolute zero: no more heat, no motion of particles.  Don’t worry, that’s not predicted for another 10^100 (a googol) years.  But I always wondered: is it always the case, or can entropy decrease in certain circumstances?

Got a spare fortnight?  Google “violations of the second law of thermodynamics.”  Personally, I rather like Maxwell’s idea that it is a statistical argument, not an absolute one.  “Maxwell’s Demon” is that hypothetical device that funnels hot molecules in one direction and cold ones in the opposite, thereby reversing the normal flow of heat.  Could a nanotech device do that some day?  Yes, I know that there has to be energy put into the system for the device to do its work, thereby increasing the size of the system upon which the 2nd Law holds.  But, even without the demon, aren’t there statistical instances of 2nd Law violation in a closed system?  Just as there is an infinitesimal probability that someone’s constituent atoms suddenly line up in such a manner that they can walk through a door (see recent blog topic), so could a system become more coherent as time moves into the future.

What about lowering temperature to the point where superconductivity occurs?  Isn’t that less random than non-superconductivity?  One might argue that the energy that it takes to become superconductive exceeds the resulting decrease in entropy.  However, I would argue that since the transition from conductive to superconductive occurs abruptly, there must be a time period, arbitrarily small, during which you would watch entropy decrease.

There are those who cite life and evolution as examples of building order out of chaos.  Sounds reasonable to me, and the arguments against the idea sound circular and defensive.  However, it all seems to net out in the end.  Take a puppy, for instance.  Evolutionary processes worked for millions of years to create the domestic dog.  Entropy-decreasing processes seem to be responsible for the formation of a puppy from its original constituents, sperm and an egg.  But then the puppy spends years ripping up your carpet, chewing the legs of the furniture, and ripping your favorite magazines into little pieces; in short, increasing the disorder of the universe.  Net effect?  Zero.
