Dark Matter, Parallel Worlds, and Bizarro Neighbors

It turns out that it is very likely that an unseen world is occupying the same space that we do.  What goes on there?  Are there Bizarro humans living with Bizarro pets in Bizarro homes, working at Bizarro jobs, just like we do?

Astronomers who have studied the motion of galaxies and clusters of galaxies have noticed that such large astronomical objects rotate too fast for the amount of matter inferred from their size, distance, and luminosity.  Further, in order for the universe to be flat, as it is observed to be, there must be much more matter than is currently visible.  In fact, by some estimates, observable matter accounts for less than 1% of the mass of the universe.  The rest, therefore, must be dark – hence the name “dark matter.”  Many varieties of dark matter have been proposed, including exotic dark matter consisting of various high-energy loose particles such as neutrinos and theoretical particles called WIMPs (weakly interacting massive particles).  Also on the menu of candidates for dark matter are big chunky masses called MACHOs (massive compact halo objects – don’t astronomers have a great sense of humor?), which include brown dwarfs, planets, and black holes.  Certain studies of the structure of the early universe, however, have demonstrated that MACHOs cannot account for more than a fraction of the total dark matter.
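
The rotation-curve argument can be sketched with a quick back-of-envelope calculation.  Newtonian gravity says that, beyond the visible mass, orbital speeds should fall off as 1/√r – yet measured speeds stay roughly flat, implying unseen mass.  The numbers below are illustrative placeholders, not fitted to any real galaxy:

```python
import math

G = 6.674e-11            # gravitational constant, m^3 kg^-1 s^-2
M_VISIBLE = 2e41         # visible mass of a hypothetical galaxy, kg (~1e11 suns)

def keplerian_speed(r_m):
    """Orbital speed (m/s) if only the visible mass were present."""
    return math.sqrt(G * M_VISIBLE / r_m)

# Beyond the luminous disk, Newtonian gravity predicts speeds falling as 1/sqrt(r)...
for r_kpc in (10, 20, 40):
    r_m = r_kpc * 3.086e19              # kiloparsecs to meters
    print(f"{r_kpc} kpc: predicted {keplerian_speed(r_m) / 1000:.0f} km/s")

# ...but measured rotation curves stay roughly flat out to large radii,
# so the enclosed mass must keep growing with radius: unseen (dark) matter.
```

Doubling the radius should halve the kinetic story (speed drops by √2 each doubling); instead, real curves barely budge.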

As a result, WIMPs are winning the battle.  Anomalous results from ATIC (the Advanced Thin Ionization Calorimeter, a balloon-borne experiment in Antarctica), PAMELA (an Italian space mission, the Payload for Antimatter Matter Exploration and Light-nuclei Astrophysics), and INTEGRAL (a European gamma-ray satellite, the INTErnational Gamma-Ray Astrophysics Laboratory) are starting to narrow down the kinds of particles that could be responsible.  See Kaluza-Klein particles for more (also see New Scientist article).

Interestingly, this has some fascinating implications.  The fact that WIMPs barely interact with ordinary matter means we don’t even know they are there.  Because the measurements imply that they are integrated into our space just as ordinary matter is, they are effectively right next to us, and we have no way of detecting them.

But what form are they in?  Is it a sea of particles?  Or do they clump like ordinary matter?  The answer appears to be the latter.  According to Hubble data, dark matter clumps at all scales (see Science Daily article), which means it looks pretty much like ordinary matter.

What does all this mean?  All indications are that there is tons (figuratively speaking) of invisible, undetectable material existing right in our own space.  In fact, by all accounts, there is about 7 times as much as our common ordinary matter.  For all we know, there are dark desks, dark Volvos, and dark versions of Donald Trump’s hair.


Inferring the Existence of the Soul?

The following is an excerpt from my book, “The Universe – Solved!”  It is a thought experiment that seems to prove the existence of the soul…

Given that in the primate world there is a continuum of neural complexity from lemurs to humans, it is safe to say that somewhere there is a species with roughly half the neural complexity of a human.  Per the atheistic way of thinking, such a species would therefore have half the consciousness of a human.  Let’s arbitrarily define the level of neural complexity on a scale from 0 to 1, 1 being human.  Our primate friend would then have a neural complexity, and therefore a consciousness, of 0.5.  Later in this chapter we will present the strong evidence for the distributed nature of the brain; namely, that there is no single specific place where a memory resides or where a specific component of a visual image is captured.  From the cases involving brain tumors and brain loss due to injuries, it is clear that we could remove half of a human’s brain and that person would continue to be conscious.  Maybe only half as conscious as before, not unlike waking up on a beach in Cancun during spring break after a night of bad tequila.

Here comes the thought experiment part.  Imagine the possibility of a brain transplant.  It’s not hard to do, given that the brain is simply an organ, like the many others that are routinely subject to transplant with today’s surgical techniques.  There are certainly a lot more connections to a brain compared with, say, a liver, but it’s really just a matter of time before it is possible and then ultimately perfected.  Just as the cloning procedure is working its way up the species complexity scale (lab mice, sheep, humans), so will the brain transplant procedure.  A head transplant, for example, was performed by Case Western Reserve University neurosurgeon Dr. Robert White on a rhesus monkey in 1970.  It survived for eight days and exhibited many normal functions.  Cross-species transplants, also known as xenotransplants, have long since been proven to be possible, with chimpanzee kidneys in humans, pig livers in humans, cynomolgus monkey hearts in baboons, and baboon hearts in humans all achieving some level of success.  The main reasons that experimentation and advances in that field are slow to progress are the controversial ethical issues (is it right for pigs to become organ factories?) and the fear of cross-species viral infections.  But, ethical and safety issues aside, it is reasonable to assume that with sufficient technology, it will be possible to transplant a human brain or a portion thereof into our primate that nominally has a 0.5 consciousness level.  Let’s further imagine that the process could become fairly straightforward, like plugging a new motherboard into a computer.  As long as the interfaces line up from a physical and networking standpoint, the procedure is “plug and play.”

So let’s imagine our human subject, Nick, and two lesser primates, Magilla and Kong.  We remove Nick’s brain and attach it to Magilla’s body.  Nick should retain his memories and consciousness, but feel really different, since his sensory input is completely new.  We would have to conclude that he maintained a continuous, albeit altered, stream of identity.  If Karl Pribram and others are right, we could theoretically put half of Nick’s brain into Magilla and the other half into Kong.  Where is his identity now?  Which body does the old Nick feel that he is in?  If we took the biological reductionist point of view, we would have to say that his consciousness is in both primates.  That must be very confusing, receiving two separate sets of sensory stimuli and two distinct developing sets of new memories.  Given that the state of the two primates is fairly consistent with the state of two similar natural primates, namely that they each have a brain of 0.5 neural complexity, why should there be a single conscious identity occupying both bodies in the case of Kong and Magilla, but two distinct identities in the natural case?  My answer is simple, invoking Occam’s Razor.  Nick’s soul simply chose which primate to move into along with his brain.  Alternately, his soul could have said, “This is ridiculous.  I’m returning to the spirit domain.  Let some other souls fight over those abominations.”


String Stars – You Heard It Here First!

I remember the days when we were all amazed at the concept of a white dwarf star: the final evolutionary state of most stars after their gravitational collapse.  It can’t collapse any further due to something called electron degeneracy pressure.  I always visualized it by imagining atoms jammed together to the point where their electron shells were nearly touching.  A white dwarf’s density is such that a teaspoonful would weigh as much as an elephant.  White dwarfs are about the size of the Earth.

But there was an even more bizarre concept – the neutron star.  Still more dense, it was proposed by Baade and Zwicky in 1933, a year after the neutron was discovered.  When a star with more mass than the Chandrasekhar limit of 1.44 solar masses collapses at the end of its life, the pressures are great enough to overcome even the electron degeneracy that holds a white dwarf together.  In the late 1960s, one was actually observed, and by the 1970s the concept was well accepted by most astronomers.  Neutron stars can’t collapse any further due to the Pauli exclusion principle.  I always visualized them by imagining neutrons jammed together to the point where they were nearly touching.  A neutron star is maybe a billion times denser than a white dwarf.  Neutron stars are about the size of Manhattan.
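
Those density comparisons are easy to sanity-check with rough numbers.  In this sketch the “teaspoon” is 5 ml and the stellar masses and radii are textbook-typical approximations, not measurements of any particular star:

```python
import math

M_SUN = 2e30            # kg, approximate solar mass
R_WD = 6.4e6            # m, white dwarf ~ Earth-sized
R_NS = 1e4              # m, neutron star ~ 10 km across

def density(mass_kg, radius_m):
    """Mean density of a uniform sphere, in kg/m^3."""
    return mass_kg / ((4 / 3) * math.pi * radius_m ** 3)

rho_wd = density(0.6 * M_SUN, R_WD)     # typical white dwarf: ~0.6 solar masses
rho_ns = density(1.4 * M_SUN, R_NS)     # typical neutron star: ~1.4 solar masses

teaspoon_kg = rho_wd * 5e-6             # mass of one 5 ml teaspoon of the stuff
print(f"teaspoon of white dwarf: ~{teaspoon_kg / 1000:.1f} tonnes")
print(f"neutron star / white dwarf density ratio: ~{rho_ns / rho_wd:.0e}")
```

A few tonnes per teaspoon – elephant territory – and a density ratio in the neighborhood of a billion, consistent with the visualizations above.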

But then, I thought, what if the mass of the star was so large that even the neutrons collapsed into their constituents, quarks?  Well, I don’t know if anyone else had such an idea and now, doing a little web research, I can’t seem to put my finger on when such a concept was first proposed.  But I’m starting to see a buzz about quark stars.  In 2008, astrophysicists Denis Leahy and Rachid Ouyed proposed the quark star as the result of super-supernovae (http://www.space.com/scienceastronomy/080603-aas-neutron-quark.html).  And now, astrophysicists from the University of Hong Kong have presented evidence of a quark star in super-supernova SN 1987A (http://www.newscientist.com/article/mg20126964.700-quark-star-may-hold-secret-to-early-universe.html).

So, now I wonder, what next?  Quarks probably have their own sub-quark constituents.  String theorists say quarks are made of vibrating strings.  If so, could a massive enough star, or a dense enough hunk of matter, overcome “quark degeneracy” and collapse into a “String Star?”  A star consisting of string material so compressed that it can’t vibrate anymore?

So I searched the web and am proud to say that I have found no such proposal.  So, I hereby claim it.  Someday, someone will lay claim to discovering a string star.  You heard it here, first.  🙂

 


First Evidence of the Multiverse

A recent article in New Scientist offers us cosmology enthusiasts an exciting new possibility – evidence of another universe.  It seems that some recent measurements demonstrate that certain clusters of galaxies are moving in the same direction against background space, which violates the idea that the universe should be the same in all directions and instead implies a clump of excess density outside of our observational range.  Like another universe, maybe?

The phenomenon has been dubbed dark flow – the latest in the series of “dark” monikers, joining predecessors “matter” and “energy” – and scientists are baffled by its cause, just as Copernicus, Galileo, and Hubble were baffled before us.  Some 70 years ago, Edwin Hubble wrote: “At the last dim horizon, we search among ghostly errors of observations for landmarks that are scarcely more substantial. The search will continue. The urge is older than history. It is not satisfied and it will not be suppressed.”

No surprise to us programmed realitists.  As long as our species is blissfully unaware of the big picture, the cosmic programmers must keep at least one step ahead of our best instrumentation and keep us fascinated with hints of what lurks just beyond our horizon.


Noise in Gravity Wave Detector may be first experimental evidence of a Programmed Reality

GEO600 is a large gravitational wave detector located in Hannover, Germany.  Designed to be extremely sensitive to fluctuations in gravity, its purpose is to detect gravitational waves from distant cosmic events.  Recently, however, it has been plagued by inexplicable noise or graininess in its measurement results (see article in New Scientist).  Craig Hogan, director of Fermilab’s Center for Particle Astrophysics, thinks that the instrument has reached the limits of spacetime resolution and that this might be proof that we live in a hologram.  Invoking physicists Leonard Susskind and Gerard ‘t Hooft’s theory that our 3D reality may be a projection of processes encoded on the 2D surface of the boundary of the universe, he points out that, like a common hologram, the graininess of our projection may be at much larger scales than the Planck length (10^-35 meters), such as 10^-16 meters.

Crazy?  Is it any stranger than living in 10 spatial dimensions, living in a space of parallel realities, invisible dark matter all around us, reality that doesn’t exist unless observed, or any of a number of other mind-bending theories that most physicists believe?  In fact, as fans of this website are well aware, such experimental results are no surprise.  Just take a look at the limits of resolution in my Powers of 10 simulation in the Programmed Reality level: Powers of 10.  I arbitrarily picked 10^-21 meters, but it could really be any scale where it happens.

If our universe is programmed, however, it is probably done in such a way as to be unobservable for the most part.  Tantalizing clues like GEO600 noise give us all something to speculate about.  But don’t be surprised if the effect goes away when the programmers apply a patch to improve the reality resolution for another few years.

Thanks to my photogenic cat, Scully, for providing an example of grainy reality…

The Singularity Cometh? Or not?

There is much talk these days about the coming Singularity.  We are about 37 years away, according to Ray Kurzweil.  For some, the prospect is exhilarating – enhanced mental capacity, the ability to experience fantasy simulations, immortality.  For others, the specter of the Singularity is frightening – AIs running amok, all Terminator-like.  Then there are those who question the entire idea.  A lively debate on our forum triggered this post as we contrasted the positions of transhumanists (aka cybernetic totalists) and singularity-skeptics.

For example, Jaron Lanier’s “One Half of a Manifesto,” published in Wired and edge.org, suggests that our inability to develop advances in software will, at least for now, prevent the Singularity from happening at the Moore’s Law pace.  One great quote from his demi-manifesto: “Just as some newborn race of superintelligent robots are about to consume all humanity, our dear old species will likely be saved by a Windows crash. The poor robots will linger pathetically, begging us to reboot them, even though they’ll know it would do no good.”  Kurzweil countered with a couple of specific examples of successful software advances, such as speech recognition (which is probably due more to algorithm development than to software engineering techniques).

I must admit, I am also disheartened by the slow pace of software advances.  Kurzweil is not the only guy on the planet to have spent his career living and breathing software and complex computational systems.  I’ve written my share of gnarly assembly code, neural nets, and trading systems.  But it seems to me that it takes almost as long to open a Word document, boot up, or render a 3D object on today’s blazingly fast PCs as it did 20 years ago on a machine running at less than 1% of today’s clock rate.  Kurzweil claims that we have simply forgotten: “Jaron has forgotten just how unresponsive, unwieldy, and limited they were.”

So, I wondered, who is right?  Are there objective tests out there?  I found an interesting article in PC World that compared the boot-up time of a 1981 PC to that of a 2001 PC.  Interestingly, the 2001 machine was over three times slower to boot (51 seconds) than its 20-year predecessor (16 seconds).  My 2007 Thinkpad – over 50 seconds.  Yes, I know that Vista is much more sophisticated than MS-DOS and therefore consumes much more disk and memory and takes that much more time to load.  But really, are those 3D spinning doodads really helping me work better?

Then I found a benchmark comparison on the performance on 6 different Word versions over the years.  Summing 5 typical operations, the fastest version was Word 95 at 3 seconds.  Word 2007 clocked in at 12 seconds (in this test, they all ran on the same machine).
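
For anyone who wants to reproduce this kind of comparison, a minimal timing harness is all it takes.  The workload below is a stand-in I made up, not the actual Word benchmark:

```python
import time

def benchmark(fn, repeats=5):
    """Return the best wall-clock time over several runs, in seconds."""
    best = float("inf")
    for _ in range(repeats):
        start = time.perf_counter()
        fn()
        best = min(best, time.perf_counter() - start)
    return best

# Stand-in workload: building and sorting a large list, analogous to
# timing an "open document" or "boot" operation end to end.
workload = lambda: sorted(range(1_000_000), reverse=True)

print(f"best of 5: {benchmark(workload):.3f} s")
```

Taking the best of several runs filters out one-off interference from other processes, which is why most benchmark suites report it alongside the average.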

In summary, software has become bloated.  Developers don’t think about performance as much as they used to because memory and CPU speed are cheap.  Instead, the trend in software development is layers of abstraction and frameworks on top of frameworks.  Developers have become increasingly specialized (“I don’t do ‘Tiles’, I only do ‘Struts’”) and very few get the big picture.

What does this have to do with the Singularity?  Simply this – With some notable exceptions, software development has not even come close to following Moore’s Law in terms of performance or reliability.  Yet, the Singularity predictions depend on it.  So don’t sell your humanity stock anytime soon.

 


Does the Ethane lake on Titan support the abiotic oil theory?

Although shallow oil wells were drilled in China as early as the 4th century, the first commercial oil well was drilled in Canada in 1858 at the height of the industrial revolution.  Since then our use of and reliance upon oil have skyrocketed.  Also since then there has been a continuous debate on the origin of oil.  In one corner, weighing in at 25 billion barrels a year, we have the biogenic theory, aka dead plants and animals.  In the other corner, weighing in at 900 billion gallons a year, we have the abiotic theory, aka chemical reactions inside the Earth.

The “fossil fuel” theory was first proposed by Russian scientist Mikhailo Lomonosov in 1757 who suggested that bodies of animals from prehistoric times were buried in sediments and were transformed into hydrocarbons due to extreme pressure and temperature forces over millions of years.  The argument is supported by sound biochemical processes, such as catagenesis.  In addition, the evidence of organic pollen grains in petroleum deposits implies (but does not prove) organic origin.

The abiogenic or abiotic theory actually has its origins in the 1800s, when it was proposed by French chemist Marcellin Berthelot and Russian chemist Dmitri Mendeleev.  According to their theory, hydrocarbons are primordial in origin and were formed by non-biological processes in the earth’s crust and mantle.  The theory received a modern boost from Russian geologist Kudryavtsev, who studied Canadian oil sources in the 1950s, and from Ukrainian scientist Chekaliuk, whose thermodynamic calculations in the 1960s arrived at the same conclusion.  The late, esteemed planetary scientist Thomas Gold of Cornell University (from whom I once took a course in astronomical theories) added to the evidence in his book “The Deep Hot Biosphere.”  The theory has also attained laboratory support via experiments at Gas Resources Corporation in Houston, Texas, which produced octane and methane by subjecting marble, iron oxide, and water to temperature and pressure conditions similar to those 60 miles below the surface of the earth.  Also, deep drilling around the world has discovered oil at depths and in places where there should never have been biological remains.  Referring to natural gas wells drilled by the GHK Company in Oklahoma at 30,000 feet and Japanese wells at 4300 meters, Dr. Jerome Corsi (a political scientist with a Ph.D. from Harvard University) noted:

“Even those who might stretch to argue that even if no dinosaurs ever died in sedimentary rock that today lies 30,000 feet below the surface, might still argue that those levels contain some type of biological debris that has transformed into natural gas. That argument, a stretch at 30,000 feet down, is almost impossible to make for basement structure bedrock. Japan’s Nagaoka and Niigata fields produce natural gas from bedrock that is volcanic in nature. What dinosaur debris could possibly be trapped in volcanic rock found at deep-earth levels?”

Some oil reserves even seem to have the ability to be automatically refilled, like a drink at a burger joint.  The Gulf of Mexico oil field Eugene Island 330, for example, saw its production drop from 15,000 barrels a day in 1973 to 4,000 barrels a day in 1989, and then spontaneously reverse; it was pumping 13,000 barrels a day of “different aged” crude in 1999.  In fact, according to Christopher Cooper of the Wall Street Journal, “between 1976 and 1996, estimated global oil reserves grew 72%, to 1.04 trillion barrels.”  Considering the doubling of reserves in the Middle East alone, University of Tulsa professor Norman Hyne noted that “it would take a pretty big pile of dead dinosaurs and prehistoric plants to account for the estimated 660 billion barrels of oil in the region.”

The argument is all very interesting and gets quite political as one might imagine.  But my interest revolves more around the basic question of why oil is even there at all.  Both sides propose some fairly complex theories to account for the very existence of petroleum, let alone its uncanny ability to refill known reserves automatically.  Doesn’t it almost seem like it was placed there just for our use? (see much more on Programmed Reality elsewhere on this site)

And now, there is the fact that some hydrocarbons, like methane, are known to occur throughout the solar system on supposedly lifeless planets.  Take, for example, the most recent announcement in “Nature” and “Scientific American” that a Lake Ontario-sized lake has been discovered on Saturn’s moon Titan that is composed of hydrocarbons, specifically liquid ethane.  By some estimates, the contents of this lake could be equivalent to as much as 9 trillion barrels of oil.  Even NASA suggests that Titan could have “hundreds of times more liquid hydrocarbons than all the known oil and natural gas reserves on Earth.”

Anybody see anything wrong with this picture?  Were there dinosaurs on Titan?

Doubtful!

Therefore, it seems to me, Titan gives the abiotic theory of oil a fairly sizeable boost.

(apologies to those who have read my book, “The Universe-Solved”, as much of the background on this topic comes verbatim therefrom)


Roger Penrose Agrees with Me: 2+2 may not = 4!

One of the sections of “The Universe – Solved!” that generated a bit of controversy was my assertion that there is really nothing that we can know with conviction to be true.  An excerpt:

“2+2=4?  Not in Base 3, where 2+2=11.  In Base 10 (or any base >4), 2+2=4 by convention, but only in an abstract way, and not necessarily always true in the real world.  If you add 2 puddles of water to 2 puddles of water, you still have 2 (albeit larger) puddles of water.  For a more conventional example, a 2-mile straight line laid end-to-end with another 2-mile straight line will not add up to exactly 4 miles in length due to relativity and the curvature of space-time in all locales.  Therefore, 2+2=4 can not be universally true.”
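
The base-3 point is easy to verify mechanically.  Here is a small sketch (the `to_base` helper is mine, just for illustration, and handles bases 2 through 10):

```python
def to_base(n, b):
    """Render a non-negative integer n in base b (2 <= b <= 10) as a string."""
    digits = []
    while True:
        n, r = divmod(n, b)
        digits.append(str(r))
        if n == 0:
            break
    return "".join(reversed(digits))

# The quantity 2 + 2 is written "4" in decimal but "11" in base 3 --
# the same amount in a different notation, as int() confirms.
print(to_base(2 + 2, 3))        # -> 11
print(int("11", 3))             # -> 4
```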

In addition, you have no way of knowing whether the convention that 2+2=4 is true only in the false reality that we think we are in, and not in the real one.  Again, from the book: “So, maybe all we can know for sure is what is happening to us at this exact instant.  Then again, how do we know that we aren’t in a dream right now???  So, the set of things that are 100% true is simply the null set!”

Some readers have argued with these assertions.

So, imagine my pleasure when I read the following quote in the July 26 – August 1 issue of New Scientist magazine by esteemed mathematician and physicist Roger Penrose: “Do we know for certain that 2 plus 2 equals 4?  Of course we don’t.  Maybe every time everybody in the whole world has ever done that calculation and reasoned it through, they’ve made a mistake.  Maybe it isn’t 4, it’s really 5.  There is a very, very small chance that this has happened.”  His argument is based on the logic of reason, which was different from my argument, but the result was the same nonetheless.

Thank you, Roger, for your enlightened point of view.  I would gladly send you a free autographed book.  Please send me your address.  🙂


Reality Doesn’t Exist, according to the latest research

A team of physicists in Vienna has conducted a set of “reality” experiments that prove, to a level of 80 orders of magnitude, that reality doesn’t exist unless you observe it.  In other words, in case you ever doubted the Schrödinger’s Cat thought experiment, doubt no longer.  It seems that experimental evidence has confirmed that we create our own reality by looking at it, measuring it, or observing it.  The details are here.

The results of many recent experiments twist our perceptions of reality even more.  Studies by Helmut Schmidt, Elmar Gruber, Brenda Dunne, Robert Jahn, and others have shown, for example, that humans are actually able to influence past events (aka retropsychokinesis, or RPK), such as pre-recorded (and previously unobserved) random number sequences.  No huge surprise to me, since I question everything about our conventional views of reality.  But I still think the evidence is fascinating and probably a bit unnerving, to say the least, to the majority of those out there who don’t typically consider such things.  Cause and effect, and reality itself, are certainly not what they seem.

What could be the explanation?  Certainly, more experiments to probe the depths of reality are needed.  But that doesn’t stop us from speculating.  Once again, Programmed Reality offers a perfect explanation.  Assuming that the programmed construct can detect “observation” (which, in principle, does not appear to be that difficult of a process), all the program has to do is the following:

if (observed):
    select result from a subset of coherent results
else:
    randomize result

For example, in the classic reality experiment, pairs of photons are generated which are “entangled” by virtue of having been generated from the same reaction.  Those photons can be separated by large distances and then a property of one of them is measured.  The act of measuring the property of one photon immediately determines the property of the other photon, even if the two are so far apart that no signal limited by the speed of light could tell one what is happening to its twin.  In the Programmed Reality model, however, the properties of the two photons can be related programmatically.  Once an experiment determines one property, the program sets the other photon’s property accordingly.  The program is aware of the observation and can be in full control of the properties of the paired particles.
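
To make that concrete, here is a toy sketch of how a program could correlate paired “photons.”  All names are hypothetical, and note that this simple shared-record trick reproduces only perfect anti-correlation, not the full quantum statistics that real experiments exhibit:

```python
import random

class PhotonPair:
    """Toy model: the 'program' keeps one hidden shared record per pair."""
    def __init__(self):
        self._outcome = None               # undetermined until first observed

    def measure(self, which):
        # The first measurement fixes the shared record; afterwards the
        # program simply serves the correlated value for the twin,
        # no matter how far apart the "photons" are.
        if self._outcome is None:
            self._outcome = random.choice(["up", "down"])
        if which == "A":
            return self._outcome
        return "down" if self._outcome == "up" else "up"

pair = PhotonPair()
a, b = pair.measure("A"), pair.measure("B")
print(a, b)                                # always anti-correlated
```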

For the RPK effect…

when (observed):
    set result from archive to a subset of coherent results

For an example of this effect, imagine a set of random numbers generated programmatically and stored in some sort of archive.  The archive, of course, being a product of Programmed Reality, is under full control of the program.  The archive is not observed prior to the experiment and the subjects perform mass consciousness experiments on the data.  The program measures the level of “coherence” of the consciousness in the experiment and then sets the correlation of the stored numbers according to some algorithm, formula, or table.  When the experimenters unveil the data, lo and behold, they are not truly random, but rather, appear to be affected by the consciousness experiment.  A simple software algorithm can make this work!
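
A sketch of such an archive, with the “coherence” measurement reduced to a single made-up parameter (all names hypothetical):

```python
import random

class Archive:
    """Toy 'programmed' archive: the numbers are only finalized when unveiled."""
    def __init__(self, n):
        self.n = n
        self.unveiled = None

    def unveil(self, coherence=0.0):
        # coherence in [0, 1]: 0 -> pure noise, 1 -> fully biased toward 1s.
        # The bias is applied at unveiling time, so the "pre-recorded" data
        # can reflect a consciousness experiment run after it was recorded.
        if self.unveiled is None:
            p = 0.5 + 0.5 * coherence
            self.unveiled = [1 if random.random() < p else 0
                             for _ in range(self.n)]
        return self.unveiled

archive = Archive(10_000)
data = archive.unveil(coherence=0.2)      # a "successful" RPK session
print(sum(data) / len(data))              # noticeably above 0.5
```

The key point of the sketch: because finalization happens at unveiling, no retrocausality is needed inside the program, which is exactly the simplification the paragraph above describes.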

The interesting question, though, is “What is the motivation behind the program?”  Why would it have such an effect?  Perhaps the answer lies in the idea that sentient beings do truly create their reality.  Much like “Sim City,” where the players create their reality, perhaps our reality is created according to a complex set of rules and algorithms, which include such attributes as intent and observation.

This doesn’t prove the validity of Programmed Reality, but I have to wonder: how many anomalies does the theory have to solve for it to be seriously considered?  😉


Would it really be that bad to find life in our Solar System?

Nick Bostrom wrote an interesting article for the MIT Technology Review about how he hopes that the search for life on Mars finds nothing. In it, he reasons that inasmuch as we haven’t come across any signs of intelligent life in the universe yet, advanced life must be rare. But since conditions for life aren’t particularly stringent, there must be a “great filter” that prevents life from evolving beyond a certain point. If we are indeed alone, that probably means that we have made it through the filter. But if life is found nearby, like in our solar system, then the filter is probably ahead of us, or at least ahead of the evolutionary stage of the life that we find. And the more advanced the life form that we find, the more likely that we have yet to hit the filter, which implies ultimate doom for us.

But I wonder about some of the assumptions in this argument. He argues that intelligent ETs must not exist because they most certainly should have colonized the galaxy via von Neumann probes but apparently have not done so because we do not observe them. It seems to me, however, that it is certainly plausible that a sufficiently advanced civilization can be effectively cloaked from a far less advanced one. Mastery of some of those other 6 or 7 spatial dimensions that string theory predicts comes to mind. Or invisibility via some form of electromagnetic cloaking. And those are only early 21st century ideas. Imagine the possibilities of being invisible in a couple hundred years.

Then there is the programmed reality model. If the programmers placed multiple species in the galaxy for “players” to inhabit, it would certainly not be hard to keep some from interacting with each other, e.g. until the lesser civilization proves its ability to play nicely. Think about how some virtual reality games allow the players to walk through walls. It is a simple matter to maintain multiple domains of existence in a single programmed construct!  More support for the programmed reality model?…
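
The domain idea is just a visibility filter, familiar from multiplayer game engines.  A minimal sketch, with all names hypothetical:

```python
class Entity:
    """A being placed in the construct, tagged with its domain of existence."""
    def __init__(self, name, domain):
        self.name, self.domain = name, domain

def visible_to(observer, entities):
    """An entity is perceivable only if it shares the observer's domain."""
    return [e for e in entities if e.domain == observer.domain]

world = [Entity("human", "domain-1"),
         Entity("advanced ET", "domain-2"),
         Entity("Mars rover", "domain-1")]

human = world[0]
print([e.name for e in visible_to(human, world)])   # the ET is filtered out
```

Promoting a civilization that "plays nicely" is then just a matter of rewriting its domain tag, with no change to the underlying world state.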

(what do you think about the possibilities of life elsewhere? take our polls!)
