You Are the Product of 4 Billion Years of Evolutionary Success… Or Not

“Dude, you’re a dumb-ass.  I’ve got the perfect meme for you:”

[Image: the “you are the product of 4 billion years of evolutionary success” wolf meme]

Ha, yeah, that’s funny, man.  Wait, what am I, the wolf?  Are you the wolf?  I mean, I woulda thought you’d go the “humans are the pinnacle of evolution” route, with our big brains and upright stance and Twitter and shit.  But wolves are cool too.  I mean really, so is everything.  When you think about it, each individual organism out there in the world is the “product of 4 billion years of evolutionary success.”  (Well, more like 3.5 billion years, but what’s 13% between friends?)  Innumerable generations through unfathomable ages striving, struggling, eating, fucking; all so that bumblebee can buzz by, or the wolf can chow on that rabbit, or you and I can have this conversation.  Everything alive today is the culmination of an incomprehensibly extensive, geologic-scale lineage.

But yeah, human beings are clearly the most successful species on Earth.  Right?  How do you define “success?”  Is it sheer complexity?  We got that down (#awesome).  Okay, so what?  Remember what Stephen Jay Gould pointed out in his book Full House about the “Drunkard’s Walk” (which is also the title of a Leonard Mlodinow book about statistical randomness).  Imagine you get hammered and stumble two steps out the door of the bar.  You’re soused, so you don’t know where you’re going; you’re as likely to take a step forward as a step backward.  You can’t go too far back or you’ll hit the wall of the bar.  That’s like the lower limit on complexity.  So you stagger around and eventually, just by chance, you’ll end up way over at the next building, the upper limit of complexity.  Is that an achievement, you fucking lush?  Or was it just that enough time elapsed that you were able to cover all the possibilities?
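Gould’s point is easy to see in a toy simulation.  Here’s a quick Python sketch (mine, not Gould’s; the step count and starting position are arbitrary): a walker with zero forward bias, penned in by a wall on one side, still wanders out to impressive distances given enough steps.

```python
import random

def drunkards_walk(steps=100_000, wall=0):
    """Unbiased random walk with a reflecting wall at the low end (the bar),
    standing in for a minimum possible complexity."""
    position = 1          # start just outside the bar, like the first simple cells
    farthest = position
    for _ in range(steps):
        position += random.choice([-1, 1])   # forward or backward, 50/50
        position = max(position, wall)       # can't stagger back through the wall
        farthest = max(farthest, position)
    return position, farthest

random.seed(42)
final, farthest = drunkards_walk()
print(f"ended at {final}, at one point staggered out to {farthest}")
```

No ambition, no direction; the wall plus elapsed time does all the work, and the far edge of the stagger (read: maximum complexity) creeps outward anyway.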

No, asshole, I’m not saying that evolution is random.  Individual mutations are random, but the ones that best adapt an organism to its particular environment are the ones that get selected for.  “Particular” being the important word there.  To oversimplify, wooly coats won’t help at the Equator and wings do you no good underground.  So is any one trait “better” than another?  Depends on where you are.  Sometimes complexity even hurts you.  Snakes ditched their limbs because they likely hindered burrowing, and cavefish traded their eyes for other weapons (unrelated to the Stonecutters’ machinations).  And what if the other guy is ramping up the fitness as fast as you are?  Gazelles get faster, so lions get faster, so gazelles get nimbler, etc., etc.  That doesn’t sound like progress or success.  That sounds a lot like running in place.

All right, forget about success, how about DOMINANCE!  We are EVERYWHERE!  And there’s a lot of us!  Seven billion big animals running around.  We’ve filled ecological niches from the rainforest to the tundra, but bacteria and even simpler forms of life have found ways to withstand the pressures at the bottom of the ocean and the scalding conditions in thermal springs.  They’ve obviously got us beat on pure numbers, and our big bodies don’t stack up to their overall biomass, either.  Bacteria globally outweigh us by somewhere around 5,000 times.

But we win because we are FINISHED!  At the peak of our evolution, no more changes!  WRONG!  In addition to simple genetic drift, studies suggest that skin and eye colors are still changing, as well as our tolerance for lactose and wheat.  In fact, human evolution may be happening faster than ever.

WHAT DOES THIS MEAN?

Biologists actually have a very precise definition of success, and it has to do with a quantifiable measure called fitness.  The upshot is that to be truly successful, an organism has to pass on its genes to future generations, and spread them as far as possible.  So in that case, I can think of at least one 4-billion-year end-product that’s not successful.  ’Cause as long as you keep chucking stupid internet memes at people, you’re never getting laid.
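For the quantitatively inclined, “fitness” is usually made concrete as relative reproductive success: each genotype’s average output scaled against the best performer in the population.  A minimal Python sketch, with invented numbers:

```python
# Hypothetical average surviving offspring per parent, by genotype.
offspring = {"A": 2.0, "B": 1.6, "C": 0.8}

w_max = max(offspring.values())
for genotype, n in offspring.items():
    w = n / w_max        # relative fitness, 1.0 for the best performer
    s = 1 - w            # selection coefficient: how hard selection bites
    print(f"genotype {genotype}: w = {w:.2f}, s = {s:.2f}")
```

The meme-flinger’s personal w, sadly, is trending toward zero.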

 

The Shrinking Ozone Hole: Could We Similarly Reverse Climate Change?

Let’s set the record straight.  The depletion of the atmospheric ozone layer and the climate change often referred to as “global warming” are separate, largely unrelated processes.  I remember a comedian maybe 10 years ago making (or should I say attempting) a joke about discharging aerosol cans during especially cold winter days to hasten the anthropogenic effect on worldwide temperature increases.  Far be it from me to step on a punchline, but there are a couple things wrong with that.  First, the loss of ozone catalyzed by the chlorine atoms from chlorofluorocarbons (CFCs), the propellants once used in spray cans, does not accelerate the Earth’s greenhouse effect; it’s the enormous amounts of carbon dioxide gas (and, less directly, methane) we introduce into the environment that pull off that trick.  In fact, man-made ozone can actually act as a greenhouse gas itself when present in the troposphere, the lowest level of our atmosphere, although its impact in that regard is much, much smaller than that of the other offenders.  Ozone depletion instead heightens our risk for skin cancer, as the upper atmospheric layer of the compound functions as something of a shield against carcinogenic ultraviolet radiation from the Sun.

Secondly, the widespread use of CFCs in consumer products ceased in 1994, thanks to the internationally instituted Montreal Protocol.  Even their less hazardous temporary replacements, hydrochlorofluorocarbons, will be phased out in much of the world by 2020 and will likely disappear entirely a decade later, to be supplanted by non-destructive hydrofluorocarbons.  Due to these burdensome but necessary measures, a study published in February shows that the Antarctic ozone hole has shrunk to its smallest size in 10 years, and some scientists estimate that stratospheric ozone levels should rebound to pre-1970 levels by the year 2050.  Now that’s a system workin’!  With such an unequivocal environmental success as precedent, could we take similar steps to mitigate that other dilemma, global climate change?  Let’s compare and contrast the histories and natures of the two phenomena to find out.

Chlorofluorocarbons are just what they sound like, compounds that contain only chlorine, fluorine and carbon.  Belgian chemist Frédéric Swarts pioneered their synthesis in the 1890s, and some varieties were used as fire suppressants during World War II.  The compounds found greater commercial use in refrigerators and aerosol cans in subsequent decades.  University of California, Irvine scientists Frank Sherwood Rowland and Mario J. Molina first suggested the devastating effects of atmospheric chlorine on ozone in 1974, after James Lovelock had discovered that nearly all the largely unreactive CFCs ever released into the air were still present.  Unsurprisingly DuPont, a heavy producer of the chemicals and creator of the brand name Freon, called their work “utter nonsense.”  Rowland and Molina were proved right by laboratory experiments and the direct observation by James G. Anderson of chlorine monoxide, a product of the reaction, in the atmosphere.  For their research on the issue, the pair won the Nobel Prize in Chemistry in 1995.
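The chemistry at the heart of their warning is a catalytic loop; the chlorine atom comes out the other end intact, free to destroy ozone again and again (which is why a single CFC molecule can wreck thousands of ozone molecules):

    Cl + O₃ → ClO + O₂
    ClO + O → Cl + O₂
    ------------------
    net:  O₃ + O → 2 O₂

That chlorine monoxide (ClO) in the middle step is exactly what Anderson spotted.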

The United States, Canada and Norway banned the use of CFCs in aerosol cans in 1978, but not much other action was taken until the “holy shit” moment seven years later.  That was when the British Antarctic Survey team announced massive springtime losses of ozone over the South Pole, “holes” that in later years would reduce the stratospheric presence of the compound by up to 70%.  The occurrence could have been detected much earlier, but NASA scientists hadn’t noticed the numbers, as values that extreme were automatically eliminated from their data, deemed “impossible.”  That’s how bad things had gotten in a short period of time.  The international community immediately took notice, and the Montreal Protocol was opened for signature just two years later.  Atmospheric CFC concentrations continued to rise until the year 2000, due to the continued use of previously produced, non-compliant refrigerators and air conditioners.  But as those units have failed over time, we’ve arrived at the much rosier picture of our future we now see, one painted by urgent action following smart science.

[Images: a spray can and a smokestack]

We fixed one, now let’s work on the other.  Images from howstuffworks.com and Scientific American, respectively.

The furor surrounding global climate change has been different in many respects.  One of the main disparities is the speed at which the problems became grave.  We’ve been pumping large amounts of carbon dioxide into the atmosphere since the Industrial Revolution began in the late 1700s, with only a gradual change in temperature.  By contrast, it was just a decade after the mass production of CFC-containing products that the chlorine they shed was shown to pollute the atmosphere, and the titanic damage to the ozone layer was recognized.  That made it easier to pin the effect to the cause.  Unusual warming was first noticed in the United States and North Atlantic in the 1930s, but it wasn’t until 1960 that carbon dioxide levels were proved to be rising.  But climate is a much more complex monster, and there is more than one factor at play.

Other factors affect global temperature, and we know that climate has changed over geologic history without our influence.  Volcanic eruptions, variations in solar radiation and the Milankovitch Cycles all play roles, but that really just goes to show how delicate the whole system is and how easily it’s nudged.  We now know that solar activity has actually decreased over recent years, and ancient ice cores have shown us that the correlation between temperature and the atmospheric concentration of carbon dioxide is very strong.  Better and better computer models of the chaotic system have been implemented as technology advances, and now a consensus of 97-98% of climate scientists agrees that global climate change is real, and produced by human beings.
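If you want a feel for the carbon dioxide lever itself, here’s a back-of-the-envelope Python sketch using the standard simplified forcing expression (ΔF = 5.35 × ln(C/C₀) watts per square meter, from Myhre et al., 1998, with the usual 280 ppm pre-industrial baseline assumed):

```python
import math

def co2_forcing_w_m2(c_ppm, c0_ppm=280.0):
    """Simplified radiative forcing from CO2 (Myhre et al., 1998)."""
    return 5.35 * math.log(c_ppm / c0_ppm)

for c in (280, 400, 560):
    print(f"{c:>3} ppm: {co2_forcing_w_m2(c):+.2f} W/m^2")
```

Doubling CO₂ adds roughly 3.7 watts of heating to every square meter of the planet, around the clock.  The hard part the climate models wrestle with is how the oceans, ice and clouds respond to that push.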

WHAT DOES THIS MEAN?

It took us longer to prove human-induced change to the atmosphere in the case of global warming, and now that we’ve gotten there, the people who stand to lose out if measures are taken to halt it still deny it, just like DuPont did into 1987.  But even if the oil companies and others were on board, what could be done?  The worldwide economy is much more dependent on the burning of fossil fuels, the largest contributor to the problem, than it ever was on fridges and spray cans.  And back then a less harmful substitute was easily found, whereas the research into alternative energy sources is splintered and slow-moving.  Could we use some sort of agreement that forces us to drastically reduce our carbon dioxide emissions, akin to the Montreal Protocol?  I mean something actually useful, unlike Kyoto?  Would it even help?  Some predictions show that even if all carbon emissions stopped today, we couldn’t naturally return to a pre-industrialization climate for a thousand years.

That all seems like a major bummer, but the ozone depletion parable has shown us that as good as we are at wrecking our future, we can also find ways to solve the problems we’ve created.  It’s no longer in dispute that puny humans can effect serious changes, and now we also know that taking stringent and serious action can swing the pendulum back.  It’ll take brilliantly creative technologies, but it’s not our ingenuity that’s questionable.  It’s our commitment.  We’ve come together to fix our screw-ups before.  We’ll have to do it again.  Do we need a “holy shit” moment to get our brains and asses into gear?  Don’t brutal heat waves and superstorms fit that bill?

Much of the history of climate change research described here was taken from the wonderful American Institute of Physics resource at this location.

CSI Milky Way: Cosmic Ray Shooters Finally Identified

We are under constant attack and we cannot escape our assailant.  No matter where we hide, each of us is riddled with 30 hyper-speed bullets every second.  You can’t run, either.  We’d probably be dead by the time we got to the neighbor’s.  Don’t bother trying to call for help; the perpetrator also targets our communication satellites.  We’ve known of this unprovoked assault for ages, but only recently have we conclusively proved just who’s out to get us.

The electricity and ionization in the atmosphere were the first clue that something was up.  There was something very energetic disrupting our atoms.  Henri Becquerel’s 1896 discovery of radioactivity seemed to make this an open-and-shut case, as decay of heavy isotopes within the Earth took the blame.  International intrigue cast doubt on the culprit’s identity in 1909, when German physicist Theodor Wulf took an electrometer to the top of the Eiffel Tower and found the levels of radiation there were actually greater than at the ground surface.  No one on the beat believed him.  The case grew cold.

Victor Hess, though, was unsatisfied.  He wanted to go higher.  In 1912 Hess used a hot air balloon to take three enhanced versions of Wulf’s electrometers to a height of 17,000 feet, where they measured an ionization rate four times what you’d expect at sea level.  Wulf had been right.  The barrage was coming from beyond, not from within.  The Sun became the next suspect, but Hess was able to rule it out by performing the same experiment during a near-total solar eclipse, with the moon blocking much of the Sun’s radiation.  The tricky detective work earned him the Nobel Prize in 1936, but it ultimately left us with more questions than answers.  What was causing the ionization, and who was behind it?

Robert Millikan had already worked his way up the ranks when he picked up the assignment in the 1920s.  Coining the term “cosmic rays,” Millikan believed gamma rays were the offender’s ammunition of choice.  But the ballistics didn’t check out.  In 1927 J. Clay dared to question the respected veteran by pointing out that cosmic rays were more intense at higher latitudes, swept there by the Earth’s magnetic field.  They couldn’t be light waves like gamma radiation; they had to be charged particles.  Experiments in the ’30s, spurred by Bruno Rossi’s insights, showed that cosmic ray intensity was also greater from the west, indicating the particles were positively charged.  The weapon had been found.  High-speed protons.  But what could accelerate the tiny projectiles to such velocities, as high as 95% of the speed of light?

Image from (where else?) cosmicray.com

Fingering the perp continued to prove difficult.  The line-up over the decades included magnetic variable stars, nebulae, active galactic nuclei and more, but no single scofflaw could be picked out.  Supernovae became the prime suspects, as the expanding envelopes from their explosions could possibly provide the power needed to boost the protons’ speed to deadly levels.  Charged particles get deflected by magnetic fields as they rocket through space, however, making the locations of the shooters hard to pin down.  The forensics were handcuffed.  Technology had to advance to uncover the well-hidden tracks.
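A rough Python calculation shows just how scrambled the trail gets (the field strength and energy below are typical round values I’ve assumed, not figures from the investigation):

```python
# Gyroradius of a cosmic-ray proton spiraling in the galactic magnetic field.
E_eV = 1e15          # proton energy near the "knee" of the cosmic-ray spectrum
B_tesla = 3e-10      # ~3 microgauss, a typical galactic field strength
q = 1.602e-19        # proton charge, coulombs
c = 3.0e8            # speed of light, m/s
PARSEC_M = 3.086e16  # meters in a parsec

p = E_eV * q / c        # relativistic momentum, using E ~ pc
r = p / (q * B_tesla)   # Larmor radius, r = p / (qB)
print(f"gyroradius ~ {r / PARSEC_M:.2f} parsecs")  # ~0.36 parsecs
```

A third of a parsec of corkscrew on a journey of thousands of parsecs: by the time the bullet arrives, it’s pointing nowhere near the gun.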

Beginning in 2008, Stanford University’s Stefan Funk and his team set up a four-year stakeout with the Large Area Telescope of NASA’s Fermi Gamma-ray Space Telescope.  They focused their attention on two supernova remnants in the Milky Way.  As it turned out, gamma rays were the key after all.  While the trajectories of the proton bullets themselves may be too tricky to track, their gamma-ray by-products zip right through, unaltered.  In February of 2013 the group announced that the observed energies matched the predictions.  They had a positive ID.

WHAT DOES THIS MEAN?

Pop the corks and call the D.A.  After a century of hard-nosed investigation, we’ve got our man.  It was a circuitous route to the truth that in a way brought us back to where we started.  It goes to show that when you start tugging on a tangled thread, you never know where it’ll lead.  Gut instincts and gumshoe hunches only get you so far if you don’t have the observations and technology to back them up, though.  Even seemingly intractable cases can be solved, given enough time and technical progress.

I do hope the judge goes easy on the sentence, though.  Despite their cosmic ray malfeasance, supernovae have a history of community service.  It’s thought that many elements heavier than iron can only be produced in stellar explosions, meaning many of our most precious metals wouldn’t be here without them.  Some even speculate that a supernova may have triggered the collapse of the dust cloud that formed the entire solar system to begin with.  What a crazy, mixed-up universe we live in, where a progenitor can turn on its own creation.  I’m getting too old for this shit.

Older and Wiser: How the Age of the Universe Has Been Expanded and Refined Over Time

Isaac Newton was arguably the most brilliant scientist in history.  He was certainly unrivaled in his lifetime, during which he invented the reflecting telescope, developed his famous law of universal gravitation (daring to unite the heavens and the Earth as governed by the same processes) and pioneered the use of mathematical equations in the scientific enterprise, constructing the realm of calculus in the process.  He also used the power of mathematics, and the Bible, to calculate that the Earth was created sometime around 4000 BC.  Well, even the best can trip up.

This figure also assumed that the world had existed more or less as it is since that beginning, and that the great variations in landscape we see had been caused by catastrophic events such as floods and powerful earthquakes.  James Hutton proposed in 1795 that instead our great canyons and mountain ranges had been formed by the same processes of erosion and weathering we observe today, necessarily taking place over much vaster lengths of time.  Catastrophism thus gave way to gradualism, an idea further cemented by Charles Lyell, one of the pillars of geology.

So the Earth was much older than we thought, but how much older?  In 1897, Lord Kelvin assumed that our world began in a molten state and calculated it would need 20-40 million years to cool to its present temperature.  A helluva lot older than 6,000 years, but still not close by a long shot.  Kelvin didn’t know about radioactive decay, which contributes enormous amounts of heat that “fooled” him into thinking the Earth was younger than it is.  Thankfully we now understand the process well, so much so that we’ve used radiometric dating to finally get a good handle on the planet’s age, a whopping 4.5 billion years, a figure we didn’t arrive at until the 1950s.
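The principle fits in a few lines of Python.  This is a cartoon of the method (the real age-of-the-Earth work used lead isotopes in meteorites, and the ratio below is invented): count how much daughter product has piled up next to the surviving parent isotope, and the decay clock hands you the elapsed time.

```python
import math

def radiometric_age_gyr(daughter_per_parent, half_life_gyr):
    """Age from an accumulated daughter/parent ratio: t = ln(1 + D/P) / lambda."""
    decay_constant = math.log(2) / half_life_gyr
    return math.log(1 + daughter_per_parent) / decay_constant

# Toy example with U-238 -> Pb-206, half-life ~4.47 billion years.
# A daughter/parent ratio of 1 means exactly one half-life has elapsed.
print(f"{radiometric_age_gyr(1.0, 4.47):.2f} billion years")
```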

Okay, okay, so the Earth has changed drastically over time, but the universe – now THAT’s eternal and unchanging!  Right?  I mean, that’s what Einstein thought.  He wanted the universe to remain static so badly that when his own theory of relativity showed that it must be expanding, he threw in a MacGuffin called the “cosmological constant” to fudge the numbers!  And if there’s one thing we’ve learned, it’s that the most brilliant scientist of his time can’t be wrong!  Wait…

Edwin Hubble refined the first numerical estimates for the universe’s expansion made by Georges Lemaître, and by combining that with the known distances to certain astronomical features, he arrived in 1929 at a cosmic lifespan of 2 billion years, a figure he himself called “suspiciously short,” as many stars seemed older than that and the geologists had already brought the age of the Earth to at least 3 billion years.  Better distance measurements to Cepheid variable stars and quasars in subsequent decades continued to raise the age of the universe, from 6, to 10, to 12 billion years.
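You can reproduce Hubble’s “suspiciously short” figure yourself.  To zeroth order the age of an expanding universe is just 1/H₀, the Hubble time; this crude Python sketch ignores the deceleration and acceleration that the real cosmological fits account for:

```python
KM_PER_MPC = 3.086e19        # kilometers in a megaparsec
SECONDS_PER_GYR = 3.156e16   # seconds in a billion years

def hubble_time_gyr(h0_km_s_per_mpc):
    """Rough cosmic age as 1/H0, assuming constant expansion."""
    return (KM_PER_MPC / h0_km_s_per_mpc) / SECONDS_PER_GYR

print(f"Hubble's ~500 km/s/Mpc: {hubble_time_gyr(500):.1f} billion years")
print(f"Modern    ~70 km/s/Mpc: {hubble_time_gyr(70):.1f} billion years")
```

Drop the expansion rate from Hubble’s ~500 to the modern ~70 km/s/Mpc and the embarrassing 2 billion years balloons to a far more comfortable ~14 billion.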

In 2008, the Wilkinson Microwave Anisotropy Probe (WMAP) revealed the most precise appraisal yet, further aging the universe to 13.7 billion years.  And now the European Space Agency’s Planck spacecraft has used the same primordial quantum fluctuations to kick it up a tiny bit more, for a final figure of 13.82 billion years.

[Image: artist’s concept of the Planck spacecraft]

Thanks in part to the Planck spacecraft, the universe is billions of years older than it was a century ago

WHAT DOES THIS MEAN?

The continuing changes to a single piece of information may seem damning at first blush, but it’s really a monument to how beautifully self-correcting science is.  No ideas are sacred, no matter who came up with them, and all positions are open to revision when new or better evidence is presented.  If the argument of an iconoclast is sound, it cannot be ignored.  Not all institutions are so democratic or flexible.

But then again, the wiggly, fiddly findings do also reinforce that science can only offer approximations of the way things are.  No one can provide true certainty, but through the advancement of ideas and technology, our approximations of reality become ever better and the picture of our vast, ancient universe becomes clearer.

March 2013 Revisit: UFOs Regrow Teeth Using Facebook… Or Something

Three cool updates to close out March.  The first actually calls back to a February post, wherein WDTM? speculated that the stunning footage of the Russian Chelyabinsk meteor, shot from multiple sources, angles and locations, should make UFO enthusiasts queasy, as the modern ubiquity of worldwide camera technology has somehow not yet provided similar spectacular evidence of something they claim to be continually happening.  We found out in the February revisit, instead, that the Russian populace had gone gaga with woo-woo over the incident.  Oops.

Of course in science, one incident is not indicative of a trend, as shown in the newest issue of Intelligent Life magazine, from the publishers of The Economist.  The article, boasting the unfortunately confrontational title “Twilight of the Gullible,” highlights the morose musings from a November 2012 meeting of the British-based Association for the Scientific Study of Anomalous Phenomena (ASSAP), at which science writer Ian Ridpath explained that UFO sightings were indeed following a drastic downward trend, despite the fact that nearly everyone in the western world now carries a video camera in their pocket.  In fact, cases reported to the Association have dropped a staggering 96% since 1988.

Where are the real versions of this photo?  Image from travelandleisure.com

When commenting to The Daily Telegraph prior to the conference, Sheffield Hallam University professor David Clarke echoed what we noted in the February revisit, that “[t]he reason why nothing is going on is because of the internet. If something happens now, the internet is there to help people get to the bottom of it and find an explanation.”  ASSAP’s chairman, Dave Wood, further explained that, “When you go to UFO conferences it is mainly people going over these old cases, rather than bringing new ones to the fore.”  Sound familiar, Bigfoot enthusiasts?  The same lack of progress that pegs a pseudoscience.

Coming back down to Earth, the future of human body part regeneration seemed comparably dreary, unless you count the flicker of hope provided by African spiny mice.  Well, the mice make nice again, this time with teeth!  In a March issue of the Journal of Dental Research, Paul Sharpe’s King’s College London team described their work in combining human gum cells with those from the molars of fetal mice to grow new teeth, roots and all.  Of course, the teeth are human/mouse hybrids and such procedures are far, far removed from clinical use, but hey!  It’s a start!

Finally, we tried to prove that Facebook is good for your brain, or at least that the WDTM? page is.  The research of Jeff Hancock and Catalina Toma, published in the March issue of the Personality and Social Psychology Bulletin, seems to support that assertion.  The authors tested how experiment participants reacted to negative feedback and found that those who soon after checked their Facebook profiles became less defensive.  This is contrary, however, to other studies published about a year ago that argue reading other people’s positive status updates can make a person feel worse about themselves.

WHAT DOES THIS MEAN?

The truth may be out there, but UFO believers might not want to hear it.  You shouldn’t dump your dentures in the hopes of getting rodent replacements anytime soon.  And, as always, Facebook is a tool whose benefits and detriments will depend on how you interact with it.

April starts off with a look at just how the hell astrophysicists can figure out the age of the universe, to be followed later in the month by a discussion on whether cloning can bring long-lost animals back from extinction.  Join the conversation in the comments section or on Facebook!  I swear it’s okay!

Science vs. Pseudoscience: Bigfoot Teaches Us the Difference

After a brutal family engagement Saturday evening, I decided to plop down on the couch, pop a couple cold ones, and find some mind-numbing entertainment.  Bigfoot shows never disappoint.  Destination America, a Discovery Channel station, brought me a delicious delight named “Southern Fried Bigfoot,” an independently produced documentary on the Sasquatches seen south of the Mason-Dixon line, such as Florida’s Skunk Ape, Louisiana’s Honey Island Swamp Monster, and the Boggy Creek Creature of Arkansas, the last of which inspires stories so silly as to be lampooned on Mystery Science Theater 3000.  You can’t expect much scientific rigor on these programs, and this tasty treat was no exception, as the evidence presented encompassed recalled anecdotes recorded (for some reason) with night vision cameras, and the irrefutable proof of the smell of a wet deer (in the woods!  Impossible!).

That’s good fun, but what really twisted my yambag came near the show’s conclusion, during the required “why we still believe” segment that always seems to bookend these things.  The quote may not be exact, as my mind had been partially muddled by a 9% Sierra Nevada stout at this point, but the music swelled and one guy said something to the effect of, “No one can prove that it doesn’t exist, so that gives me a leg up in believing that it does.”  Sorry, hoss, but that’s not how it works.  You’re right; it’s virtually impossible to prove something’s nonexistence, which is why the burden of proof is always placed on the person making the claim.  In science, one typically starts with the null hypothesis, the idea that nothing strange or different is going on, a stance that can only be rejected when sufficient evidence to the contrary is obtained.

The so-called best evidence for the beast’s existence has been refuted innumerable times, perhaps no more succinctly than in Daniel Loxton’s two-part summation in the pages of Junior Skeptic, of all places, so I won’t rehash it here.  It’s the lack of answers for certain questions that exposes the endeavor as a field that is simply not concerned with determining the actual truth.  Why is there no fossil evidence of apes in North America, and why are Bigfoot carcasses never found?  What about scat?  Are mommy and daddy Bigfeet curbing their kids?  Considering that, as super skeptic Ben Radford has pointed out, there must be tens of thousands of individuals to provide a sufficient breeding population, why are they not seen more often?  Why has a rabid Bigfoot, not in control of its faculties, never broken the treeline and wandered into a neighborhood?  In a country where wolves were nearly wiped out due to their impact on livestock, why has a starving Sasquatch never been caught nabbing a farm animal?

“He knows that’s our food.”  Hand to God, that was the answer of Matt Moneymaker, head of the Bigfoot Field Researchers Organization (BFRO) and star of Animal Planet’s ratings juggernaut Finding Bigfoot, when asked that last question by the program’s token pseudo-skeptic, Ranae Holland.  Even she rolled her eyes at that one.  The group’s official website further betrays them, as it asserts the BFRO to be “the only scientific research organization exploring the bigfoot/sasquatch mystery,” while claiming in the “About” section, “It has always been the policy of the BFRO to study the species in ways that will not physically harm them.”  You can’t presuppose the existence of something unverified and call yourself “scientific.”  You can’t dismiss the null hypothesis with way-out, illogical answers and substandard evidence like a few eyewitness reports and potentially misshapen or fabricated footprints.  There’s an old aphorism in science about doing everything you can to sink your own ship, so that you know it’s sturdy.  Bigfooters prefer to ignore the gaping hole in the hull and play on like the orchestra of the Titanic.

WHAT DOES THIS MEAN?

Despite the purported desires of the people involved and the use of technical sounding jargon and fancy instruments to lend a feigned air of sophistication, Bigfoot “research” is simply not science.  “But what about the Melba Ketchum DNA study released in February?  That was published in a scientific journal!”  Yeah, a journal CREATED by the author because no one else would accept it!  (Sharon Hill at Doubtful News has been all over this one)

Even Harry knows it don’t add up.  Awesome image from drawception.com

A “pseudoscience” is defined as a claim, belief or practice which is presented as scientific, but does not adhere to a valid scientific method.  You can spot a pseudoscience by its lack of openness to testing by other experts (as with the Ketchum paper), an absence of progress (still no body?) and, as seen in the examples here, an over-reliance on confirmation rather than refutation.  “He knows that’s our food” and other similar, bonkers assertions show that folks who follow the ‘Foot are not looking to find out *IF* it exists, but are out to prove *THAT* it exists.  Real science fits the theory to the evidence, not the other way around.

Keep these things in mind the next time “UFOs Abducted My Grandma” or “The Bermuda Triangle Causes Global Warming” comes on the tube late at night.  With a strong nightcap, those shows can be entertaining, but the trappings surrounding them are anything but scientific.

WDTM? Comes to Facebook. Helping or Hurting?

The official What Does This Mean? Facebook page went up yesterday.  What does that mean?  Well, it means that all these blog entries will be posted there for easier access, along with other interesting science-y tidbits.  But really… is it part of the solution or part of the problem?  Is there a problem?  What effects do the modern super social connectivity and the Internet’s ability to access nearly all knowledge with a few keystrokes have on our minds and brains?  Try to avoid reading in an F-pattern and we’ll see.

One of the Internet’s first functions (besides distributing free porno) was to share information between UCLA, the Stanford Research Institute, UC Santa Barbara and the University of Utah.  There’s a lot more out there now than just the goings-on at four different institutions, so much so that people don’t even feel the need to remember anymore.  A 2011 study led by Columbia University psychologist Betsy Sparrow showed that experiment participants were less likely to recall presented statements if they were told the information would be accessible to them later on, and that they were more likely to remember where to find the information rather than the content itself.  As we begin to treat the web as our own external, mental hard drive, 84% of us can’t bear to be without our smartphones for even a day.

Okay, we’re in the middle of the “F.”  Still with me?  Good, because you should know it’s not just the trivia questions that keep us clicking.  It’s chemical.  Dopamine, the neurotransmitter in our brains that causes us to seek out satiation, whether that be from food, sex or text, kicks into overdrive with the instant gratification of every e-mail and info nugget we receive, spurring us to want more and more as each desire is rewarded.  Why is it never enough?  Stasis is the bane of evolution and survival.  A happy organism is a complacent one that doesn’t strive or fill new ecological niches.  The chase really is better than the catch, and now it can be repeated almost endlessly.

And don’t fool yourself into thinking you can finger your phone and get your work done at the same time.  Research has shown that performing two or more tasks simultaneously or switching back and forth from one thing to another can reduce productivity by up to 40%, and that it may make distractions harder to tune out, leading to mental blocks.  In fact, the people who identify themselves as heavy multitaskers actually perform worse at certain mental plasticity tests, according to a University of Utah study.  The dopamine connection is evident here again, as those same people tend to be the most impulsive and sensation-seeking.

WHAT DOES THIS MEAN?

Best throw out your phone and nuke your Facebook.  *Wait, NO! *  Um, there are tangible advantages from social networking… yeah.  It appears that the benefits derived from traditional social groups, such as the propagation of desirable behavior and the feeling of belonging, are still present in online networking.  And it makes your brain bigger!  Well, Robin Dunbar, the anthropologist known for his assertion that a person can never really “know” more than about 150 people (so start trimming those friend lists), discovered with his colleagues that the size of a person’s social network is directly related to the volume of the orbital prefrontal cortex, a part of the brain involved with decision-making.  It is currently unknown whether accepting those requests actually jacks up your gray matter or if folks so gifted are just naturally gregarious and friendly.

Orwellian image via pcmag.com

Hey, bottom of the “F!”  Now that I’ve got your attention back, it’s probably trivial to point out that the Internet and social networks are simply tools that can be used beneficially or can cause harm, just like anything else.  Understanding how we think and why fancy gadgets sometimes lead us down an unproductive path can help push the needle further toward the former.

 

The Morning After St. Patrick’s Day: Am I Gonna Die or Am I Stronger than Ever?


Holy shit, what time is it, man?  St. Paddy’s was awesome, but I’ve gotta get to work.  And I’m still wasted!  FUCK!  Quick, get me some of those enzyme nanocomplex pills developed by UCLA researchers that have been shown to lower blood alcohol content and reduce liver damage.  What?  It’s only ever been tested on mice and isn’t yet available for human use?  SHIT!  Well, at least Mickey D’s should help with the hangover.  Stop laughing!  The cysteine in the egg of my “McMuffin” breaks down acetaldehyde, the toxic byproduct of alcohol metabolism that can cause headaches and vomiting, and the fructose in the OJ will help replenish the sugars I pissed out, mitigating fatigue and loss of coordination.  And if I can score some fries, I’ll get back some sodium and potassium, electrolytes needed for nerve and muscle function.

I feel like hell, but at least I can rest assured that alcohol doesn’t actually kill brain cells.  In fact, there may be surprising health benefits from alcohol consumption!  Small amounts of ethanol itself extended the lives of nematodes from 15 days to 40 days in a recent UCLA study.  Yeah, I guess it must be an awesome party school.  Alcohol can also reduce your risk of developing heart disease by up to 25%!  And all the soluble fiber in that Guinness we drank will help lower our LDL cholesterol.  The hops can slow the release of bone calcium, limiting kidney stones.

Maybe we should switch to wine, though.  The resveratrol from the grapes may further extend lifespan.  There are also hundreds of reports that it may protect against cancer, dementia and Alzheimer’s disease.  Even hearing loss!  And wine’s purported ability to stave off colon cancer is a nice boost after the fast food, amirite?!

WHAT DOES THIS MEAN?

Shit, we should start drinking more.  No?  Whaddya mean, the hazards outweigh the benefits if you have more than two drinks per day?  And that while alcohol doesn’t kill brain cells, binge drinking may decrease the production of new neurons in the hippocampus by up to 40%?  Wait, almost no human, clinical trials of resveratrol have been conducted?  If the implications of the animal studies hold for people, we’d have to drink upwards of a thousand bottles of wine a day to receive those benefits?  You’re a fucking buzzkill, ya know that?  I’m drinking alone next weekend.

I know that bananas would be better than the french fries!  JESUS!

Mighty Marvels of Regeneration: Could We Heal Like Comic Book Characters?

The best science fiction and fantasy is rooted in reality.  Arthur C. Clarke’s classic 2001:  A Space Odyssey showed what could happen if our current computers developed human-like intelligence and emotion.  Michael Crichton’s Jurassic Park captured the imaginations of kids and the young-at-heart with a seemingly plausible parable on how to resurrect long-extinct dinosaurs (look for a post on this possibility in the future).  And as in Marvel’s Fantastic Four comics, the universe’s planets are often threatened with consumption by a peckish giant in a purple skirt.

Okay, maybe scratch that last one, but what about some of the superhuman feats of our more grounded champions?  Comic characters get beat up a lot, yet they always seem to come back for more merely a month later.  While a healthy suspension of disbelief is required to account for most of that, some of our favorites have built-in mechanisms to explain their near-miraculous recoveries.  How realistic are these regenerative capabilities?  Might they even translate to human applications?

The world famous Wolverine, star of the breakthrough X-Men movies as well as his own solo venture (with a sequel on the way), is the king of stitching himself back together.  His mutant healing factor rapidly regrows enormous amounts of tissue, at one point in his serial even regenerating an entire body around his metallic skeleton after the explosive villain Nitro seared away his flesh.  After his amazing ability was supercharged, he was even able to bring himself back from a single drop of blood!

Cover image of Wolverine (Volume 3) #48

Such stunts will never be within our grasp, but the concept itself is not unheard of.  Planarians, commonly called “flatworms,” are simple critters less than an inch long that typically live in ponds and rivers.  They themselves are famous for coming back from extreme situations, spawning multiple complete organisms when cut into pieces.  In one mind-blowing study, a planarian was irradiated so that none of its cells could reproduce and it would slowly die.  A single, solitary c-Neoblast (an undifferentiated cell akin to a stem cell) was transplanted from a donor into the victim’s tail, and subsequently grew all the former tissues back to create a new, functional animal!  Too bad people are not planarians.

Sure, coming back from a single cell or a little bit of blood is literally incredible.  How about something simpler?  Like, I don’t know, decapitation?  Wolverine’s final foe in the X-Men Origins: Wolverine film was the regeneratin’ degenerate known as Deadpool, a product of the same super secret government program and perhaps the only dude bad enough to rival our hero in the healing department.  The creep’s head was shown to still be conscious after being removed from his body, a fate the comic book counterpart has suffered on numerous occasions, proving it to be little more than a minor inconvenience.


The many-headed, mythological Hydra would regrow two heads for every one lopped off, and the tiny creature for which it’s named is not far behind.  Composed of a basal disc used to adhere, a tubular body and a mouth opening surrounded by thin tentacles, the bitty beast will actually regrow its “head” when lost, thanks to constant mitosis (cell reproduction) in the body.  If a hydra is chopped up in a blender (who came up with that experiment?), a centrifuge can be used to reaggregate it and bring it back to life, much like Deadpool returned from being smashed to bits by the sinister Iceman in Uncanny X-Force #16.  I wouldn’t try this one at home.

All right, all right, no one’s expecting that we’ll ever be able to regrow a head or our entire musculature, but something like limb regeneration seems just feasible enough.  So much so that Dr. Curt Connors, an ordinary scientist, tried to restore his departed right arm with a serum inspired by reptilian recuperative tactics.  The treatment succeeded, but side effects included skin irritation, spontaneous tail appearance and a beatdown from the Amazing Spider-Man.

[Image: The Amazing Spider-Man movie poster featuring the Lizard]

Similar to a scene in the 2012 cinematic adaptation, there are lizards called skinks whose tails snap off when grabbed by predators, allowing the animal to escape.  Amphibians are better known for regrowing limbs, but the potential for human use recently took a less optimistic turn.  It has long been thought that such recuperation was a skill developed early in evolution, and that the ability had been “switched off” in mammals and birds.  New studies of the red-spotted newt, however, show that many of the newt’s RNA transcripts that code for proteins used in the process are unique to the organism, i.e. not found in other creatures like us.  That innate ability may just not be there for people, and no magic potion is likely to instill it.

WHAT DOES THIS MEAN?

You should take any story of human regeneration with a grain of salt.  In 2008, hobby store owner Lee Spievack claimed to regrow a lost fingertip by applying a powder derived from pig bladders, an assertion University of Leeds professor Simon Kay called “junk science,” adding that, “If you could regenerate body parts like this, your first port of call would be a serious science journal like Nature because it would be a Nobel prize winning revolution.”  A similar if not as spectacular story was reported by Californian Deepa Kulkarni, but closer examination suggests it was the proper dressing of the wound to prevent the growth of scar tissue that restored the finger’s appearance, and not a sprinkling of “pixie dust.”

But perhaps the cause is not completely lost.  African spiny mice were found in 2012 to have brittle skin that tears off when attacked; skin they can regenerate complete with hair follicles and sweat glands.  The regrowth begins from a clump of cells comparable to the blastemas employed by salamanders.  People aren’t mice, either, but at least mice are closer cousins than a minuscule, glorified gut tube.  The precedent is now there in mammal physiology, so that one day we may learn how to become superhuman.  Ya know, like a lizard.

Killer Sinkholes: Could It Happen to You?

The geologic term “karst” refers to a characteristic type of topography in regions underlain by carbonate (typically limestone) rocks, which chemically weather to produce hilly features or depressions at the surface and sometimes spectacular caves beneath, such as Carlsbad Caverns in New Mexico and the Mammoth Cave system in Kentucky.  Beauty can turn to tragedy, however, if those chambers collapse to form sinkholes, as in the sad and surprising story of Jeffrey Bush, whose Florida home was partially swallowed by such a structure while he was sleeping on the evening of February 28th.  Mr. Bush unfortunately did not survive the experience, which may lead some to wonder, could it happen to me?  How dangerous is this phenomenon?  Well, it depends on where you live.

[Image: map of karst areas in the United States]

The above map, developed by the United States Geological Survey (USGS), shows a generalized distribution of karst features within the United States.  Why is Florida so riddled with cavities?  The state’s land mass was underwater for much of its history, so most of its bedrock is composed of limestone (chemical formula CaCO₃), formed by the deposition and lithification of corals and the hard parts of other marine organisms.  Those limestone formations, hundreds or even thousands of feet thick, were then covered with much thinner layers of sand and clay thanks to the erosion and transportation of material from the Appalachian Mountains.

Rainwater can pick up carbon dioxide (CO₂) as it passes through the atmosphere and then the soil, forming a weak carbonic acid (H₂CO₃) that slowly dissolves the limestone bedrock.  The process is exacerbated by man-made acid rain, which adds stronger sulfuric and nitric acids to the mix.  Strong acids and calcium carbonate react so readily that introductory geology students use dilute hydrochloric acid (HCl) to help identify limestone and related rocks by the telltale fizz.
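The slow-motion demolition, in textbook form:

    CO₂ + H₂O → H₂CO₃                  (rainwater turns weakly acidic)
    CaCO₃ + H₂CO₃ → Ca²⁺ + 2 HCO₃⁻     (limestone dissolves and washes away)

Run that reaction for a few thousand years and you’ve hollowed out a cave; let the roof thin enough and you’ve got a sinkhole.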

Of course the diagnostic solutions are more highly concentrated than what falls from the sky, but the fizz can help you imagine what goes on beneath your feet.  And the problem travels: the effects of pollution from the industrialized Midwest are felt on the east coast, carried along by the prevailing winds.

Sinkholes are thus more common after rain events, but they can be produced by droughts as well, as in the Bush case.  Florida State Geologist Jonathan Arthur notes that dry conditions can cause overlying soil to collapse.  Human extraction of groundwater can have the same effect: 65 sinkholes appeared in Florida in 2010 after strawberry farmers pumped the aquifer hard to protect their crops from a freeze.

WHAT DOES THIS MEAN?

About 20% of the United States is underlain by karst terrain, making it susceptible to subsidence and sinkholes.  While they often appear quickly and without warning, basic geologic principles tell us that the past is a good predictor of the future.  Just as Californians know to always expect earthquakes, spots currently plagued by sinkholes will likely continue to endure the uncertainty.  Know your local geology and what potentially hazardous effects you might expect.  If you live in an area prone to sinkholes, learn to note the signs, such as muddied well water, new ponds, or slumping features.


More generally, we have to continue our staggeringly slow realization that what we do affects the world and environment around us.  Acid rain doesn’t just kill a few fish; it can destroy property and even take human lives.  Overuse of resources not only puts strain on the local ecology but could do irreparable damage to our neighbors.  When couched in more practical terms like that, maybe stewardship isn’t quite so hard to swallow.  And maybe something good can come from otherwise senseless deaths.