Monthly Archives: April 2013

April 2013 Revisit: Super Coelacanths from Outer Space… or Something

Some quick hits before the main event.

Way back in February WDTM? touched on the idea of bacterial antibiotic resistance, what it means to people, and why it’s spreading.  As it turns out, it means something to sea life too, as the resistant strains produced by our irresponsible behavior are finding their way into the oceans and sickening the animals there, as reported in the May issue of Scientific American.  The ugly cycle can complete if we eat seafood tainted with the superbugs, or when wounded oceangoers develop hard-to-treat infections.  Circle of life, I guess.

In a meeting this month of the American Physical Society, Shawn Bishop, of the Technical University of Munich, Germany, described preliminary research on new forensic techniques to confirm past offenses of supernovae trying to gun us down.  He was able to identify iron-60, an isotope with a half-life too short for any from the birth of our solar system to still be present on Earth, in fossilized ocean-dwelling bacteria that use iron-rich magnetite to navigate.  Bishop found high concentrations in the bacteria from about 2.2 million years ago, leading other researchers to suggest a known supernova from this time period in the Scorpius-Centaurus stellar association as the culprit.  Busted.  Fortunately, at over 400 light years away, the perp wasn’t near enough to give us both barrels or, as Shawn himself put it, “That we’re here talking about it means it wasn’t too close.”
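Just to see why any iron-60 on Earth has to be a recent arrival, here’s a quick back-of-envelope decay calculation (the half-life and solar system age below are ballpark figures I’m supplying, not numbers from Bishop’s talk):

```python
import math

# Assumed ballpark values (not from the talk):
HALF_LIFE_MYR = 2.6          # iron-60 half-life, roughly 2.6 million years
SOLAR_SYSTEM_AGE_MYR = 4600  # solar system age, ~4.6 billion years

half_lives_elapsed = SOLAR_SYSTEM_AGE_MYR / HALF_LIFE_MYR
log10_fraction = half_lives_elapsed * math.log10(0.5)  # log of surviving fraction

print(f"half-lives elapsed: {half_lives_elapsed:.0f}")
print(f"surviving fraction: about 10^{log10_fraction:.0f}")
```

Roughly 1,800 half-lives have passed, leaving a surviving fraction of about one part in 10^500-something.  Any primordial iron-60 is gone many times over, so finding it in 2.2-million-year-old fossils means something delivered it recently, astronomically speaking.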

If you and I aren’t the pinnacles of evolution, then what about the “living fossils” that haven’t changed for ages?  They must be doing something right.  Check that, they are still changing.  A fish called the coelacanth is perhaps the most famous example, as it was only rediscovered in the waters off South Africa in 1938, after being thought long-extinct.  As seen in April’s issue of Nature, the prehistoric beast’s genome has now been fully sequenced and it reveals that while many of its genes have been slow to change, perhaps due to a lack of selective pressure deep in the ocean where it lives, a large number of non-coding parts seem to be moving around.  Although the role of these bits in shaping physiology is not really clear, we now see that even after 4 billion years of evolutionary success, the famous fish still isn’t finished.

[Image: a coelacanth]  The flashback of the seven seas.

And now the unpleasant elephant in the room.  Climate change is real.  What can we do about it?  It’ll be a lot more difficult than fixing the ozone hole, and cooperative agreements with other nations may not be enough.  Lowering emissions and reforestation likely won’t turn the tide.  We’ll have to develop ways to recapture carbon dioxide as it’s produced.  It would be nice to be able to remove it from the atmosphere and reuse it, but we may have to settle for sequestering carbon dioxide deep underground in geologic formations.  New technologies for removing carbon dioxide from the atmosphere continue to be pioneered, but they may never be economically feasible.  Some in the past have suggested the stimulation of plankton growth to naturally help, but recent volcanic observations suggest that seeding the ocean with iron won’t get the job done.


Unfortunately, in the near term, we may have to be ready to adapt to climate change before it can be completely counteracted.  We’ll have to develop engineering solutions to deal with floods and superstorms, and agricultural innovations to continue food production.  Don’t expect our “successful evolution” to come to the rescue;  people are not birds or sea urchins.  The cost of such measures is sure to be enormous.  As we deal with the new world we’ve precipitated, perhaps the best lesson is that future consequences need to be considered for all our actions.  We don’t need another global shift while we’re cleaning up the mess we already have.

A Neophyte Visits NECSS

With an extra Monday to play with this month, I’ve decided to do two separate “Revisit” posts.  The usual  look back will happen on the 29th.  This entry focuses on recounting a real life, three dimensional event, as I attended the 5th annual Northeast Conference on Science and Skepticism (NECSS) in Manhattan on April 6th.  NECSS is co-promoted by the New England Skeptical Society, founded in 1996, and the New York City Skeptics, a group not much older than the conference itself.  The first NECSS took place in 2009 and boasted 400 attendees, a number that has only grown, if my shaky estimation skills can be trusted.  While the lectures span an entire weekend, 2013 was the second year in a row that I could only make the Saturday session, as other social engagements unfortunately overlapped.  I missed the Sunday in 2012 because… well, I had only found out about the damn thing shortly before it was to occur and I had no idea what I was getting into.  That and I’d have been in the doghouse had I completely blown my girlfriend off to dork out for two days straight.

When that day concluded last year, I was kind of shaken.  The legendary conjurer and debunker of flim-flam, James “The Amazing” Randi, had given the final talk of the afternoon, concerning his unequivocal exposé of despicable televangelist Peter Popoff, a tale recounted in Randi’s famous tome The Faith Healers.  Tears welled in his eyes as he told the stories of disabled children who, looking to be made well by Popoff’s “divine guidance,” left his revivals no healthier (but more destitute), as the money-hungry swindlers cackled over the radio connections they used to pump the crowd for seemingly impossible-to-attain information about their ailments.  I too became misty, but the sadness turned to revulsion when Randi revealed that despite his irrefutable recordings of Popoff and his crew faking his revelatory, “God-given ability,” the charlatan was experiencing an inexplicable resurgence in popularity.  Randi implored that we all band together to combat such debilitating deceit, and I was inspired to agree.  But what could I do?  I carried that question with me all year, and into the 2013 edition of NECSS.


The event’s activities had expanded since 2012 to include a series of workshops the day before the conference proper.  The first was led by Julia Galef, still a member of the New York City Skeptics board of directors in addition to her position as President of the San Francisco-based Center for Applied Rationality (CFAR), who offered a helpful presentation on her work with the Center and their goals.  She gave several suggestions on how to keep your reason about you during a discussion, such as dissociating yourself from your argument so as not to react defensively, and she further explained how CFAR teaches these skills through seminars.  When Julia finished, I wanted to ask what kind of people signed up for these sessions, but at least two others beat me to the punch, amidst many more thoughtful queries.  I guess we’re a boisterous bunch.  The answer was not unexpected: the participants were usually leaning toward those tendencies already.  While it’s nice to get us all on the same page and sharpen our own skills, I think we all left considering how to bring in the people who could most benefit from those techniques.

NECSS officially began on Saturday morning with a presentation by physicist Leonard Mlodinow related to his most recent book, Subliminal:  How Your Unconscious Mind Rules Your Behavior.  Mlodinow’s prior popular works include a joint venture with Stephen Hawking, and another with Deepak Chopra of all people, whom Leonard playfully apologized to while looking skyward when experiencing technical difficulties with his talk.  My personal highlight here was the playing of the infamous Led Zeppelin SATANIC BACKWARD SPEAK~!  I had never actually heard it, so it just sounded like gibberish until the “lyrics” that some nutbar pulled out of the aether were shown alongside the sounds.  Suddenly, I couldn’t NOT hear them!  We’re designed to find patterns, not ignore them!  Again, another helpful lesson that drives home how fallible we all are, partly due to our biology, but one still likely destined to not reach beyond the already sympathetic audience.

During the break for lunch, I perused tables set up outside the auditorium that were helmed by groups such as the Center for Inquiry and the James Randi Educational Foundation.  There seemed to be at least twice as many in 2012.  Maybe it was a smaller room then.  More psychological effects.  I spoke to the kind gentleman at the New York City Skeptics table and asked what sort of things the organization was involved in.  He told me of lectures and meet-ups of fellow skeptics (often involving alcohol).  I’m a fan of the creature myself, so that’s cool, but not exactly what I was looking for.  “Do you guys do any kind of community outreach?”  I think he was unsure of what I wanted him to say.  I’m unsure of what I expected.  Steven Novella, neurologist and prime mover of the wildly popular and hilarious Skeptics’ Guide to the Universe podcast, was also asking the poor guy if anyone had found out the Wi-Fi password yet.  More technical problems.  Not wanting to distract him from his actual work, and realizing that we had probably reached an impasse anyway, I ceased to pursue the issue.

It seemed to me the most important points were raised in a panel discussion that included Michael Shermer, Editor-in-Chief of Skeptic Magazine, Mariette DiChristina, who holds a similar position at Scientific American, and Cat Bohannan, who studies how narrative influences cognition at Columbia University.  Led by the “Science Babe,” Deborah Berebichez, the group discussed how people tend to (wrongly) be more influenced by stories than by data, and how the “other side” uses that to great effect while the good guys can sometimes overlook such a utility.  Debbie’s frustration was shared by everyone as she noted how many skeptics struggle to make ends meet while Deepak Chopra rakes in millions (Deepak took a beating this day).  Another member of the panel, Nathalie Molina Niño, suggested abandoning the war metaphors that we “take into battle” with us, to lessen the appearance of confrontation.  We’re all on the same side in wanting to figure out the truth, after all; we just approach it (not “attack” it) from different angles.  Shermer remarked that offering the story of how you yourself came to skepticism can often elicit empathy from suspicious listeners.  While some think the facts should speak for themselves and almost see such “framing” and storytelling tactics as cheating, I wonder if using the system against itself isn’t the worst idea in the world.


So that’s my story.  My 2013 NECSS experience was even more enjoyable than the previous year’s, and I got a further glimpse at just how hard the volunteers and everyone involved work to make this happen.  It’s clear from the crowd size and the technical problems, however, that we skeptically minded folks who ask for evidence of everything are still a niche group (though I guess even the most prestigious academic conferences experience computer glitches, so maybe that point’s moot).  But the crowds do indeed seem to be growing and everyone’s enthusiasm was undeniable.  I witnessed how brilliant people are doing great things, but left still contemplating just how it is we can reach the folks outside our little circle.  Are these incremental gains enough?  Is there a way to make bigger breakthroughs?  I guess those are questions everyone else has been carrying with them forever, and ones we’ll all likely continue to shoulder for years to come.

“They needed a study for that?!” Common Sense Ideas Are the Ones Most in Need of Testing

Go to the WDTM? Facebook page and take a gander at the cover photo.  There’s a quote (sort of) from Thomas Henry Huxley that states, “Science is simply common sense at its best.”  What he really said is probably closer to “science is nothing but trained and organized common sense.”  A lot of people, scientists in particular, would disagree with that.  Einstein (maybe) said, “Common sense is nothing more than a deposit of prejudices laid down by the mind before you reach eighteen.”  Albert and others might argue that common sense, defined by some as “sound judgment derived from experience rather than study,” is the very thing that science strives to correct, as anecdotes can’t compare to the predictive power of statistics.  Then again, Merriam-Webster calls common sense “sound and prudent judgment based on a simple perception of the situation or facts.”  A considered conclusion achieved through observation.  Sounds like science to me.  So who’s right?  Are the two ideas compatible or mutually exclusive?  If we already have a common sense idea about something, what’s the point in further studying it?

It’s neat that Huxley, often called “Darwin’s Bulldog” for his aggressive defense of natural selection to anyone who would listen (and more importantly, to those who didn’t want to), is mired in this divide, as the theory of evolution itself is a prime example of how common sense can shift.  Before descent was discovered, it was only too obvious that living things have always existed in their current forms.  I mean, have you ever seen a turtle turn into a gopher?  Chickens have baby chickens, right, not baby cows?  Now, even ignoring genetic data, the evidence for evolution seems as plain as the nose on your face.  Or the similarities of animal skeletal structure.  Or the fossil record.  Or antibiotic resistant bacteria.

So it’s of the utmost importance to test everything, even the ideas that seem self-evident on first blush.  Especially those.  Some of the deepest truths of the universe are so violently counter-intuitive it may seem wondrous that we ever arrived at them.  It’s clear from our everyday experience that the Earth is stationary and flat, and that the Sun revolves around us.  You can watch it rise and set, for Christ’s sake!  And the deeper our understanding gets, the less capable our hunter/gatherer neural networks are of truly fathoming reality.  We’re built to know that if a rabbit starts running from Point A to Point B, we can intercept it in-between for a tasty snack.  It doesn’t disappear from one place and reappear in another.  Like electrons do.  Our built-in common sense simply can’t equip us for cerebral situations beyond mere survival, because there’s never been an evolutionary advantage to do so.


The learned common sense passed down by our parents, or gleaned throughout our own lives, is easier to analyze.  If most such ideas seem to hold up to critical scrutiny, then what’s the point?  Looking through LiveScience’s recently published list of “The 10 Most Obvious Science Findings” might leave you wondering who would dispute that exercise is good for you or that marijuana impairs driving performance, but they’re out there.  And you know what?  We’re better for it.  Public safety issues and possible policy decisions shouldn’t rest on simple “just so” stories.  Common knowledge also once held that plenty of red meat is good for you and that smoking’s harmless.  While much of traditional wisdom may end up vindicated in the end, it’s worth it to weed out the stinkers.


I tend to fall on Huxley’s side, but I’ll even take it a step below organization and training.  At its heart, science is just a guy saying “show me.”  This pill lowers cholesterol with fewer side effects?  Show me the numbers.  Bats can use sound waves to detect prey?  Show me how.  It should go without saying that you need to see the goods before you accept anything, rather than just taking someone’s word for it.  If you wouldn’t buy a used vehicle without a Carfax, you shouldn’t buy into an idea without evidence.  That’s just common sense.

Next Thing You Know, They’ll Take My Thoughts Away: Can fMRI Read Your Mind?

Big Brother is watching you.  As cameras and information tracking become more ubiquitous in society, our definitions and expectations of privacy must necessarily adjust.  I mean, you could just drop off the grid, but then you wouldn’t get Facebook ads for that callus removal device you so desperately need.  How did they figure that out, anyway?  It may seem eerie and borderline telepathic, but the intrusive suggestions would probably stop if you could keep from Googling “ugly foot skin” every day (it’s okay; no judgment here).  But hell, then maybe they’ll just read your mind for real!  The possible application of Functional Magnetic Resonance Imaging (fMRI) to consumer research has been investigated since at least 2007, and a flurry of recent studies on the emerging method have shown its powerful and perhaps frightening ability to peer into our minds and deduce our thoughts.  Has technology breached the final Orwell firewall, or can we tune out the transcranial surveillance?

Let’s first figure out what fMRI is and what it actually does.  Magnetic resonance imaging, a technique made possible by the magnetic properties of the protons in our bodies’ hydrogen atoms, is typically used to identify abnormal tissue (such as tumors) and holds great advantage over X-rays and CT scans in both resolution and safety.  While traditional MRI can be focused on any particular part of the body, the “functional” version is trained solely on the brain, and is so named because it uses observations of blood flow therein to determine which parts are active at a given time, although that connection isn’t exactly straightforward.  Still, there are some startling, tangible test successes that are hard to argue away.

In 2008, Jack Gallant and his team from the University of California at Berkeley mapped the brains of subjects while they viewed random images from a set database.  Armed with this blueprint, computer programs were then tasked with matching the picture to the neuronal activity when the subjects were shown the photos again.  The program didn’t always pick right, but it could get at least the basic structure down and choose similar photos.
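As a toy sketch of how that matching step can work, here’s a minimal nearest-neighbor decoder: pick the library image whose learned activity pattern correlates best with the new scan.  This is a deliberate simplification of the Gallant lab’s actual model, and all the “voxel” data below is randomly generated:

```python
import random
import statistics

random.seed(0)

# Each image in the library has a "true" voxel activity pattern, learned
# from training scans; a new measurement is that pattern plus noise.
n_images, n_voxels = 10, 50
patterns = [[random.gauss(0, 1) for _ in range(n_voxels)] for _ in range(n_images)]

def correlation(a, b):
    """Pearson correlation between two equal-length sequences."""
    ma, mb = statistics.fmean(a), statistics.fmean(b)
    num = sum((x - ma) * (y - mb) for x, y in zip(a, b))
    den = (sum((x - ma) ** 2 for x in a) * sum((y - mb) ** 2 for y in b)) ** 0.5
    return num / den

def decode(measurement, patterns):
    """Pick the library image whose pattern best correlates with the scan."""
    return max(range(len(patterns)), key=lambda i: correlation(measurement, patterns[i]))

# "Scan" the subject viewing image 3 again: its true pattern plus noise.
measurement = [v + random.gauss(0, 0.5) for v in patterns[3]]
print(decode(measurement, patterns))
```

Even with noisy measurements, the correct image usually wins because its pattern still correlates far better with the scan than any unrelated pattern does, which is the same reason the real program could at least land on similar-looking photos when it missed.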

[Image: fMRI brain-image reconstructions]

Spooky, huh?

A similar study by the same group in 2011, this time with video, took it a step further.  After building a brain “dictionary” with the help of hours of clips, the computer model was told to reconstruct from scratch what the subjects saw as they were shown random, never-before-used YouTube videos.  Check out the results for yourself and see how well they did.  If that’s not invasive enough for you, an experiment from late last year even applies the same kind of technique to reading people’s dreams.

If advertisers try to utilize fMRI, can the long arm of the law be far behind?  In a paper published this month, Anthony Wagner and his team describe how they used digital cameras to take 45,000 pictures of a person’s life over a several-week period.  When participants were shown their own photos and those of the others while under the watchful eye of the fMRI, researchers were able to distinguish whether the person remembered the image 91% of the time, effectively confirming what they had done in the past.  Is there hope such a procedure could be used in criminal cases to place someone at the scene of a crime, or to show a suspect has knowledge only the particular perpetrator could have?


Before you break out the tinfoil hat and line your living room with lead, look at what all those “mind-reading” examples have in common.  The people who had their thoughts pinched had to first submit to having their brains observed during a certain activity so that the computer programs would know what to look for in the future.  You can’t zap the memories out of someone’s head without them first lying still in a big metal tube for long periods of time.  That’s a little easier to avoid than Facebook.

But what if the court orders it!  Present-day polygraphs are inadmissible evidence in the U.S., and the forecast for fMRI lie-detection doesn’t seem any better.  Anthony Wagner’s further studies show that if a person tries to fool the system by intentionally thinking a familiar scene is foreign (and vice versa), an examiner is no more likely than chance to discern the truth.  The Fifth Amendment really is inalienable!

While we continue to see that fMRI technology can be used to guess general conditions, as with a new study that allows observers to know when a person is in pain, your particular thoughts and memories remain off limits for the time being.  Unless of course you just can’t resist sticking your head in that tube.

What’s the Big Deal About Genetically Modified Organisms (GMO)?

It’s a topic that continues to make headlines and draw visceral reactions.  California’s much discussed Prop 37, a measure meant to require retailers and food companies to label products made with genetically modified ingredients, was dealt a high profile defeat in November of last year.  Similar legislation has failed to pass in other states, though Oregon currently bans the production and import of genetically modified salmon.  Why the concern?  While GMO companies like Monsanto have undeniably engaged in some shady business dealings, does that affect the safety of the products or the production methods?  Are the doubts rooted in science or sentiment?

The first commercially sold genetically modified food, a tomato that takes longer to ripen after picking, hit the market in 1994.  A year later, several food products that were resistant to herbicides or disease joined them.  In the year 2000, scientists were able to increase the nutritional content of a food for the first time, with the creation of golden rice, although its production has been (unnecessarily?) delayed until just now.  Overall though, the practice has been hugely successful, as about 85% of American corn and as much as 91% of soybeans are derived from genetically modified crops.  In fact, the Grocery Manufacturing Association estimates that 70% of items in American food stores contain genetically modified organisms. Betcha didn’t even know.

But really, if you get right down to it, our food crops have been genetically modified since Roman times, and more scientifically since the 1700s.  The process of selective breeding, through which particular animal and plant individuals are bred to emphasize specific traits, is so ubiquitous that many of our most important crops, including wheat, rice and corn, wouldn’t exist in their current forms without it.  Not to mention broccoli, cauliflower, bananas, ad infinitum.  We’ve been deliberately altering the course of animal and plant evolution for centuries, but few people bat an eye when it’s done in a field.  Why the outcry when it’s done in a lab?

[Image]  Well, that’s not deliberately manipulative and meant to elicit negative emotions at all, is it?  An image with an agenda.

The answer may lie in the language of GMO opponents, who often refer to the products as “Frankenstein food.”  Mary Shelley wrote her famous tome in 1818, an early example of the European Romantic era, a time characterized by reactionary thought against the Industrial Revolution and the increasing rationalization of nature.  Many balked at the scientific advances of the day, thinking that man had decided to play God, a theme nowhere more evident than in that novel, where a modern Prometheus creates his own man.  Of course, science and progress have continued since then, and such concerns almost seem silly in hindsight.  How will our GMO mania appear to our descendants in two hundred years?


We shouldn’t let knee-jerk reactions and queasy feelings dictate progress and public policy.  There are certainly issues with genetically modified crops that need to be monitored, such as the possible hastening of antibiotic resistance and the introduction of allergens to other foods, but that’s why better regulation and testing from the Food and Drug Administration is needed, rather than labels meant to scare people away from already approved products.

It also goes to show that the American right wing doesn’t hold a monopoly on anti-science ideas, despite what Chris Mooney and others have suggested.  Unfounded objections from Greenpeace and similar organizations illuminate that U.S. liberals can be just as hard-headed and fact-evasive in certain situations.  See also the opposition to nuclear power.

You Are the Product of 4 Billion Years of Evolutionary Success…. Or Not

“Dude, you’re a dumb-ass.  I’ve got the perfect meme for you:”
[Image: wolf meme]


Ha, yeah, that’s funny man.  Wait, what am I, the wolf?  Are you the wolf?  I mean, I woulda thought you’d go the “humans are the pinnacle of evolution” route, with our big brains and upright stance and twitter and shit.  But wolves are cool too.  I mean really, so is everything.  When you think about it, each individual organism out there in the world is the “product of 4 billion years of evolutionary success.”  (Well, more like 3.5 billion years, but what’s 13% between friends?)  Innumerable generations through unfathomable ages striving, struggling, eating, fucking; all so that bumblebee can buzz by, or the wolf can chow on that rabbit, or you and I can have this conversation.  Everything alive today is the culmination of an incomprehensibly extensive, geologic-scale lineage.

But yeah, human beings are clearly the most successful species on Earth.  Right?  How do you define “success?”  Is it sheer complexity?  We got that down (#awesome).  Okay, so what?  Remember what Stephen Jay Gould pointed out in his book Full House about the “Drunkard’s Walk” (which is also the title of a Leonard Mlodinow book about statistical randomness).  Imagine you get hammered and stumble two steps out the door of the bar.  You’re soused, so you don’t know where you’re going; you’re equally likely to take a step forward as backward.  You can’t go too far back or you’ll hit the wall of the bar.  That’s like the lower limit on complexity.  So you stagger around and eventually, just by chance, you’ll end up way over at the next building, the upper limit of complexity.  Is that an achievement, you fucking lush?  Or was it just that enough time elapsed that you were able to cover all the possibilities?
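Gould’s point is easy to demonstrate with a simulation.  Here’s a minimal sketch (the step counts and lineage counts are arbitrary) of a random walk with a “wall” at zero, where average complexity creeps upward even though nothing in the rules favors it:

```python
import random

random.seed(42)

def drunkards_walk(steps, wall=0):
    """One lineage: complexity steps up or down with equal probability,
    but can never drop below the wall (the minimum needed to be alive)."""
    pos = wall
    for _ in range(steps):
        pos += random.choice([-1, 1])
        pos = max(pos, wall)  # reflecting boundary: can't get simpler than the wall
    return pos

lineages = [drunkards_walk(1000) for _ in range(2000)]
print(f"average final complexity: {sum(lineages) / len(lineages):.1f}")
print(f"most complex lineage: {max(lineages)}")
```

The average drifts away from the wall purely because down-steps get reflected; most lineages still huddle near the wall, and the rare high-fliers are luck, not progress.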

No, asshole, I’m not saying that evolution is random.  Individual mutations are random, but the ones that best adapt an organism to its particular environment are the ones that get selected for.  “Particular” being the important word there.  To oversimplify, wooly coats won’t help at the Equator and wings do you no good underground.  So is any one trait “better” than another?  Depends on where you are.  Sometimes complexity even hurts you.  Snakes ditched their limbs because they likely hindered burrowing, and cavefish traded their eyes for other weapons (unrelated to the Stonecutters’ machinations).  And what if the other guy is ramping up the fitness as fast as you are?  Gazelles get faster, so lions get faster, so gazelles get nimbler, etc., etc.  That doesn’t sound like progress or success.  That sounds a lot like running in place.

All right, forget about success, how about DOMINANCE!  We are EVERYWHERE!  And there’s a lot of us!  Seven billion big animals running around.  We’ve filled ecological niches from the rainforest to the tundra, but bacteria and even simpler forms of life have found ways to withstand the pressures at the bottom of the ocean and the scalding conditions in thermal springs.  They’ve obviously got us beat on pure numbers, and even with our big bodies, we can’t compare to their overall biomass, either.  Bacteria globally outweigh us by somewhere around 5,000 times.
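The arithmetic behind a figure like that 5,000× is simple enough to sketch.  The biomass inputs below are my own rough assumptions from commonly cited order-of-magnitude estimates, not values from any source in the post:

```python
# Rough, assumed inputs (order-of-magnitude only):
people = 7e9
avg_mass_kg = 50                          # averaged over adults and children
human_biomass_kg = people * avg_mass_kg   # ~3.5e11 kg

bacterial_biomass_kg = 2e15               # a commonly cited rough global estimate

ratio = bacterial_biomass_kg / human_biomass_kg
print(f"bacteria outweigh humanity by about {ratio:,.0f} times")
```

With numbers this rough, anything in the low thousands is consistent with the “around 5,000 times” claim; the point is the gap is three to four orders of magnitude, not a close race.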

But we win because we are FINISHED!  At the peak of our evolution, no more changes!  WRONG!  In addition to simple genetic drift, studies suggest that skin and eye colors are still changing, as well as our tolerance for lactose and wheat.  In fact, human evolution may be happening faster than ever.


Biologists actually have a very precise definition of success, and it has to do with a quantifiable value of fitness.  The upshot is that to be truly successful, an organism has to pass on its genes to future generations, and spread them as far as possible.  So in that case, I can think of at least one 4 billion year end-product that’s not successful.  Cause as long as you keep chucking stupid internet memes at people, you’re never getting laid.


The Shrinking Ozone Hole: Could We Similarly Reverse Climate Change?

Let’s set the record straight.  The depletion of the atmospheric ozone layer and the climate change often referred to as “global warming” are separate, largely unrelated processes.  I remember a comedian maybe 10 years ago making (or should I say attempting) a joke about discharging aerosol cans during especially cold winter days to hasten the anthropogenic effect on worldwide temperature increases.  Far be it from me to step on a punchline, but there are a couple things wrong with that.  First, the loss of ozone catalyzed by the chlorine atoms from chlorofluorocarbons (CFC’s), the propellants once used in spray cans, does not accelerate the Earth’s greenhouse effect; it’s the enormous amounts of carbon dioxide gas (and, less directly, methane) we introduce into the environment that pull off that trick.  In fact, man-made ozone can actually act as a greenhouse gas itself when present in the troposphere, the lowest level of our atmosphere, although its impact in that regard is much, much smaller than that of the other offenders.  Ozone depletion instead heightens our risk for skin cancer, as the upper atmospheric layer of the compound functions as something of a shield against carcinogenic ultraviolet radiation from the Sun.
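For the curious, the catalysis mentioned above runs roughly like this (a textbook sketch of the standard chain, not something drawn from the post’s sources); the key is that the chlorine atom comes out the other end intact, ready to destroy more ozone:

```latex
\begin{align*}
\mathrm{CFCl_3} &\xrightarrow{\;h\nu\;} \mathrm{CFCl_2} + \mathrm{Cl} && \text{UV light frees a chlorine atom}\\
\mathrm{Cl} + \mathrm{O_3} &\rightarrow \mathrm{ClO} + \mathrm{O_2} && \text{chlorine destroys an ozone molecule}\\
\mathrm{ClO} + \mathrm{O} &\rightarrow \mathrm{Cl} + \mathrm{O_2} && \text{and is regenerated}\\
\text{net:}\qquad \mathrm{O_3} + \mathrm{O} &\rightarrow 2\,\mathrm{O_2} && \text{ozone lost, chlorine untouched}
\end{align*}
```

Because the chlorine is recycled rather than consumed, a single atom can chew through many thousands of ozone molecules before it’s finally locked away in a stabler compound.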

Secondly, the widespread use of CFC’s in consumer products ceased in 1994, thanks to the internationally instituted Montreal Protocol.  Even their less hazardous temporary replacements, hydrochlorofluorocarbons, will be phased out in much of the world by 2020 and will likely disappear entirely a decade later, to be supplanted by non-destructive hydrofluorocarbons.  Due to these burdensome but necessary measures, a study published in February shows that the Antarctic ozone hole has shrunk to its smallest size in 10 years, and some scientists estimate that stratospheric ozone levels should rebound to pre-1970 levels by the year 2050.  Now that’s a system workin’!  With such an unequivocal environmental success now precedented, could we take similar steps to mitigate that other dilemma, global climate change?  Let’s compare and contrast the histories and natures of the two phenomena to find out.

Chlorofluorocarbons are just what they sound like, compounds that contain only chlorine, fluorine, carbon and hydrogen.  Belgian chemist Frédéric Swarts pioneered their synthesis in the 1890s, and some species were used as fire suppressants during World War II.  The compounds found greater commercial use in refrigerators and aerosol cans in subsequent decades.  University of California at Irvine scientists Frank Sherwood Rowland and Mario J. Molina first suggested the devastating effects of atmospheric chlorine on ozone in 1974, after James Lovelock had discovered that nearly all the poorly reactive CFC’s ever released into the air were still present.  Unsurprisingly, DuPont, a heavy producer of the chemicals and creator of the brand name Freon, called their work “utter nonsense.”  Rowland and Molina were proved right by laboratory experiments and by James G. Anderson’s direct observation of chlorine monoxide, a product of the reaction, in the atmosphere.  For their research on the issue, the pair won the Nobel Prize in Chemistry in 1995.

The United States, Canada and Norway banned the use of CFC’s in aerosol cans in 1978, but not much other action was taken until the “holy shit” moment 7 years later.  That was when the British Antarctic Survey team announced massive springtime losses of ozone over the South Pole, “holes” that in later years would reduce the stratospheric presence of the compound by up to 70%.  The occurrence could have been detected much earlier, but NASA scientists hadn’t noticed the numbers, as values that extreme were automatically eliminated from their data, deemed “impossible.”  That’s how bad things had gotten in a short period of time.  The international community immediately took notice and the Montreal Protocol was opened for signature just two years later.  Atmospheric CFC concentrations continued to rise until the year 2000, due to the continued use of previously produced, non-compliant refrigerators and air conditioners, but as those units have failed over time, we’ve developed the much rosier picture of our future we now see, one that’s been painted by urgent action following smart science.
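That data-filtering trap is worth a tiny illustration.  This is a made-up sketch of the general failure mode, not NASA’s actual pipeline or numbers:

```python
# Hypothetical ozone readings in Dobson units; the low ones are the
# real signal of a forming hole, not instrument error.
readings = [310, 295, 160, 150, 305, 145, 300]

FLOOR = 180  # quality-control threshold: anything lower deemed "impossible"

cleaned = [r for r in readings if r >= FLOOR]
print(cleaned)                      # the ozone-hole readings vanish
print(min(readings), min(cleaned))  # true minimum vs. what analysts saw
```

The automated screen does exactly what it was told to do, which is why extreme outliers should be flagged for human review rather than silently dropped.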


We fixed one, now let’s work on the other.  Images from and Scientific American, respectively

The furor surrounding global climate change has been different in many respects.  One of the main disparities is the speed at which the problems became grave.  Whereas we’ve been pumping large amounts of carbon dioxide into the atmosphere since the Industrial Revolution began in the late 1700s, with only a gradual change in temperature, it was just a decade after the mass production of CFC-containing products that the chlorine they release was shown to pollute the atmosphere, and the titanic damage to the ozone layer was recognized.  That made it easier to pin the effect to the cause.  Unusual warming was first noticed in the United States and North Atlantic in the 1930s, but it wasn’t until 1960 that carbon dioxide levels were proved to be rising.  But climate is a much more complex monster, and there is more than one factor at play.

Other factors affect global temperature, and we know that climate has changed over geologic history without our influence.  Volcanic eruptions, variations in solar radiation and the Milankovitch cycles all play roles, but that really just goes to show how delicate the whole system is and how easily it’s nudged.  We now know that solar activity has actually decreased over recent years, and ancient ice cores have shown us that the correlation between temperature and the atmospheric concentration of carbon dioxide is very strong.  Better and better computer models of the chaotic system have been implemented as technology advances, and now a consensus of 97-98% of climate scientists agrees that global climate change is real and produced by human beings.


It took us longer to prove human-induced change to the atmosphere in the case of global warming, and now that we’ve gotten there, the people who stand to lose out if measures are taken to halt it still deny, just like DuPont did into 1987.  But even if the oil companies and others were on board, what could be done?  The worldwide economy is much more dependent on the burning of fossil fuels, the largest contributor to the problem, than it ever was on fridges and spray cans.  And back then a less harmful substitute was easily found, whereas the research into alternative energy sources is splintered and slow-moving.  Could we use some sort of agreement that forces us to drastically reduce our carbon dioxide emissions, akin to the Montreal Protocol?  I mean something actually useful, unlike Kyoto.  Would it even help?  Some predictions show that even if all carbon emissions stopped today, we couldn’t naturally return to a pre-industrial climate for a thousand years.

That all seems like a major bummer, but the ozone depletion parable has shown us that as good as we are at wrecking our future, we can also find ways to solve the problems we’ve created.  It’s no longer in dispute that puny humans can effect serious changes, and now we also know that taking stringent and serious action can swing the pendulum back.  It’ll take brilliantly creative technologies, but it’s not our ingenuity that’s questionable.  It’s our commitment.  We’ve come together to fix our screw-ups before.  We’ll have to do it again.  Do we need a “holy shit” moment to get our brains and asses into gear?  Don’t brutal heat waves and superstorms fit that bill?

Much of the history of climate change research described here was taken from the wonderful American Institute of Physics resource at this location.

CSI Milky Way: Cosmic Ray Shooters Finally Identified

We are under constant attack and we cannot escape our assailant.  No matter where we hide, each of us is riddled with 30 hyper-speed bullets every second.  You can’t run, either.  We’d probably be dead by the time we got to the neighbor’s.  Don’t bother trying to call for help; the perpetrator also targets our communication satellites.  We’ve known of this unprovoked assault for ages, but only recently have we conclusively proved just who’s out to get us.

The electricity and ionization in the atmosphere were the first clue that something was up.  Something very energetic was disrupting our atoms.  Henri Becquerel’s 1896 discovery of radioactivity seemed to make this an open and shut case, as decay of heavy isotopes within the Earth took the blame.  International intrigue cast doubt on the culprit’s identity in 1909, when German physicist Theodor Wulf took an electrometer to the top of the Eiffel Tower and found the levels of radiation there were actually greater than at the ground surface.  No one on the beat believed him.  The case grew cold.

Victor Hess, though, was unsatisfied.  He wanted to go higher.  In 1912 Hess used a balloon to take three enhanced versions of Wulf’s electrometers to a height of 17,000 feet, where they measured an ionization rate four times what you’d expect at sea level.  Wulf had been right.  The barrage was coming from beyond, not from within.  The Sun became the next suspect, but Hess was able to rule it out by performing the same experiment during a near-total solar eclipse, with the moon blocking much of the Sun’s radiation.  The tricky detective work earned him the Nobel Prize in 1936, but it ultimately left us with more questions than answers.  What was causing the ionization, and who was behind it?

Robert Millikan had already worked his way up the ranks when he picked up the assignment in the 1920s.  Coining the term “cosmic rays,” Millikan believed gamma rays were the offender’s ammunition of choice.  But the ballistics didn’t check out.  In 1927 J. Clay dared to question the respected veteran by pointing out that cosmic rays were more intense at higher latitudes, swept there by the Earth’s magnetic field.  They couldn’t be light waves like gamma radiation; they had to be charged particles.  Experiments in the ’30s, spurred by Bruno Rossi’s insights, showed that cosmic ray intensity was also greater from the west, indicating the particles were positively charged.  The weapon had been found.  High-speed protons.  But what could accelerate the tiny projectiles to such velocities, as high as 95% the speed of light?


Fingering the perp continued to prove difficult.  The line-up over the decades included magnetic variable stars, nebulae, active galactic nuclei and more, but no single scofflaw could be picked out.  Supernovae became the prime suspects, as the expanding envelopes from their explosions could possibly provide the power needed to boost the protons’ speed to deadly levels.  Charged particles get deflected by other matter as they rocket through space, however, making the locations of the shooters hard to pin down.  The forensics were handcuffed.  Technology had to advance to uncover the well-hidden tracks.

Beginning in 2008, Stanford University’s Stefan Funk and his team set up a four-year stakeout with the Large Area Telescope aboard NASA’s Fermi Gamma-ray Space Telescope.  They focused their attention on two supernova remnants in the Milky Way.  As it turned out, gamma rays were the key after all.  While the trajectories of the proton bullets themselves may be too tricky to track, their gamma ray by-products zip right through, unaltered.  In February of 2013 the group announced that the observed energies matched the predictions.  They had a positive ID.


Pop the corks and call the D.A. After a century of hard-nosed investigation, we’ve got our man.  It was a circuitous route to the truth that in a way brought us back to where we started.  It goes to show that when you start tugging on a tangled thread, you never know where it’ll lead.  Gut instincts and gumshoe hunches only get you so far if you don’t have the observations and technology to back them up, though.  Even seemingly intractable cases can be solved given enough progress of time and technique.

I do hope the judge goes easy on the sentence, though.  Despite their cosmic ray malfeasance, supernovae have a history of community service.  It’s thought that many elements heavier than iron can only be produced in stellar explosions, meaning many of our most precious metals wouldn’t be here without them.  Some even speculate that a supernova may have triggered the collapse of the dust cloud that formed the entire solar system to begin with.  What a crazy, mixed up universe we live in, where a progenitor can turn on its own creation.  I’m getting too old for this shit.

Older and Wiser: How the Age of the Universe Has Been Expanded and Refined Over Time

Isaac Newton was arguably the most brilliant scientist in history.  He was certainly unrivaled in his lifetime, during which he invented the reflecting telescope, developed his famous universal laws of gravitation (daring to unite the heavens and the Earth as governed by the same processes) and pioneered the use of mathematical equations in the scientific enterprise, constructing the realm of calculus in the process.  He also used the power of mathematics, and the Bible, to calculate that the Earth was created sometime around 4,000 BC.  Well, even the best can trip up.

This figure also assumed that the world had existed more or less as it is since that beginning, and that the great variations in landscape we see had been caused by catastrophic events such as floods and powerful earthquakes.  James Hutton proposed in 1795 that instead our great canyons and mountain ranges had been formed by the same processes of erosion and weathering we observe today, necessarily taking place over much vaster lengths of time.  Catastrophism thus gave way to gradualism, an idea further cemented by Charles Lyell, one of the pillars of geology.

So the Earth was much older than we thought, but how much older?  In 1897, Lord Kelvin assumed that our world began in a molten state and calculated it would need 20-40 million years to cool to its present temperature.  A helluva lot older than 6,000 years, but still not close by a longshot.  Kelvin didn’t know about radioactive decay, which contributes enormous amounts of heat that “fooled” him into thinking the Earth was younger than it is.  Thankfully we now understand the process well, so much so that we’ve used radiometric dating to finally get a good handle on the planet’s age, a whopping 4.5 billion years, a figure we didn’t arrive at until the 1950s.
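The arithmetic behind radiometric dating is simple exponential decay: if you can measure how much of a parent isotope remains, the half-life tells you the elapsed time. A minimal sketch (the function and scenario are my own illustration, using the accepted uranium-238 half-life):

```python
import math

def radiometric_age(half_life_yr, parent_now, parent_initial):
    """Age from decay law N(t) = N0 * (1/2)**(t / t_half),
    rearranged to t = t_half * log2(N0 / N)."""
    return half_life_yr * math.log2(parent_initial / parent_now)

U238_HALF_LIFE = 4.468e9  # years

# If exactly half of a mineral's original U-238 has decayed away,
# the mineral is one half-life old:
print(radiometric_age(U238_HALF_LIFE, parent_now=1.0, parent_initial=2.0))
# → 4468000000.0, roughly the 4.5-billion-year age of the Earth
```

Real uranium-lead dating cross-checks two independent decay chains in the same crystal, which is part of why the 4.5-billion-year figure is so solid.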

Okay, okay, so the Earth has changed drastically over time, but the universe – now THAT’s eternal and unchanging!  Right?  I mean, that’s what Einstein thought.  He wanted the universe to remain static so badly that when his own general theory of relativity showed that it couldn’t be, he threw in a fudge factor called the “cosmological constant” to force the numbers to behave!  And if there’s one thing we’ve learned, it’s that the most brilliant scientist of his time can’t be wrong!  Wait…

Edwin Hubble refined the first numerical estimates for the universe’s expansion made by Georges Lemaître, and by combining that with the known distances to certain astronomical features, he arrived in 1929 at a cosmic lifespan of 2 billion years, a figure he himself called “suspiciously short,” as many stars seemed older than that and the geologists had already brought the age of the Earth to at least 3 billion years.  Better distance measurements to Cepheid variable stars and quasars in subsequent decades continued to raise the age of the universe, from 6, to 10, to 12 billion years.
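You can see where that “suspiciously short” figure came from with a back-of-the-envelope Hubble time, 1/H0. This sketch uses my own conversion constants and a simplifying assumption (a constant expansion rate, which real cosmological models correct for):

```python
KM_PER_MPC = 3.0857e19    # kilometers in one megaparsec
SECONDS_PER_YEAR = 3.156e7

def hubble_time_gyr(h0_km_s_mpc):
    """Naive age estimate 1/H0, in billions of years, for a Hubble
    constant given in the usual km/s per megaparsec."""
    seconds = KM_PER_MPC / h0_km_s_mpc
    return seconds / SECONDS_PER_YEAR / 1e9

print(hubble_time_gyr(500))   # Hubble's 1929 value: ~2 billion years
print(hubble_time_gyr(67.8))  # a Planck-era value: ~14.4 billion years
```

Hubble’s expansion rate was roughly seven times too high, mostly because his distance ladder was miscalibrated, so his universe came out roughly seven times too young.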

In 2008, the Wilkinson Microwave Anisotropy Probe (WMAP) revealed the most precise appraisal yet, further aging the universe to 13.7 billion years.  And now the European Space Agency’s Planck spacecraft has used the same primordial quantum fluctuations to kick it a tiny bit more, for a final figure of 13.82 billion years.


Thanks in part to the Planck spacecraft, the universe is billions of years older than it was a century ago


The continuing changes to a single piece of information may seem damning at first blush, but it’s really a monument to how beautifully self-correcting science is.  No ideas are sacred, no matter who came up with them, and all positions are open to revision when new or better evidence is presented.  If the argument of an iconoclast is sound, it cannot be ignored.  Not all institutions are so democratic or flexible.

But then again, the wiggly, fiddly findings do also reinforce that science can only offer approximations of the way things are.  No one can provide true certainty, but through the advancement of ideas and technology, our approximations of reality become ever better and the picture of our vast, ancient universe becomes clearer.