Top Science/Law Story of 2012: Manslaughter Conviction of Seismologists

To me, choosing a “top” news story of 2012 that implicates both law and science is easy:  the manslaughter conviction of six Italian scientists (along with a government official) who failed to predict the April 2009 earthquake in L’Aquila that killed 309 people.

But before I discuss that story, I want to provide context with another case that occurred 20 years ago, one that many still point to as an example of all that is wrong with the American tort system.  After all, how can the system be working at all sensibly when a woman is awarded millions of dollars in damages for spilling a cup of coffee on herself?

The story of Stella Liebeck is notorious.  But its facts have been misreported so frequently that it provides an important cautionary tale.  In February 1992, the 79-year-old Albuquerque resident purchased a cup of coffee at the drive-through window of a fast-food restaurant.  She was a passenger in a car driven by her grandson, who stopped the car so she could add cream and sugar.  While steadying the Styrofoam cup between her knees and attempting to remove its plastic lid, she spilled the entire contents into her lap, suffering third-degree burns to her groin and upper legs.  She required an eight-day hospital stay, during which she received skin grafts.  She offered to settle her claim against the McDonald’s Corporation for $20,000 to cover her medical bills, but the offer was refused with a counteroffer of $800.

Evidence at trial showed that the restaurant routinely maintained its coffee at a temperature of about 190°F, a temperature known to cause third-degree burns within seconds of contact with skin and one that the company’s own quality-assurance manager testified was not fit for human consumption.  Other evidence established that coffee is usually served at about 140°F, that the company knew of more than 700 people burned by its coffee over the previous ten-year period, including some with serious third-degree burns, and that it had received numerous complaints from consumers and safety organizations about the unsafe temperature of its coffee.

The jury found the defendant company negligent and also found the plaintiff contributorily negligent, apportioning their responsibility for the harm at 80% and 20%, respectively.  The $200,000 compensatory award was accordingly reduced to $160,000.  But the case’s notoriety arose from the jury’s decision to award two days’ worth of the company’s coffee sales—$2.7 million—as punitive damages, an amount that the judge reduced to triple the compensatory damages, i.e., to $480,000.
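For readers who like to see the numbers laid out, the short sketch below simply restates the arithmetic reported above (the 80/20 apportionment and the judge’s reduction of the punitive award to three times the compensatory damages); nothing in it goes beyond the figures already given.

```python
# Restating the damages arithmetic described above.
compensatory = 200_000               # jury's compensatory award, dollars
plaintiff_share = 0.20               # Ms Liebeck's share of fault
reduced_compensatory = compensatory * (1 - plaintiff_share)
print(reduced_compensatory)          # 160000.0

punitive_jury = 2_700_000            # roughly two days of the chain's coffee sales
punitive_after_reduction = 3 * reduced_compensatory
print(punitive_after_reduction)      # 480000.0
```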

The true lesson of Liebeck v. McDonald’s Corporation is not one that damns the tort system of the United States.  Rather, the lesson is one that demands looking deeper when the news reports stories that appear outlandish.  After all, how can twelve jurors and a judge really award significant damages to a woman for spilling coffee on herself?  It just doesn’t make sense without understanding the actual facts of the case.

I failed to apply that lesson when I commented on the L’Aquila earthquake a year and a half ago here, when manslaughter charges were first brought against the seismologists.  To me, the charges represented little more than a fundamental misunderstanding of science and an unseemly search for a scapegoat.  When I first heard that the charges had resulted in convictions and sentences of six years’ imprisonment for the scientists, my reaction was one of astonishment and a readiness for wholesale condemnation of the Italian legal system.

But if we apply the same sober thought that I advocate in judging Ms Liebeck, a similar question arises—how, really, can anyone condemn scientists to prison terms for failing to predict an earthquake?  It makes no sense.

The reality is that the scientists were not convicted because of their failure to predict the earthquake.  Rather, they were convicted because of their involvement in misleading information being communicated to the public.

Prior to the earthquake, a series of temblors had been shaking the town, causing widespread concern that a large earthquake was imminent.  Concerns were exacerbated because laboratory technician Giampaolo Giuliani had appeared on Italian television about a month before, predicting a major earthquake in the area.  He based his prediction on increased levels of radon emission in the area, a measure that has been studied by seismologists but that has produced only inconsistent results.  Italy’s Commissione Nazionale per la Previsione e Prevenzione dei Grandi Rischi (“National Commission for the Forecast and Prevention of Major Risks”) was accordingly convened to assess the danger and to communicate that assessment to the public.

The public was told, in an interview given by Bernardo De Bernardinis, one of the members of the commission, that the seismic situation in L’Aquila was “certainly normal” and posed “no danger”; he added that “the scientific community continues to assure me that, to the contrary, it’s a favorable situation because of the continuous discharge of energy.”  When asked whether the appropriate response to the series of temblors was to sit back and enjoy a glass of wine, De Bernardinis replied, “Absolutely, absolutely a Montepulciano doc.  This seems important.”  The result has been characterized as a “sigh of relief” that propagated through the town:  “It was repeated almost like a mantra:  the more tremors, the less danger.”

The circumstances under which the statements were made are complicated by several factors, including a perceived need to respond to the admittedly misleading statements that Giuliani had based on radon measurements.  Evidence suggests that, even before the commission meeting was held, a decision had been made of the need “to reassure the public” and to “shut up any imbecile,” the latter presumed to be a reference to the technician.  Evidence also suggests that the statements made by the seismologist members of the commission during the one-hour meeting itself were more appropriately measured than those delivered to the public:  “It is unlikely that an earthquake like the one in 1703 could occur in the short term, but the possibility cannot be totally excluded,” said one member; another noted that “in the seismically active area of L’Aquila, it is not possible to affirm that earthquakes will not occur.”  But the public interview with De Bernardinis was held without the presence of those seismologists, who have subsequently been convicted of manslaughter largely because of their failure to contradict the inaccurate statements made in their name as members of the commission.

It is at least somewhat heartening that the convictions were based not on an inability of scientists to predict earthquakes but on the entirely human decisions about what statements to make to the public.  We can acknowledge that those decisions were imperfect, probably even ill-considered.  Still, even with that greater understanding, the convictions remain troubling, with the penalty for errors of judgment seeming all out of proportion.

The Italian judge responsible for the convictions will release the full reasons for his decision in a matter of weeks, at which time they can be fully evaluated and criticized.  There is no question that his decision will be appealed and that many arguments representing the interests of scientists will be put forth.  We can expect these largely to take the form of appeals to the public’s need for accurate information and to scientists’ need for the freedom to provide that information without fear of unreasonable prosecution.  Guided by the principle that I think is effectively illustrated by the travesty of second-guessed judgment inflicted on Ms Liebeck, it is appropriate to wait for the judge’s full reasoning before becoming too decisively critical.

(Note:  I have been inactive on this blog for too long.  I don’t like making “New Year’s Resolutions,” but hope to be more productive with it in 2013.  Happy New Year!)

You Get What You Pay For

Be careful about provoking scientists.  They can be persistent.

The issue I’m writing about today was first raised a quarter century ago when Henry Herman (“Heinz”) Barschall published two papers in Physics Today about the cost of academic journals.  He performed analyses of various physics journals at the time, based on some widely used metrics that attempt to quantify the “value” of a journal in terms of its reach and citation frequency.  His analyses showed that according to his criteria, physics journals published by the American Institute of Physics (“AIP”) and by the American Physical Society (“APS”), two not-for-profit academic societies devoted to physics in the United States, were more cost effective than other journals, particularly journals produced by commercial publishers.  Copies of his articles can be found here and here.
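To give a sense of the kind of calculation involved, here is a minimal sketch of a cost-effectiveness ratio of the general sort Barschall computed:  price per amount of material published, normalized by a citation-based impact measure.  The exact formula and all of the numbers below are my own illustrative assumptions, not Barschall’s data or his precise method.

```python
# Illustrative sketch only: a cost-effectiveness ratio of the general kind
# at issue.  Journal names and all figures below are hypothetical.
journals = [
    # (name, annual subscription price ($), thousands of characters
    #  published per year, impact factor)
    ("Society Journal A",    5_000, 40_000, 3.0),
    ("Commercial Journal B", 12_000, 25_000, 1.5),
]

for name, price, kchars, impact in journals:
    cost_per_kchar = price / kchars            # dollars per thousand characters
    cost_per_impact = cost_per_kchar / impact  # lower means more cost-effective
    print(f"{name:22s}  ${cost_per_kchar:.3f}/kchar   {cost_per_impact:.3f} $/(kchar*impact)")
```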

His articles prompted a series of litigations in the United States and Europe alleging that they amounted to little more than unfair advertising to promote the academic-society journals with which Barschall was associated.  At the time, he was an editor of Physical Review C, a publication of the APS, and the magazine Physics Today, where he published his findings, was a publication of the AIP.  (In the interest of disclosure, I was also previously an editor of an APS publication and also recently published a paper in Physics Today.  I have also published papers in commercial journals.)  The academic societies ultimately prevailed in the litigations.  This was accomplished relatively easily in the United States because of the strong First Amendment protection afforded to the publication of truthful information, but it was also accomplished in Europe after addressing the stronger laws that exist there to prevent comparisons of inequivalent products in advertising.  A summary of information related to the litigations, including trial transcripts and judicial opinions, can be found here.

The economics of academic-journal publication are unique, and various journals have at times experimented with different models to address the issues that are particular to such publications.  Their primary function is to disseminate research results, and to do so with a measure of reliability obtained through anonymous peer review.  Despite their importance in generating a research record and in providing information of use to policymakers and others, such journals tend to have a small number of subscribers but relatively large production costs.  Consider, for instance, that Physical Review B, the journal I used to work for and one of the journals that Barschall identified as most cost-effective, currently charges $10,430 per year for an online-only subscription to the largest research institutions, and charges more if print versions are desired.

While commercial journals have generally relied on subscription fees to cover their costs, academic-society journals have been more likely to keep subscription fees lower by imposing “page charges” that require authors to pay a portion of the publication costs from their research budgets.  That potential impact on research budgets has sometimes affected researchers’ decisions about where to submit their papers.  More recent variations on these economic models distinguish among subscribers, charging the highest subscription rates to large institutions where many researchers will benefit from the subscription, and lower rates to small institutions and to individuals.  All of these economic models have more recently needed to compete with the growing practice of posting research papers in central online archives, without fee to either researcher or reader.  The reason traditional journals still exist despite these online archives is that they provide an imprimatur of quality derived from formalized peer review, one still relied on by funding bodies and other government agencies.

As reported this weekend in The Economist, Cambridge University mathematician Timothy Gowers wrote a blog post last month outlining his reasons for boycotting research journals published by the commercial publisher Elsevier.  A copy of his post can be read here.  The post has prompted more than 2700 researchers to sign an online pledge to boycott Elsevier journals by refusing to submit their work to them for consideration, refusing to referee for them, and refusing to act as editors for them.  My objective is not to express an opinion on whether such a boycott is wise or unwise.  The fact is that journals compete for material to publish and for subscriptions.  While they accordingly attempt to promote distinctions that make them more valuable than other journals, they are also affected by the way their markets respond to those distinctions.

What is instead most interesting to me in considering the legal aspects of this loose boycott is the extent to which it is driven by general support among commercial publishers for the Research Works Act, a bill introduced in the U.S. Congress in December that would limit “open access” to papers developed from federally funded research.  The Act is similar to bills proposed in 2008 and 2009.  Specifically, it would prevent federal agencies from depositing papers into a publicly accessible online database if they have received “any value-added contribution, including peer review or editing” from a private publisher, even if the research was funded by the public.

The logic against the Act is compelling:  why should the public have to pay twice, once to fund the research itself and again to be able to read the results of the research it paid for?  But this logic is very similar to the arguments raised decades ago against the Bayh-Dole Act, which allowed for patent rights to be vested in researchers who developed inventions with public funds.  The Bayh-Dole Act also requires the public to pay twice, once to fund the research itself and again in the form of higher prices for products that are covered by patents.  Yet by all objective measures, the Bayh-Dole Act has been a resounding success, leading to the commercialization of technology in a far more aggressive way than had been the case when the public was protected from such double payments — and this commercialization has been of enormous social benefit to the public.

The Bayh-Dole Act has been successful because it provides an incentive for the commercialization of inventions that was simply not there before.  This experience should not be too quickly dismissed.  In the abstract, the Research Works Act is probably a good idea because it provides an incentive for publishers to provide quality-control mechanisms in the form of peer review that allows papers to be distinguished from the mass of material now published on the Internet without quality control.  This is worth paying for.  But the Act also has to be considered not in the abstract, but instead in the context of what not-for-profit academic societies are already doing in providing that quality control.

The ultimate question really is:  Are commercial publishers providing a service that needs to be protected so vitally that it makes sense to subject the public to double payment?  The history of the Bayh-Dole Act proves that the intuitive answer of “no” is not necessarily the right one.  But what the commercial publishers still need to prove in convincing the scientific community that the answer is “yes” is, in some respects, no different than the issue Heinz Barschall raised 25 years ago.

Tending Towards Savagery

I am no particular fan of the monarchy, but Prince Charles was given a bad rap in 2003 when he called for the Royal Society to consider the environmental and social risks of nanotechnology.  “My first gentle attempt to draw the subject to wider attention resulted in ‘Prince fears grey goo nightmare’ headlines,” he lamented in 2004.  Indeed, while somewhat misguided, the Prince’s efforts to draw attention to these issues were genuine and not far from mainstream perceptions that scientists sometimes become so absorbed with their discoveries that they pursue them without sober regard for the potential consequences.  A copy of his article can be read here, in which he claims never to have used the expression “grey goo,” and in which he makes a reasonable plea to “consider seriously those features that concern non-specialists and not just dismiss those concerns as ill-informed or Luddite.”

It is unfortunate that the term “grey goo” has become as inextricably linked with nanotechnology as the term “frankenfood” has become associated with food derived from genetically modified organisms.  The term has its origins in K. Eric Drexler’s 1986 book Engines of Creation:

[A]ssembler-based replicators will therefore be able to do all that life can, and more.  From an evolutionary point of view, this poses an obvious threat to otters, people, cacti, and ferns — to the rich fabric of the biosphere and all that we prize…. 

“Plants” with “leaves” no more efficient than today’s solar cells could out-compete real plants, crowding the biosphere with an inedible foliage.  Tough, omnivorous “bacteria” could out-compete real bacteria:  they could spread like blowing pollen, replicate swiftly, and reduce the biosphere to dust in a matter of days.  Dangerous replicators could easily be too tough, small, and rapidly spreading to stop…. 

Among the cognoscenti of nanotechnology, this threat has become known as the “gray goo problem.”

Even at the time, most scientists largely dismissed Drexler’s description as unrealistic, fanciful, and needlessly alarmist.  The debate most famously culminated in a series of exchanges in 2003 in Chemical and Engineering News between Drexler and Nobel laureate Richard Smalley, whose forceful admonition was applauded by many: 

You and people around you have scared our children.  I don’t expect you to stop, but I hope others in the chemical community will join with me in turning on the light and showing our children that, while our future in the real world will be challenging and there are real risks, there will be no such monster as the self-replicating mechanical nanobot of your dreams.

 Drexler did, in the end, recant, conceding in 2004 that “[t]he popular version of the grey-goo idea seems to be that nanotechnology is dangerous because it means building tiny self-replicating robots that could accidentally run away, multiply, and eat the world.  But there’s no need to build anything remotely resembling a runaway replicator, which would be a pointless and difficult engineering task….  This makes fears of accidental runaway replication … quite obsolete.”  But too many others have failed to take note, as sadly highlighted by this month’s bombing of two Mexican professors who work on nanotechnology research. 

Responsibility for the most recent bombings, as well as for other bombings in April and May, has been claimed by “Individualidades tendiendo a lo Salvaje” (roughly translated into English as “Individuals Tending Towards Savagery”), an antitechnology group that candidly claims Unabomber Ted Kaczynski as its inspiration.  The group even has its own manifesto, not as long as the Unabomber’s but equally contorted in attempting to justify the use of violence as a means of opposing technological progress.  A copy of the original manifesto can be read here and an English translation can be found here.

The manifesto references Drexler when it cites the absurd rationale for the group’s violence: 

[Drexler] has mentioned … the possible spread of a grey goo caused by billions of nanoparticles self-replicating themselves voluntarily and uncontrollably throughout the world, destroying the biosphere and completely eliminating all animal, plant, and human life on this planet.  The conclusion of technological advancement will be pathetic, Earth and all those on it will have become a large gray mass, where intelligent nanomachines reign.

No clear-thinking person supports the group’s use of violence.  But at the same time, there are many nonscientists who are suspicious of the motivations that underlie much of scientific research.  One need only look at recent news to understand the source of that distrust:  just this week, the Presidential Commission for the Study of Bioethical Issues released a report detailing atrocities committed by American scientists in the 1940s that involved the nonconsensual infection of some 1300 Guatemalans with syphilis, gonorrhea, or chancroid.  There are many other examples in which scientists have engaged in questionable practices with a secrecy that is counter to the very precepts of scientific investigation.

“Nanotechnology” is a wonderful set of technologies that have already found their way into more than 1000 commercial products sold in the electronics, medical, cosmetics, and other markets.  But even as the use of nanotechnology spreads, many remain concerned that allowing it is unwise, even if they would not go so far as to bomb the scientists working on the technology.  Here I find myself sympathetic with the real message that Prince Charles was attempting to spread — namely, that the concerns of the nonscientist public need to be addressed, even if those concerns seem ill-conceived.

The Final Frontier

Circling the earth in the orbital spaceship, I marveled at the beauty of our planet.  “People of the world!  Let us safeguard and enhance this beauty — not destroy it!”   —Yuri Gagarin


It was 50 years ago today that the 108-minute orbital flight of Yuri Gagarin ushered in the modern space era.  On April 12, 1961, the 27-year-old Gagarin made his way in the early morning to the Baikonur Cosmodrome in what is now Kazakhstan.  The launch pad from which he took off in the rocket carrying the single-man Vostok 1 spacecraft remains in use today:  the latest crew of the International Space Station was launched from the same site last week, and to this day cosmonauts ritually stop on the way to “take a leak,” just as Gagarin did that morning.  Gagarin completed a single orbit before returning to Earth, ejecting himself from the craft at an altitude of about 4 miles and landing by parachute.  Only a few years later, in 1968, Gagarin would die in a crash during a routine training flight, shortly after he had been scheduled for a second mission into space.

The “space race” that his flight launched drew humanity together at a time when the world was plagued by the political divisions of the Cold War.  To be sure, there was competition between Americans and Soviets in reaching landmark achievements in the exploration of space, but the world also saw the accomplishments of Gagarin, Armstrong, and others more majestically as the accomplishments of Man.  Many of my personal friends were influenced to pursue careers in astronomy and physics because of the excitement of exploration those role models exemplified.  And it is with a certain sadness that they note that it has been almost 40 years (December 19, 1972) since a human being walked on the surface of the Moon.  Like all things, the nature of Man’s relationship with space has changed, as perhaps most iconically exemplified at the moment by the planned termination of the U.S. Space Shuttle program.

Today, the most pressing concerns for outer space are not its exploration so much as its commercial uses.  There are the numerous satellites that have been placed in orbit over the years to provide telecommunications services, resulting in the need to manufacture uplink and downlink terminals, transponders, mobile satellite telephone units, direct-to-home receivers, and other components in addition to the satellites themselves.  There is the use of satellite imagery in the fields of agriculture, geology, forestry, biodiversity conservation, military intelligence, and others, as exemplified by the GeoEye, DigitalGlobe, Spot Image, RapidEye, and ImageSat International projects.  There is the proliferation of satellite navigation systems, with the Global Positioning System in the United States and the development of similar systems in Russia (GLONASS), China (Compass), and Europe (Galileo).  There is the current development of high-altitude platforms, quasi-stationary aircraft that may be deployed at altitudes of 17 – 22 km to provide services for several years.  There are even examples of space tourism, as exemplified by Dennis Tito’s tourist flight to the International Space Station in 2001; several companies are now planning “economical” suborbital flights to altitudes of some 100 – 160 km so that tourists can experience the weightlessness and striking views of being in outer space.

But where exactly is outer space?  The question is not an idle one, because the answer determines what law applies:  is it the law embodied in one of the five U.N. treaties related to space, or is it a national aviation or other law of the sovereign territory “below” the relevant location?  Historically, property law was deceptively simple:  “Cuius est solum, eius est usque ad coelum et ad inferos” (“the owner of the land owns everything up to the sky and down to the center of the earth”).  The simple idea that each of us owns all of the airspace above our homes is a quaint one, but hopelessly unrealistic in modern times.

As a principle of private ownership, usque ad coelum was soundly rejected by the U.S. Supreme Court in United States v. Causby, in which Thomas Lee Causby complained that military aircraft flying 83 feet above his property on their approach to a nearby Greensboro airport during World War II so frightened his chickens that he was forced to abandon his farm business.  The Supreme Court held that the airspace was a “public highway,” and that while a landowner might be entitled to compensation from the government, he has no right to prevent use of the airspace.  A copy of the decision can be found here.

The doctrine retains relevance in the form of national rights.  The 1944 Chicago Convention on International Civil Aviation asserts that “[e]very state has complete and exclusive sovereignty over airspace above its territory,” leading on occasion to international disputes when aircraft intentionally or accidentally enter another country’s airspace.  A copy of the Convention can be found here.

But just as Causby was frustrated by national rights superseding his private rights, so too nations may be frustrated by having a limit to the extent of their airspace rights.  The Outer Space Treaty rejects national rights over outer space, declaring that “the exploration and use of outer space shall be carried out for the benefit and in the interests of all countries and shall be the province of all mankind.”

So far, there is no internationally recognized boundary where national airspace ends and outer space begins.  When the topic has come up in past international discussions, it has generally been decided that there was no pressing need for a hard definition.  Indeed, the topic was again one focus of the 50th session of the Legal Subcommittee of the U.N.’s Committee on the Peaceful Uses of Outer Space last week.  During that session a number of potential ways of defining outer space were considered, including both physical definitions and functional definitions.  The various definitions that have been floated over the years appear to be converging around an altitude of 100 km, particularly at the von Kármán line, where the Earth’s atmosphere becomes too thin for aeronautical purposes.  It is at the von Kármán line that a vehicle would have to travel faster than orbital velocity to derive adequate aerodynamic lift from the atmosphere to support itself.
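To make the von Kármán criterion concrete, here is a rough numerical sketch that finds the altitude at which the speed needed to generate enough aerodynamic lift catches up with orbital velocity.  The vehicle parameters and the simple exponential atmosphere are assumptions chosen purely for illustration, not part of any official definition; with these particular numbers the crossover comes out near 80 km, and the conventional 100 km figure is essentially a round-number convention in the same neighborhood.

```python
# Illustrative sketch of the von Karman criterion: find the altitude where the
# airspeed required for aerodynamic lift reaches circular orbital velocity.
# Vehicle parameters and the exponential atmosphere are assumed values.
import math

MU = 3.986e14                    # Earth's gravitational parameter, m^3/s^2
R_EARTH = 6.371e6                # mean Earth radius, m
RHO0, SCALE_H = 1.225, 7640.0    # sea-level density (kg/m^3), scale height (m)

MASS, WING_AREA, CL = 1000.0, 20.0, 0.5   # hypothetical vehicle
G = 9.81

def density(h):
    """Crude exponential model of atmospheric density at altitude h (m)."""
    return RHO0 * math.exp(-h / SCALE_H)

def lift_speed(h):
    """Speed at which lift 0.5*rho*v^2*S*CL equals the vehicle's weight."""
    return math.sqrt(2 * MASS * G / (density(h) * WING_AREA * CL))

def orbital_speed(h):
    """Circular orbital speed at altitude h."""
    return math.sqrt(MU / (R_EARTH + h))

# Scan upward until staying aloft aerodynamically would require orbital speed.
h = 0.0
while lift_speed(h) < orbital_speed(h):
    h += 100.0
print(f"crossover near {h/1000:.0f} km")   # ~80 km with these illustrative numbers
```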

It is worth noting that even at altitudes far greater than 100 km, there are already disputes.  The geostationary orbit has a period equal to the Earth’s rotational period, so that satellites placed in that orbit appear stationary relative to the Earth.  It occurs directly above the geographic equator at an altitude of about 36,000 km.  In 1976, eight countries through which the equator passes (Brazil, Colombia, Ecuador, Indonesia, Congo, Kenya, Uganda, and Zaire) signed the Bogota Declaration to assert their claim that the geostationary orbit is a “scarce natural resource” that is not a part of outer space.  Since the Declaration was signed, other equatorial nations have asserted claims of ownership to their overhead geostationary arcs.  Thus far, the Declaration has been ignored by nations wishing to place satellites in the geostationary orbit, and while the issue of the Bogota Declaration is repeatedly discussed at the U.N., it has been given no legal recognition.  A copy of the Declaration may be found here.
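The roughly 36,000 km figure can be checked with a quick application of Kepler’s third law, using one sidereal day as the orbital period; the sketch below is just that back-of-envelope check.

```python
# Back-of-envelope check of the geostationary altitude quoted above:
# r = (mu * T^2 / (4*pi^2)) ** (1/3), with T equal to one sidereal day.
import math

MU = 3.986e14         # Earth's gravitational parameter, m^3/s^2
T_SIDEREAL = 86164.1  # sidereal day, s
R_EARTH = 6.371e6     # mean Earth radius, m

r = (MU * T_SIDEREAL**2 / (4 * math.pi**2)) ** (1 / 3)
altitude = r - R_EARTH
print(f"geostationary altitude = {altitude/1000:.0f} km")  # about 35,800 km
```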

Even though only a handful of humans have been in outer space, it has always held, and still holds, a fascination for us.  Just as we do, our ancient ancestors looked up at the sky — the Sun, the Moon, the stars — and saw reflections of every aspect of our humanity, whether romance or war.  To me, the legal issues of how we deal with outer space are, in their own way, just as fascinating as the scientific ones.

Of Hares and Lions

Alarmed at sound of fallen fruit
A hare once ran away
The other beasts all followed suit
Moved by that hare’s dismay.

They hastened not to view the scene,
But lent a willing ear
To idle gossip, and were clean
Distraught with foolish fear.

The quotation is a translation from The Jataka, a body of Indian literature that relates previous births of the Buddha and dates to somewhere around the third or fourth century BC.  The story is perhaps the origin of the more modern story of Chicken Little, and tells the parable of a hare that lived at the base of a vilva tree.  While the hare was idly wondering what would become of him should the earth be destroyed, a vilva fruit fell onto a palm leaf with a thud, and the hare concluded that the earth was collapsing.  He spread his worry to other hares, then to larger mammals, all of them fleeing in panic that the earth was coming to an end.

When radiation is discussed in the news, I often think of the story of the hare and the vilva tree.  It seems that we are perpetually confronted with calls to overregulate radiation based on irrational fears of its effects on the human body.  Here I mentioned the persistent fears about radiation from cell phones and the passage of legislation in San Francisco last year requiring retailers to display radiation-level information when selling such devices, but there are many other examples that appear repeatedly in the news:  radiation from power lines, from computer screens, from cell-phone towers, and from any number of common household devices has at times been alleged to cause cancer.  All of these allegations have been uniformly discredited in thousands of scientific publications over recent decades, because none of these sources produces radiation at energies sufficient to break chemical bonds, a step that is critical to the mechanism by which radiation causes cancer.
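To put rough numbers on that bond-breaking point, the sketch below compares photon energies E = hf for a few representative frequencies against a typical carbon-carbon bond energy of about 3.6 eV.  The specific frequencies are illustrative values I have chosen for the comparison, not measurements of any particular device.

```python
# Photon energy E = h*f for representative sources, compared against a typical
# C-C single-bond energy (~3.6 eV).  Non-ionizing sources fall short by many
# orders of magnitude; ultraviolet light does not.
PLANCK_EV = 4.136e-15   # Planck's constant in eV*s
BOND_ENERGY_EV = 3.6    # typical carbon-carbon single bond

sources = {
    "power line (60 Hz)":        60.0,
    "cell phone (~1.9 GHz)":     1.9e9,
    "visible light (~5e14 Hz)":  5e14,
    "ultraviolet (~1.5e15 Hz)":  1.5e15,
}

for name, freq in sources.items():
    energy = PLANCK_EV * freq
    print(f"{name:28s} {energy:10.3e} eV   breaks C-C bond? {energy > BOND_ENERGY_EV}")
```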

Most recently, of course, there have been widespread reports of radiation being emitted from the damaged Fukushima Dai-ichi nuclear power plant.  There have been demands for governments to suspend the use of nuclear reactors in generating power, and at least the German government appears to be acceding, with Chancellor Angela Merkel using absurd hyperbole in characterizing what is happening in Fukushima as a “catastrophe of apocalyptic dimensions.”

There are legitimate concerns about radiation.  It does cause cancer in human beings.  Regulating our exposure to harmful radiation is an important and necessary role for governments.  But at the same time, rationality must surely prevail based on accurate scientific understanding of the mechanisms by which radiation causes cancer. 

There are a number of simple facts relevant to the debate. 

First, human beings have evolved in an environment in which we are continually exposed to radiation  — from cosmic rays, the Sun, and the ground all around us — and our bodies are adapted to exist with certain levels of radiation.  Indeed, the biological mechanisms by which we evolved as human beings are intimately related to that radiation exposure.  A wonderful graphic produced by Randall Munroe at xkcd illustrates different exposure levels and can be seen here.  One amusing fact shown in the chart is that consumption of a banana exposes a person to more radiation than living within fifty miles of a nuclear power plant for a year — and is dramatically less than the exposure from the natural potassium that exists in the human body. 
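As a rough quantitative version of that comparison, here are dose figures of the order shown in the xkcd chart; treat them as order-of-magnitude values rather than precise measurements.

```python
# Order-of-magnitude dose comparison (values in microsieverts), of the kind
# illustrated in the xkcd radiation chart linked above.
doses_usv = {
    "eating one banana":                                    0.1,
    "living within 50 miles of a nuclear plant, one year":  0.09,
    "natural potassium in your own body, one year":         390.0,
}

for source, dose in doses_usv.items():
    print(f"{source:52s} {dose:8.2f} microsieverts")
```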

Second, there are only a limited number of viable options for producing energy at the levels demanded by modern society.  These are, basically, coal power generation and nuclear power generation.  Other “clean” technologies that are frequently pointed to, such as wind and solar power generation, are worth pursuing and may one day be efficient enough to provide adequate levels of energy to replace coal and nuclear methods.  But that day is not here yet.  Those technologies are simply incapable of producing energy at the levels needed to support modern society and, in any event, have their own environmental concerns that need to be considered and addressed.  I commented, for example, on some environmental concerns associated with wind power generation here.

Third, with the particular safety mechanisms that are in place, nuclear power generation is safer than coal generation, its only realistically viable alternative.  The xkcd chart cited above notes, for instance, that there is greater radiation exposure from living within 50 miles of a coal power plant for a year than from living within the same distance of a nuclear power plant for a year.  The Clean Air Task Force, an organization that has been monitoring the health effects of energy sources since 1996, released a report last year finding that pollution from existing coal plants is expected to cause about 13,200 deaths per year, in addition to about 9700 hospitalizations and 20,000 heart attacks.  A copy of the report can be read here.  Nuclear power generation is also more environmentally responsible because it does not release climate-changing greenhouse gases in the way that coal power generation does.  When considering policy to reduce or eliminate the use of nuclear power — driven largely because the smaller number of deaths from nuclear power result from isolated high-profile events rather than from the persistent low-level effects that account for the greater number of coal-related deaths — it is important to take these comparisons into account.

I want to offer a final comment about radiation hormesis, which was recently raised most prominently by political commentator Ann Coulter in the context of the Fukushima event, although it has also been raised by other, more scientifically reliable, sources.  For example, Bob Park, a respected physicist who comments regularly on science and government policy, raised it in the context of a recent study that he interprets as showing that “Chernobyl survivors today suffer cancer at about the same rate as others their age [and that t]he same is true of Hiroshima survivors.”  His remarks can be found here.  Radiation hormesis is an effect in which low-level exposure to radiation produces beneficial effects; it has apparently been observed in laboratory settings.  It has not been convincingly confirmed in human beings, and the exposure rates needed to produce the effect are, in any event, low.  My own view is that pointing to radiation hormesis as a positive argument for the use of nuclear power is counterproductive.  The effect is too speculative, and there are too many other — stronger — arguments in its favor.

The Jataka tells us that when the panicked masses led by the timid hare met a lion, it was the lion who restored calm.  He brought them to their senses by looking coldly at the facts and determining that the earth was not breaking apart; it was only the misunderstood sound of a falling vilva fruit.  Let’s be lions, not hares.