Top Science/Law Story of 2012: Manslaughter Conviction of Seismologists

To me, choosing a “top” news story of 2012 that implicates both law and science is easy: the manslaughter conviction of six Italian scientists who failed to predict the April 2009 earthquake in L’Aquila that killed 309 people.

But before I discuss that story, I want to provide context with another case that occurred 20 years ago, one that many still point to as an example of all that is wrong with the American tort system.  After all, how can the system be working at all sensibly when a woman is awarded millions of dollars in damages for spilling a cup of coffee on herself?

The story of Stella Liebeck is notorious.  But its facts have so frequently been misreported that it really provides an important cautionary tale.  In February, 1992, the 79-year-old resident of Albuquerque purchased a cup of coffee through the drive-through of a fast-food restaurant.  She was a passenger in a car driven by her grandson, who stopped the car so she could add cream and sugar.  While supporting the cup between her knees, she attempted to remove the plastic lid from the Styrofoam cup, but spilled the entire cup, suffering third-degree burns in the area of her groin and upper legs.  She required an eight-day hospital stay during which she needed to receive skin grafts.  She wanted to settle her claim against the McDonald’s Corporation for $20,000 to cover her medical bills, but her offer was refused with a counteroffer of $800.

Evidence at trial showed that the restaurant routinely maintained its coffee at a temperature of about 190°F, a temperature known to cause third-degree burns within seconds of contact with skin and one that its own quality-assurance manager testified was not fit for human consumption. Other evidence at trial established that coffee is usually served at about 140°F, that the restaurant knew of more than 700 people previously burned by its coffee over a ten-year period, including some who suffered serious third-degree burns, and that it had received numerous complaints from consumers and safety organizations about the unsafe temperature of its coffee.

The jury found the defendant company negligent and also found the plaintiff contributorily negligent, apportioning their responsibility for the harm at 80% and 20% respectively. The $200,000 compensatory damages award was accordingly reduced to $160,000. But the case’s notoriety arose from the jury’s decision to award two days’ worth of the company’s coffee sales—$2.7 million—as punitive damages, an amount that the judge reduced to triple the compensatory damages, i.e., to $480,000.
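To make the arithmetic concrete, here is a minimal sketch of how the jury’s apportionment and the judge’s subsequent reduction work out, using only the figures reported above (the three-times-compensatory cap reflects the trial judge’s reduction in this particular case, not a general rule):

```python
# Sketch of the damages arithmetic in the Liebeck case, using the figures above.
compensatory_award = 200_000   # jury's compensatory damages
plaintiff_fault = 0.20         # Liebeck found 20% contributorily negligent
punitive_award = 2_700_000     # roughly two days' worth of coffee sales

reduced_compensatory = compensatory_award * (1 - plaintiff_fault)    # comparative-fault reduction
capped_punitive = min(punitive_award, 3 * reduced_compensatory)      # judge's cut to triple the compensatory award

print(f"Compensatory damages after 20% reduction: ${reduced_compensatory:,.0f}")  # $160,000
print(f"Punitive damages after the judge's reduction: ${capped_punitive:,.0f}")   # $480,000
```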

The true lesson of Liebeck v. McDonald’s Corporation is not one that damns the tort system of the United States.  Rather, the lesson is one that demands looking deeper when the news reports stories that appear outlandish.  After all, how can twelve jurors and a judge really award significant damages to a woman for spilling coffee on herself?  It just doesn’t make sense without understanding the actual facts of the case.

I failed to apply that lesson when I commented on the L’Aquila earthquake a year and a half ago here, when manslaughter charges were first brought against the seismologists. To me, the charges represented little more than a fundamental misunderstanding of science and an unseemly search for a scapegoat. When I first heard that the charges had resulted in convictions and six-year prison sentences for the scientists, my reaction was one of astonishment and a readiness for wholesale condemnation of the Italian legal system.

But if we apply the same sober thought that I advocate in judging Ms Liebeck, a similar question arises—how, really, can anyone condemn scientists to prison terms for failing to predict an earthquake?  It makes no sense.

The reality is that the scientists were not convicted because of their failure to predict the earthquake.  Rather, they were convicted because of their involvement in misleading information being communicated to the public.

Prior to the earthquake, a series of temblors had been shaking the town, causing widespread concern that a large earthquake was imminent. Concerns were exacerbated because laboratory technician Giampaolo Giuliani had appeared on Italian television about a month before, predicting a major earthquake in the area. He based his prediction on increased levels of radon emission in the area, a measure that has been studied by seismologists but that has produced only inconsistent results. Italy’s Commissione Nazionale per la Previsione e Prevenzione dei Grandi Rischi (“National Commission for the Forecast and Prevention of Major Risks”) was accordingly convened to assess the danger and to communicate that assessment to the public.

The public was told, in an interview with Bernardo De Bernardinis, one of the members of the commission, that the seismic situation in L’Aquila was “certainly normal” and posed “no danger,” adding that “the scientific community continues to assure me that, to the contrary, it’s a favorable situation because of the continuous discharge of energy.” When asked whether the appropriate response to the series of temblors was to sit back and enjoy a glass of wine, De Bernardinis replied, “Absolutely, absolutely a Montepulciano DOC. This seems important.” The result has been characterized as a “sigh of relief” that propagated through the town: “It was repeated almost like a mantra: the more tremors, the less danger.”

The circumstances under which the statements were made are complicated by several factors, including a perceived need to respond to the admittedly misleading statements that Giuliani had based on radon measurements. Evidence suggests that a decision had been made, before the commission meeting was even held, “to reassure the public” and to “shut up any imbecile,” the latter presumed to be a reference to the technician. Evidence also suggests that the statements made by the seismologist members of the commission during the one-hour meeting itself were more appropriately measured than those delivered to the public: “It is unlikely that an earthquake like the one in 1703 could occur in the short term, but the possibility cannot be totally excluded,” said one member; another noted that “in the seismically active area of L’Aquila, it is not possible to affirm that earthquakes will not occur.” But the public interview with De Bernardinis was held without the presence of those seismologists, who were subsequently convicted of manslaughter largely because they failed to contradict the inaccurate statements made in their name as members of the commission.

It is at least somewhat heartening that the convictions were based not on an inability of scientists to predict earthquakes but on the entirely human decisions about what statements to make to the public. We can even acknowledge that those decisions were imperfect and probably ill-considered. Still, even with that greater understanding, the convictions remain troubling, with the penalty for errors of judgment seeming wholly out of proportion.

The Italian judge responsible for the convictions will release his full reasons for the decision in a matter of weeks, at which time they can be fully evaluated and criticized. There is no question that the decision will be appealed and that many arguments will be put forward by parties representing the interests of scientists. We can expect these largely to speak of the need for the public to have accurate information and for scientists to have the freedom to provide that information without fear of unreasonable prosecution. Guided by the principle that I think is effectively illustrated by the travesty of second-guessed judgment inflicted on Ms Liebeck, it is appropriate to wait for the judge’s full reasoning before becoming too decisively critical.

(Note:  I have been inactive on this blog for too long.  I don’t like making “New Year’s Resolutions,” but hope to be more productive with it in 2013.  Happy New Year!)

Refusing Victory

A few years ago, I traveled to Pisa. It was a pilgrimage of sorts, since I have always had a fascination with the cathedral’s bell tower. The tower is famous around the world because its construction was flawed almost from the moment it began in 1173 AD. Even by the time the second floor was completed, the tower had already begun to sink.

Standing among the tourists, who endlessly posed friends so they would appear to be supporting the leaning tower in photographs, my mind could not help but reflect upon Galileo.  The steps up the tower are well worn from use over the centuries, and I was mindful that Galileo himself had walked them.  And when I was at the top of the tower, I moved to the edge so I could imagine dropping balls of different weights over the edge and watching as they hit the ground at precisely the same time.

Galileo’s famous experiment, in which he is said to have dropped two balls of very different weights from the tower, was a decisive demonstration of the falseness of the Aristotelian theory of gravity, which held that objects fall at speeds proportional to their mass. In August 1971, the experiment was dramatically repeated on the moon by David Scott (the Commander of Apollo 15) using a hammer and a feather. Video of that demonstration can be found here. While many historians question whether Galileo ever actually performed the experiment himself, he remains forever associated with the fundamental commitment of science to the consequences of observable data. He was the one, after all, who pointed his telescope towards Venus and Jupiter to confirm with the evidence of his own eyes that we do not live in a geocentric universe. The fact that others refused even to do something so simple as to look for themselves could never change the reality of the structure of the solar system in which we exist.
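For readers who want to see why the result is independent of mass, here is a minimal back-of-the-envelope sketch (not anything Galileo himself computed); the tower height used is an assumed round figure, and air resistance is ignored:

```python
import math

def fall_time(height_m: float, g: float = 9.81) -> float:
    """Time to fall from rest through height_m metres, ignoring air resistance.

    From h = (1/2) * g * t**2 we get t = sqrt(2h / g); the mass never enters.
    """
    return math.sqrt(2 * height_m / g)

tower_height = 56.0  # assumed height of the tower in metres, for illustration only
print(f"Heavy ball: {fall_time(tower_height):.2f} s")
print(f"Light ball: {fall_time(tower_height):.2f} s")  # identical, contra Aristotle
```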

Galileo is often elevated by scientists to the status of hero because his commitment to observation and his challenge to orthodoxy were made dramatically in the face of persecution by the Catholic Church.  The consequences of this status are at times inconvenient and frustrating — every crackpot who invents a perpetual-motion machine imagines himself to be another misunderstood Galileo.  Most of them are not, of course, but the status of hero for Galileo remains well-deserved as a reminder that science must always be prepared to accept challenges.  That is its very strength and is responsible for the amazing strides in knowledge that it has allowed.

Sometimes we forget this.

This week, the Tennessee Senate passed SB 893, a bill that has been widely characterized as allowing the teaching of creationism in schools.  A copy of the text of the bill can be found here.  While I have no doubt that passage of the bill is viewed as a success by the creationist movement — as a step consistent with its infamous “wedge strategy” to gain a small toehold in science classrooms that can grow in the future — I also think the reflexive condemnation of the bill by scientists is ill-advised.  I wrote about this bill a little over a year ago here.

The text of the bill itself is important and relevant.  There is little in it that is objectively troublesome.  It does not mandate that the teaching of creationism be given “equal time” with the teaching of evolution or make any statement that would improperly elevate creationism to the status of science.  Instead, it requires that teachers “be permitted to help students understand, analyze, critique, and review in an objective manner the scientific strengths and scientific weaknesses of existing scientific theories in the course being taught.”  It goes on to assert that it “only protects the teaching of scientific information, and shall not be construed to promote any religious or non-religious doctrine.”  Surely, any scientist must agree that these statements describe the process of scientific inquiry rather well.

Why do scientists not see this for the victory that it is?

Since 1925, when John Scopes was convicted of violating the Butler Act by teaching evolution in school, much has been achieved. Not only is the teaching of evolution no longer prohibited — although even that prohibition was not repealed in Tennessee until 1967 — but court decisions have also consistently held that creationism is not science (no matter what name it attempts to go by) and that the government cannot compel the teaching of creationism as though it were. Creationists have instead been reduced to needing to confront science on its own terms: through objective analysis, critique, and review.

There is also a danger here that has not been fully recognized. An attempt to suppress any mention of creationism in school science classes has the potential to make science appear doctrinaire — this is already happening in other areas of more direct policy relevance, such as global climate change. But science welcomes objective challenges to its ideas because addressing those challenges makes its ideas more robust. Our heroes, like Galileo, are those who tenaciously confront orthodoxy. Creationism remains so widespread a belief in the United States that students are still exposed to it outside the classroom. Creationists have devised arguments that are sometimes clever and subtle, and that can superficially appear to present genuine scientific challenges to evolution; students deserve to have their questions about how science responds answered. If those answers are not forthcoming in a science class, where do we expect students to become armed with the critical-thinking tools needed to identify and expose crafty but misleading pseudoscientific arguments?

The debate is no longer one in which we are confronted with legislation that attempts to portray creationism as though it were science. If such legislation again rears its head in the future, it should be condemned and challenged using the body of law that has now developed to oppose it. Instead, the debate has shifted to one in which creationism is to be addressed in terms that science not only welcomes but thrives on — objective analysis and criticism. It is a mistake to respond by appearing to suppress dogmatically any dissent from what is increasingly viewed as scientific orthodoxy. Doing so puts us in the position of failing to exploit the real strength of science: an acceptance that anything, on fair and objective grounds, can be challenged.

The Last Place on Earth

Antarctica has been in the news quite a lot recently. We have just passed the 100th anniversary of Amundsen’s and Scott’s attainment of the South Pole. Al Gore recently traveled to the continent as part of his Climate Reality Project. And, most interestingly, Russian scientists finally pierced the 3.8-km-thick ice sheet to reach the surface of Lake Vostok. The project hopes to identify an ecosystem in a lake that has been isolated from the remainder of the planet by the Antarctic ice for some 15 million years.

There are any number of ideas floating around amidst the excitement, mostly centering on the potential for information obtained from Lake Vostok to inform us about patterns of evolution on our own planet and to provide further insight into the possibility that life could evolve on planets or moons having similar conditions.  The Jovian moon Europa, for instance, has an icy crust with a liquid ocean underneath that some astrobiologists have speculated could support life.

Work in Antarctica is highly seasonal, and with the current season now coming to an end, actual collection of water and sediment samples (perhaps using an underwater robot) will not be performed until the Antarctic summer of 2012–13. The Russian research is also likely to be complemented by projects planned for that season by British and American scientists. The British Antarctic Survey plans to cut through the icecap into Lake Ellsworth, while the Americans plan to investigate Lake Whillans.

There is undoubtedly an element of competition among the Russian, British, and American teams that is perhaps reminiscent of the “race for the south pole” between the Norwegian and British teams led by Amundsen and Scott a century ago.  But at the same time, Antarctica is a place where it is easier to set aside national chauvinism in favor of an idealized cooperative approach to science undertaken by a singular humanity.  It is within this context that I want to discuss a question that has a superficially simple answer.

Who owns Lake Vostok?

The easy answer of “no one” is perhaps the answer most commonly given because no national territorial claims are enforced in Antarctica.  But a fuller answer is more complex.  While it is true that no national territorial claims are enforced, that does not mean such claims do not exist.  Indeed, during the early part of the twentieth century, seven nations asserted territorial claims, some of which overlap:  Chile, Argentina, France, Norway, Great Britain, New Zealand, and Australia.  Those claims still exist but have been “frozen” in accordance with a series of agreements that are collectively known as the “Antarctic Treaty System” (who says treaty makers do not have a sense of humor?).

The initial Antarctic Treaty went into effect in June, 1961 and included the United States and the Soviet Union in addition to the seven claimant nations, as well as Belgium, Japan, and South Africa.  While not claimant nations, the United States and the Soviet Union were given special status in Article IV of the treaty as reserving the right to make territorial claims in the future; nations that have subsequently ratified the treaty have agreed not to advance any claims of their own.

All of this continues to be relevant because Antarctica has importance that goes beyond its scientific value.  Fifty percent larger than all of Europe, Antarctica is believed to contain vast stores of mineral resources and — importantly — oil.  The original Antarctic Treaty said nothing about how to treat discoveries of such resources, but the Madrid Protocol, negotiated in 1991, places a 50-year moratorium on mining and oil-exploitation activities in the Antarctic.  That moratorium may be lifted earlier than the 50-year term if there is agreement among certain parties to the treaty.

The original territorial claims, which date back to Britain’s first claim in 1908, were based on traditional legal rationales for asserting sovereignty, including discovery, occupation, geographical proximity, and geographical affinity theories.  Since those territorial claims are merely “frozen” by the Antarctic Treaty System, many of the activities that take place in the Antarctic need to be viewed with a somewhat jaundiced eye.  There is no doubt that the scientific research that takes place is valid and important, but much of the national support of that research is funded with a greater objective of continuing to consolidate territorial claims.

Consider, for example, Emilio Marcos Palma, the first human being born on the continent of Antarctica. Palma, an Argentine national, was born on January 7, 1978; his birth was coordinated by the Argentine government as a form of colonization of the territory it claims. Eight years later, the Chilean government followed suit, arranging for the birth of Juan Pablo Camacho in Antarctica. Both men were born in a part of the continent that is simultaneously claimed by Argentina, Chile, and Great Britain. When I visited Antarctica last month, one of the residents of the British base at Port Lockroy explained to me, with characteristically wry British wit, “The Chilean and Argentinean governments each sent down a pregnant woman to have a baby. But we Brits … we opened a post office!” And indeed, the British do operate a post office at Port Lockroy in that area. Their motivation almost certainly has more to do with solidifying their “frozen” territorial claim than with any genuine need to provide postal services, which are used almost entirely by tourists sending postcards to friends and family.

The author enjoying one of his pastimes in Antarctica

Consider also that the United States operates a base at the South Pole (which also provides a post office), simultaneously straddling the territories of six of the seven claimant nations. It also operates McMurdo Station between the Ross Sea and the Ross Ice Shelf; that base is a veritable small town, with a population of about 1,000 in the summer months. There is no doubt that one consideration in operating these bases is to establish a pattern of colonization that may serve as the basis for a future territorial claim by the United States in accordance with its reserved right under the Antarctic Treaty.

The presence of Russian bases in Antarctica is surely no different, and this fact has not escaped the attention of Australia. Lake Vostok lies within the territory claimed by Australia, an area that encompasses about 42% of the Antarctic continent and is almost the size of the Australian continent itself. (How Australian does “Vostok” really sound, eh, mate?) About six months ago, the Lowy Institute, a private Australian think tank, raised concerns about Australia’s ability to preserve its frozen territorial claim and suggested examining the possibility of involving military personnel in its Antarctic activities. A copy of the paper can be read here. The suggestion of involving the Australian military is delicate because of limitations imposed by the Antarctic Treaty (naval activity on the high seas is generally permissible, but military activity on land or ice shelves is prohibited).

The Antarctic is one of the few truly pristine parts of the planet remaining, and it encompasses a satisfyingly large part of the world. Many idealistically wish that it will always remain so, and the romantic notion that it might has so far been sustained by its extreme inhospitality to human beings. Lake Vostok, for instance, is near the southern “Pole of Cold,” which boasts the lowest temperatures on the planet, having once recorded a temperature as low as –89.2°C (–128.6°F). But it is unrealistic to believe the continent will remain untouched as technology continues to evolve and the resources it houses become potentially more accessible and valuable to nations. The frozen territorial claims are like a bear in hibernation — quiet, peaceful, and slumbering — but spring always eventually comes.

You Get What You Pay For

Be careful about provoking scientists.  They can be persistent.

The issue I’m writing about today was first raised a quarter century ago when Henry Herman (“Heinz”) Barschall published two papers in Physics Today about the cost of academic journals. He performed analyses of various physics journals at the time, based on some widely used metrics that attempt to quantify the “value” of a journal in terms of its reach and citation frequency. His analyses showed that, according to his criteria, physics journals published by the American Institute of Physics (“AIP”) and by the American Physical Society (“APS”), two not-for-profit academic societies devoted to physics in the United States, were more cost-effective than other journals, particularly journals produced by commercial publishers. Copies of his articles can be found here and here.
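For readers unfamiliar with the kind of analysis involved, the comparison amounts to a simple ratio: subscription price normalized by how much material a journal publishes and by how often that material is cited. A hypothetical sketch along those general lines (the journal names and numbers are invented for illustration and are not Barschall’s data):

```python
# Hypothetical sketch of a Barschall-style cost-effectiveness comparison.
# Journals and figures are invented; lower numbers mean better value for subscribers.
journals = [
    # (name, annual subscription in USD, thousands of characters published per year, impact factor)
    ("Society Journal A",     5_000, 40_000, 3.0),
    ("Commercial Journal B", 12_000, 25_000, 1.5),
]

for name, price_usd, kilochars, impact in journals:
    cost_per_kilochar = price_usd / kilochars
    cost_per_kilochar_per_impact = cost_per_kilochar / impact
    print(f"{name}: ${cost_per_kilochar:.3f} per 1000 characters, "
          f"{cost_per_kilochar_per_impact:.3f} after normalizing by impact factor")
```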

His articles prompted a series of litigations in the United States and Europe alleging that they amounted to little more than unfair advertising to promote the academic-society journals with which Barschall was associated. At the time, he was an editor of Physical Review C, a publication of the APS, and the magazine Physics Today, where he published his findings, was a publication of the AIP. (In the interest of disclosure, I was also previously an editor of an APS publication and recently published a paper in Physics Today. I have also published papers in commercial journals.) The academic societies ultimately prevailed in the litigations. This was accomplished relatively easily in the United States because of the strong First Amendment protection afforded to the publication of truthful information, but it was also accomplished in Europe after addressing the stronger laws that exist there to prevent comparisons of inequivalent products in advertising. A summary of information related to the litigations, including trial transcripts and judicial opinions, can be found here.

The economics of academic-journal publication are unique, and various journals have at times experimented with different models to address the issues particular to publishing academic research. Their primary function is to disseminate research results, and to do so with a measure of reliability conferred by anonymous peer review. Despite their importance in generating a research record and in providing information of use to policymakers and others, such journals tend to have a small number of subscribers but relatively large production costs. Consider, for instance, that Physical Review B, the journal I used to work for and one of the journals that Barschall identified as most cost-effective, currently charges $10,430 per year for an online-only subscription to the largest research institutions, and charges more if print versions are desired.

While commercial journals have generally relied upon subscription fees to cover their costs, academic-society journals have been more likely to keep subscription fees lower by imposing “page charges” that require authors to pay a portion of the publication costs from their research budgets. The potential impact on those budgets has sometimes affected researchers’ decisions about where to submit their papers. More recent variations on these economic models distinguish among subscribers, charging the highest subscription rates to large institutions where many researchers will benefit from the subscription, and lower rates to small institutions and to individuals. All of these models have more recently needed to compete with the growing practice of posting research papers in central online archives, without fee to either researcher or reader. The reason traditional journals still exist despite these archives is that they provide an imprimatur of quality derived from formalized peer review, which is still relied on by funding bodies and other government agencies.

As reported this weekend in The Economist, Cambridge University mathematician Timothy Gowers wrote a blog post last month that outlined his reasons for boycotting research journals published by the commercial publisher Elsevier.  A copy of his post can be read here.  The post has prompted more than 2700 researchers to sign an online pledge to boycott Elsevier journals by refusing to submit their work to them for consideration, by refusing to referee for them, and by refusing to act as editors for them.  My objective is not to express an opinion whether such a boycott is wise or unwise.  The fact is that journals compete for material to publish and for subscriptions.  While they accordingly attempt to promote distinctions that make them more valuable than other journals, they are also affected by the way their markets respond to those distinctions.

What is instead most interesting to me in considering the legal aspects of this loose boycott is the extent to which it is driven by general support among commercial publishers for the Research Works Act, a bill introduced in the U.S. Congress in December that would limit “open access” to papers developed from federally funded research. The Act is similar to bills proposed in 2008 and 2009. Specifically, it would prevent federal agencies from depositing papers into a publicly accessible online database if they have received “any value-added contribution, including peer review or editing” from a private publisher, even if the research was funded by the public.

The logic against the Act is compelling: why should the public have to pay twice, once to fund the research itself and again to be able to read the results of the research it paid for? But this logic is very similar to the arguments raised decades ago against the Bayh-Dole Act, which allowed patent rights in inventions developed with public funds to be retained by the universities and researchers that produced them. The Bayh-Dole Act also requires the public to pay twice, once to fund the research itself and again in the form of higher prices for products that are covered by patents. Yet by all objective measures, the Bayh-Dole Act has been a resounding success, leading to the commercialization of technology in a far more aggressive way than had been the case when the public was protected from such double payments — and this commercialization has been of enormous social benefit to the public.

The Bayh-Dole Act has been successful because it provides an incentive for the commercialization of inventions that was simply not there before.  This experience should not be too quickly dismissed.  In the abstract, the Research Works Act is probably a good idea because it provides an incentive for publishers to provide quality-control mechanisms in the form of peer review that allows papers to be distinguished from the mass of material now published on the Internet without quality control.  This is worth paying for.  But the Act also has to be considered not in the abstract, but instead in the context of what not-for-profit academic societies are already doing in providing that quality control.

The ultimate question really is: Are commercial publishers providing a service that needs to be protected so vitally that it makes sense to subject the public to double payment? The history of the Bayh-Dole Act proves that the intuitive answer of “no” is not necessarily the right one. But what the commercial publishers still need to prove to convince the scientific community that the answer is “yes” is, in some respects, no different from the issue Heinz Barschall raised 25 years ago.

Pleading the Fifth

In 1637, John Lilburne was accused of the crime of shipping seditious books into England from Holland. Forced to appear in the now-infamous Court of Star Chamber — so named because of the stars painted on the ceiling of the room in which its proceedings were held — Lilburne refused to swear the “oath ex officio.” That refusal was intimately related to the ultimate abolition of the Court of Star Chamber by the English Parliament in 1641 and to the development of one of the most important modern legal rights, the right against self-incrimination.

John Lilburne

Derived from ecclesiastical Courts of Inquisition, the oath ex officio compelled an individual to answer all questions put to him truthfully, and was typically administered to those about to be accused of a crime but before being advised of the nature and scope of the charges.  Because it was complete in scope, requiring a truthful answer to any question, it became a convenient mechanism for forcing self-incriminating testimony, and was increasingly abused by the Court of Star Chamber as its proceedings became ever more secret and politically oppressive.

Like so many of the men and women who prompted the development of rights that we (perhaps too complacently) now take for granted, Lilburne was, by all accounts, an argumentative and combative man.  One of his contemporaries commented that “if John Lilburn were the last man in the world, John would fight with Lilburne, and Lilburne would fight with John.”  But this characteristic of steadfast defiance was instrumental in effecting the ultimate abolition of the Court.

After several appearances in which he refused to swear the oath, Lilburne was sentenced to a fine of £500, punishment in the pillory, and imprisonment until he acquiesced. He was whipped more than 200 times in the street on his way to the pillory, where he harangued the gathering crowds that no one should be forced to accuse himself of a crime. During his imprisonment, which included at least one four-month period of solitary confinement, he wrote nine pamphlets. As part of these efforts to compel him to swear the oath — before even addressing his alleged crime of importing seditious books — Lilburne spent nearly three years imprisoned. It was only when the Long Parliament met near the end of 1640, inspired by a speech delivered by Oliver Cromwell, that action was taken to release victims of the Court of Star Chamber’s oppression, including Lilburne.

Lilburne’s example was one of several that served to develop the modern concepts of the presumption of innocence, the right not to incriminate oneself, and the recognition that refusing to answer an incriminating question carries no implication of guilt. For hundreds of years now, these have formed part of the bedrock of criminal law, and we are all too aware of the potential for oppression by the state when they are relaxed. In the United States, the right not to incriminate oneself is, of course, enshrined in the Fifth Amendment of the Bill of Rights: “No person … shall be compelled in any criminal case to be a witness against himself.”

That right was implicated in a case in Colorado this week that is interesting because of its involvement of modern electronics technology.  The Fifth Amendment right is being asserted by Ramona Fricosu, who was indicted in 2010 on charges arising from allegedly fraudulent real-estate transactions.  As part of its investigation, the government seized, under warrant, six computers from her home, one of which is a laptop whose contents are encrypted.  Suspecting that the encrypted contents contain information relevant to its investigation and likely to be incriminating, the government has sought to compel Fricosu to type her password into the laptop so that the contents can be read.  She has refused.

This week, the U.S. District Court for the District of Colorado ordered that Fricosu supply an unencrypted copy of the contents of her laptop to the government by February 21, 2012, presumably by typing her password into the device.  A copy of the order can be read here.  If Fricosu continues to refuse, she may be punished for contempt of the court’s order.

There is no doubt that modern digital encryption techniques have an important and legitimate function.  The very existence of electronic commerce depends fundamentally on the use of such techniques, and there have been increasing recommendations issued by electronic-security professionals to routinely encrypt the content of electronic devices as a precaution against loss or theft.  Sensitive personal and business information is now commonly stored on such devices, and encryption is the only effective mechanism for preventing unauthorized access to the information.
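As a concrete (and purely illustrative) example of how routine this protection has become, the following minimal sketch uses the Fernet recipe from the widely used Python cryptography package; it is not a description of the encryption actually at issue in Fricosu’s case:

```python
# Minimal illustration of symmetric encryption with the "cryptography" package.
# Without the key, the stored ciphertext is computationally infeasible to read,
# which is why compelled decryption, not seizure, is the legal question here.
from cryptography.fernet import Fernet

key = Fernet.generate_key()      # in practice the key is derived from or protected by a passphrase
cipher = Fernet(key)

ciphertext = cipher.encrypt(b"sensitive business records")
print(ciphertext)                # unreadable token without the key

plaintext = cipher.decrypt(ciphertext)
print(plaintext)                 # b'sensitive business records'
```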

Fricosu’s case is, in some ways, a difficult one. This is not an instance in which the government has only an inchoate suspicion that there is something incriminating on the laptop; rather, there was sufficient evidence to establish probable cause for issuing a warrant. It is therefore easy to understand the court’s order and to have sympathy with the desire of police simply to read what they have legitimately seized as part of their investigation into a crime that defrauded others.

But at the same time, I have considerable discomfort with the order precisely because of the Fifth Amendment implications.  It is a mistake to view this case purely through the procedural lens of the Fourth Amendment and to be satisfied that the police have complied with all warrant requirements.  History has taught us of the dangers of vesting the state with the power to compel self-incrimination by individuals.  The potential for oppression is sufficiently great that free societies have determined that it is better not to vest the state with that power.  There may be some cases in which the burden on the government to prove guilt is greater, and some even in which guilty persons are not convicted because the burden is too great to satisfy.

But this is the cost we have agreed to pay for maintaining a barrier against abuses that we have recognized as being too seductive to governments in the past.