Tending Towards Savagery

I am no particular fan of the monarchy, but Prince Charles was given a bad rap in 2003 when he called for the Royal Society to consider the environmental and social risks of nanotechnology.  “My first gentle attempt to draw the subject to wider attention resulted in ‘Prince fears grey goo nightmare’ headlines,” he lamented in 2004.  Indeed, while somewhat misguided, the Prince’s efforts to draw attention to these issues were genuine and not far from the mainstream perception that scientists sometimes become so absorbed in their discoveries that they pursue them without sober regard for the potential consequences.  A copy of his article can be read here; in it, he claims never to have used the expression “grey goo” and makes a reasonable plea to “consider seriously those features that concern non-specialists and not just dismiss those concerns as ill-informed or Luddite.” 

It is unfortunate that the term “grey goo” has become as inextricably linked with nanotechnology as the term “frankenfood” has with food derived from genetically modified organisms.  The term has its origins in K. Eric Drexler’s 1986 book Engines of Creation:

[A]ssembler-based replicators will therefore be able to do all that life can, and more.  From an evolutionary point of view, this poses an obvious threat to otters, people, cacti, and ferns — to the rich fabric of the biosphere and all that we prize…. 

“Plants” with “leaves” no more efficient than today’s solar cells could out-compete real plants, crowding the biosphere with an inedible foliage.  Tough, omnivorous “bacteria” could out-compete real bacteria:  they could spread like blowing pollen, replicate swiftly, and reduce the biosphere to dust in a matter of days.  Dangerous replicators could easily be too tough, small, and rapidly spreading to stop…. 

Among the cognoscenti of nanotechnology, this threat has become known as the “gray goo problem.” 

Even at the time, most scientists largely dismissed Drexler’s description as unrealistic, fanciful, and needlessly alarmist.  The debate most famously culminated in a series of exchanges in 2003 in Chemical and Engineering News between Drexler and Nobel laureate Richard Smalley, whose forceful admonition was applauded by many: 

You and people around you have scared our children.  I don’t expect you to stop, but I hope others in the chemical community will join with me in turning on the light and showing our children that, while our future in the real world will be challenging and there are real risks, there will be no such monster as the self-replicating mechanical nanobot of your dreams.

 Drexler did, in the end, recant, conceding in 2004 that “[t]he popular version of the grey-goo idea seems to be that nanotechnology is dangerous because it means building tiny self-replicating robots that could accidentally run away, multiply, and eat the world.  But there’s no need to build anything remotely resembling a runaway replicator, which would be a pointless and difficult engineering task….  This makes fears of accidental runaway replication … quite obsolete.”  But too many others have failed to take note, as sadly highlighted by this month’s bombing of two Mexican professors who work on nanotechnology research. 

Responsibility for the most recent bombings, as well as for other bombings in April and May, has been claimed by “Individualidades tendiendo a lo Salvaje” (roughly translated into English as “Individuals Tending Towards Savagery”), an anti-technology group that candidly claims Unabomber Ted Kaczynski as its inspiration.  The group even has its own manifesto.  It is not as long as the Unabomber’s, but it is equally contorted in attempting to justify the use of violence as a means of opposing technological progress.  A copy of the original manifesto can be read here, and an English translation can be found here.

The manifesto references Drexler when it cites the absurd rationale for the group’s violence: 

[Drexler] has mentioned … the possible spread of a grey goo caused by billions of nanoparticles self-replicating themselves voluntarily and uncontrollably throughout the world, destroying the biosphere and completely eliminating all animal, plant, and human life on this planet.  The conclusion of technological advancement will be pathetic, Earth and all those on it will have become a large gray mass, where intelligent nanomachines reign.

No clear-thinking person supports the group’s use of violence.  But at the same time, there are many nonscientists who are suspicious of the motivations that underlie much of scientific research.  One need only look at the recent news to understand the source of that distrust:  just this week, the Presidential Commission for the Study of Bioethical Issues released a report detailing atrocities committed by American scientists in the 1940’s that involved the nonconsensual infection of some 1300 Guatemalans with syphilis, gonorrhea, or chancroid.  There are many other examples where scientists have engaged in questionable practices with a secrecy that is counter to the very precepts of scientific investigation. 

“Nanotechnology” is a wonderful set of technologies that have already found their way into more than 1000 commercial products sold in the electronics, medical, cosmetics, and other markets.  But even though the use of nanotechnology is spreading, many remain concerned that it is unwise to allow it, even if they would not go so far as to bomb the scientists working on the technology.  Here I find myself sympathetic with the real message that Prince Charles was attempting to spread — namely, that the concerns of the nonscientist public need to be addressed, even if those concerns seem to be ill-conceived.

Eyes Closed Forever in the Sleeping Death

The name of Henry Labouchere is certainly unfamiliar to most, and yet he played a role in one of the great travesties of the 20th century.  I realize that it can be unfair to judge acts of history through the lens of modern morality, but I am going to do so anyway.  In this instance, it is justified. 

When the United Kingdom was amending its criminal laws in 1885, the major thrust of the revisions was to expand the protection of women in an era when they lacked power in any number of respects.  It was an era in which women were not permitted to vote and in which the age of consent for sex was a mere thirteen, with the law imposing only misdemeanor penalties on those who had sex with a girl between the ages of ten and twelve.  The amendments did any number of things that most today would recognize as good and responsible:  they raised the age of consent to 16 and introduced a number of provisions designed to curb the practice of abducting or otherwise procuring young, impoverished girls for prostitution.  The Labouchere Amendment, added quietly to the bill at the last minute, did something quite unrelated:  It criminalized almost all homosexual behavior between men. 

The Labouchere Amendment:  Any male person who, in public or private, commits, or is a party to the commission of, or procures, or attempts to procure the commission by any male person of, any act of gross indecency shall be guilty of a misdemeanour, and being convicted shall be liable at the discretion of the Court to be imprisoned for any term not exceeding two years, with or without hard labour.

Sixty-seven years later, as part of reporting the burglary of his home by a friend of his lover, a man confessed to police that he was having a sexual relationship with another man.  He was convicted under the Labouchere Amendment and given the choice of a year’s imprisonment or probation on the condition that he undergo chemical castration.  He was to take female-hormone injections every week for a year, resulting in a humiliating feminization of his body.  “They’ve given me breasts,” he complained to a friend.  He had once run a marathon in a time that was only 21 minutes shy of the world record, and had one of the most brilliant scientific minds of the 20th century.

The man, of course, was Alan Turing, whose work as a cryptographer at Bletchley Park during World War II was, in no small measure, instrumental to the ultimate victory by allied forces.  His most important contribution was certainly the development of the initial design of the “bombe,” an electromechanical device that allowed the British to determine the settings of the German Enigma machines and that altered the flow of vital naval intelligence information.  While some generously consider the bombe to have been the world’s first computer, it was not programmable in the way we normally think of computers.  But Turing had an impact there as well, expanding on theoretical ideas he had developed before the war to help build some of the earliest programmable computers.  He is often called “the father of the computer,” and his “Turing test” for evaluating the apparent intelligence of computers remains of fundamental importance in the field of artificial intelligence.

There is no question that Turing was eccentric and that he was socially different from others.  But when his country owed him a debt of gratitude for his impact in changing the course of a war and for his role in establishing one of the pillars of modern society, it instead convicted him for his personal and private activities, shaming him with a horrible demasculinization of his body.  It removed his security clearance and banned him from continuing his consultant work with the British intelligence agencies. 

One of Turing’s eccentricities was his peculiar fascination with the tale of Snow White.  When he was found dead at the age of 41, it was beside an apple that had been dipped in cyanide and from which several bites had been taken.  Few doubt that Turing, sickened by what his government had done to his body, deliberately poisoned the apple and ate it as his method of committing suicide.

Dip the apple in the brew,
Let the sleeping death seep through,
Look at the skin,
A symbol of what lies within,
Now turn red to tempt Snow White,
To make her hunger for a bite,
(It’s not for you, it’s for Snow White)
When she breaks the tender peel,
To taste the apple from my hand,
Her breath will still, her blood congeal,
Then I’ll be the Fairest in the Land.

                                               Snow White (Disney, 1938)

June 23 is the 99th anniversary of Alan Turing’s birth.  A year from now, I expect that there will be any number of articles written about him and about the achievements he produced in his tragically shortened life.  We can only speculate about the accomplishments that awaited him and that the world was denied.  In my own way, I want to recognize his eccentricity by commemorating him a year early.

It is a sad testament to our humanity that we have in the past misused, continue today to misuse, and undoubtedly will in the future misuse the power of the law to punish others for the simple crime of failing to conform.  But how much is that nonconformity itself responsible for the vision that men like Turing had in being able to see things the rest of us are blind to?  Why don’t we celebrate the gifts of that diversity instead?

A Mean Act of Revenge Upon Lifeless Clay

Jack Kevorkian died today, and many are commenting about his role in the “right to die” movement.  While I am a supporter of the movement generally, I did not find Kevorkian to be a courageous man.  His actions significantly set back the efforts of others to establish ways for physicians to help the terminally ill end their lives on their own terms and with dignity. 

Consider for a moment the case of Diane, and imagine the circumstances in which she found herself.  She had been raised in an alcoholic family and had suffered a great number of torments in her life, including vaginal cancer as a young woman, clinical depression, and her own alcoholism.  When her physician diagnosed her with myelomonocytic leukemia, she was presented with her options:  She could proceed without treatment and survive for a few weeks, or perhaps even a few months if she was lucky, but the last days of her life would surely be spent in pain and without dignity; it was not how she wanted her friends and family to remember her.  If she accepted the treatment her doctor had discussed, there was a 25% chance of long-term survival, but the treatment itself — chemotherapy, bone marrow transplantation, irradiation — would also rob her of much of what she valued about life, and would likely result in as much pain as doing nothing.  For her, the 25% chance that such treatment would succeed was not worth it.  Others might have differed in their assessment, but this was hers. 

Neither option presented to her — let the disease run its course or accept a treatment she had rejected — was acceptable, and so she considered the unspoken alternative.  Diane’s physician told her of the Hemlock Society, even knowing that he could be subject to criminal prosecution and professional review, potentially losing his license to practice medicine.  But by having a physician who knew her involved in her decision, her mental state could be assessed to ensure that it was well-considered and not a result of overwhelming despair.  Her physician could explain how to use the drugs he prescribed — ostensibly to help her sleep — so that until the time came, she could live her life with confidence that she had control over when to end it.  She could enjoy the short time she had remaining without being haunted by fears that it would be ineffective or result in any number of consequences she did not want.  In the end, Diane died alone, without her husband or her son at her side, and without her physician there.  She did it alone so that she could protect all of them, but died in the way that she herself chose. 

The story of Diane is one that her physician, Dr. Timothy Quill, published in the New England Journal of Medicine in 1991.  A copy of it can be found here.  It was one of the first public accounts of a physician acknowledging that he had aided a patient in taking her own life.  It prompted a debate about the role of physicians at the end of life, and a subsequent study published in the same journal in 1996 found that about 20% of physicians in the United States had knowingly and intentionally prescribed medication to hasten their patients’ deaths. 

But the quiet, thoughtful, and sober approach that Quill and many other physicians brought to the issue of physician-assisted suicide was very much derailed by the grandstanding antics of Kevorkian.  His theatrical flouting of the law, which prompted law-enforcement agencies to focus on making an example of him rather than on seriously considering the merits of his views, was counterproductive to the medical debate. 

Kevorkian’s fascination with death was long a part of his life.  He was not, as many believe, christened with the nickname “Dr. Death” because of his efforts promoting physician-assisted suicide.  That happened long before, during the 1950’s, shortly after he received his medical degree.  While a resident at the University of Michigan hospital, he photographed the eyes of terminally ill patients, ostensibly to identify the actual moment of death as a diagnostic method, but more truly “because it was interesting [and] a taboo subject.”  Later, he presented a paper to the American Association for the Advancement of Science advocating “terminal human experimentation” on condemned convicts before they were executed.  Another of his proposals was to euthanize death-row inmates so that their organs could be harvested for transplantation. 

His views have politely been described as “controversial,” but are perhaps more accurately considered gruesome and bizarre, such as his experiments aimed at transfusing blood from corpses into injured soldiers when other sources of blood were unavailable.  The result of his various investigations was considerable professional damage, causing him to resign or be dismissed from a number of medical centers and hospitals.  His own clinic failed as a business.  For all his current notoriety, Kevorkian was throughout his career considered very much an outsider to the mainstream medical-science community. 

In considering the legacy of Kevorkian, it is important to recognize the long history of the debate over physician-assisted suicide, which dates at least from the days of ancient Greece and Rome.  The modern debate in the United States has its origins in the development of modern anaesthesia.  The first surgeon to use ether as an anaesthetic, J.C. Warren, suggested it could be used “in mitigating the agonies of death.”  In 1870, the nonphysician Samuel D. Williams suggested the use of chloroform and other medications not just to relieve the pain of dying, but to spare a patient that pain completely by ending his life.  Although the proposal came from a relatively obscure person, it attracted attention, being quoted and discussed in prominent journals and prompting significant discussion within the medical profession.  Those discussions culminated in a formal attempt to legalize physician-assisted suicide in Ohio in 1906, although the bill was rejected by the legislature in a vote of 79 to 23. 

Today, three states have legalized the practice of physician-assisted suicide — Oregon, Washington, and Montana.  The history of how that legislation came to pass, and of the various court challenges that have been raised, is fascinating in its own right.  For now, suffice it to say that my own view is that those states legalized the practice because of the courageous efforts of physicians who are largely unknown, not because of the actions of Kevorkian.  Indeed, their courage is all the greater for having achieved as much as they did despite his activities.

Is Your Scientific Malpractice Insurance Paid Up?

“Thanks to his intuition as a brilliant physicist and by relying on different arguments, Galileo, who practically invented the experimental method, understood why only the sun could function as the centre of the world, as it was then known, that is to say, as a planetary system.  The error of the theologians of the time, when they maintained the centrality of the Earth, was to think that our understanding of the physical world’s structure was, in some way, imposed by the literal sense of Sacred Scripture.”

 Pope John Paul II, November 4, 1992

 

Pope John Paul II did for the Catholic Church in 1992 what scientists do every single day in their professional lives:  admit to a mistake in understanding the nature of the universe.  Scientists do it because it is a fundamental part of the scientific method to acknowledge the failings in our understanding of the world, and because of our collective commitment to improving that understanding by refusing to become doctrinaire.  A scientist gains no higher respect from his peers than when he tells them he was mistaken and goes on to share what he has learned from that mistake so that they may continue the advance of knowledge.  It is this fundamental pillar of the scientific method that has been responsible, more than anything else, for its tremendous and astonishing successes. 

As the pope noted in his statement, Galileo was one of those responsible for establishing such a brutal and uncompromising commitment to the evidence of our own eyes and ears in drawing conclusions about the world.  For this, he was condemned by the Church and sentenced to live under house arrest at his farmhouse in Arcetri, where he would have little to do other than grow blind and die.  It would not be until 1835 — more than 200 years after his conviction — that the Vatican would remove his Dialogue Concerning the Two Chief World Systems from its list of banned books, and not until 1992 — more than 350 years after his conviction — that it would formally admit that it was wrong and Galileo was right.  (Some additional commentary that I have previously made about Galileo can be found here.) 

I find it unfortunate that it is again in Italy that ridiculous persecution of scientists is taking place.  It is not the Church this time, but rather the Italian state that is trying to hold scientists to a standard that fails to recognize the fundamental character of the scientific method.  On April 6, 2009, an earthquake struck Italy in the Abruzzo region, killing more than 300 people and damaging thousands of buildings.  About 65,000 people lost their homes, and most of them were forced to live for weeks in makeshift “tendopoli” — tent cities — erected to house the quake refugees, a sad circumstance that Prime Minister Silvio Berlusconi thoughtlessly suggested was an opportunity for them to enjoy a “camping weekend.” 

The region had been experiencing earth tremors for more than ten weeks in advance of the earthquake, and on March 30 a 4.0-magnitude earthquake struck the region.  There was concern among the public that a larger earthquake would follow, as indeed it did a week later.  A meeting of the Major Risks Committee, which provides advice to the Italian Civil Protection Agency on the risks of natural disasters, was held on March 31.  Minutes from the meeting show that the following statements were made about the possibility of a major earthquake in Abruzzo:  “A major earthquake in the area is unlikely but cannot be ruled out”; “in recent times some recent earthquakes have been preceded by minor shocks days or weeks beforehand, but on the other hand many seismic swarms did not result in a major event”; “because L’Aquila is in a high-risk zone it is impossible to say with certainty that there will be no large earthquake”; “there is no reason to believe that a swarm of minor events is a sure predictor of a major shock” — all the sorts of cautious statements made by scientists trying to place their understanding of the real risk in the context of what they know about seismology and what they do not. 

But at a press conference later held by Bernardo De Bernardinis, a government official who was the deputy technical head of the Civil Protection Agency, reporters were told that “the scientific community tells us there is no danger, because there is an ongoing discharge of energy.”  The idea that small seismic events “release energy,” like letting a bit of steam out of a pressure cooker, is one that is soundly rejected by seismologists; the Earth does not function that way. 
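A rough calculation shows why the “pressure cooker” picture fails.  Radiated seismic energy grows very steeply with magnitude (the standard Gutenberg–Richter scaling), so comparing the swarm’s magnitude-4.0 shock with a magnitude-6.3 main shock (the figure widely reported for the April 6 earthquake, and an assumption here rather than something taken from the committee minutes) gives roughly:

$$
\frac{E_{6.3}}{E_{4.0}} = 10^{1.5\,(6.3 - 4.0)} = 10^{3.45} \approx 2800
$$

On that scaling, it would take on the order of a few thousand magnitude-4 events to “discharge” the energy of a single magnitude-6.3 earthquake, which is why seismologists do not treat a swarm of small tremors as a safety valve.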

The bizarre aftermath has been the bringing of charges of manslaughter against De Bernardinis and six seismologist members of the Major Risks Committee for their failure to properly warn the public of the danger.  The charges were brought almost a year ago, but a preliminary hearing was not held until last week because of delays resulting from requests by dozens of those damaged by the earthquake to receive civil compensation from the accused scientists.  Astonishingly, the result of the hearing was not an outright dismissal of the homicide charges, but instead a decision to proceed with a trial that will begin on September 20. 

To my mind, this case is an absurd attack on scientists, demanding an infallibility from them that they never claim.  As one of the indicted seismologists noted, there are hundreds of seismic shocks in Italy every year:  “If we were to alert the population every time, we would probably be indicted for unjustified alarm.”  These scientists face not only potential incarceration for twelve years if they are convicted of manslaughter, but also potential civil liability for property damage resulting from the earthquake.  The fact that this possibility is even being entertained is alarming:  It is likely to have a detrimental effect on the kinds of information scientists are willing to share with the public.  And if there is a realistic potential for civil liability arising from the kinds of statements that scientists routinely make, it may indeed make sense for scientists to seek malpractice insurance.  The very idea, though, that scientific research should be haunted by the threat of legal liability in the way that medicine is already, is more than troubling.

No Nation Was Ever Ruined By Trade

“Canada is a country whose main exports are hockey players and cold fronts.  Our main imports are baseball players and acid rain.”

                                                                                        Pierre Elliott Trudeau

 

One of the accusations frequently leveled at environmentalists is that they are, much like meteorologists, hopelessly fickle.  People remember widespread reports in the 1970’s about the possibility of global cooling and the potential imminent onset of another ice age, when now all the talk is about global warming.  Or they recall something of Paul Ehrlich’s dire predictions that agricultural production would be incapable of supporting the world’s population, which they watched grow by more than a factor of two in concert with the development of an obesity epidemic.  Or they remember how the controversy over acid rain became such an issue between the United States and Canada, so jeopardizing the Canada – U.S. Free Trade Agreement that Prime Minister Brian Mulroney cynically wondered whether it would be necessary to go to war with the United States over the issue. 

No one talks about acid rain these days, at least not the way they used to.  But what changed? 

The impression that many in the public seem to have is that acid rain became an issue in the early 1980’s, when images of dying forests and lakes were widely circulated, and then withered away as climatologists shifted their focus to other issues.  The reality is, of course, very different.  Ever since the dawn of the Industrial Revolution, the effects of acidity in precipitation have been noted, with the term “acid rain” being coined by Robert Angus Smith in 1872.  It is associated with the emission of sulfur-, nitrogen-, and carbon-containing gases as byproducts of industrial processes; those gases produce acidic compounds when they react with water in the atmosphere.  And the reason it is not discussed as widely as it once was is not because the issue mysteriously vanished or because climatologists are opportunistically fickle, but because actions were taken to reduce its impact.
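In simplified form (the actual atmospheric chemistry proceeds through radical intermediates, which are glossed over here), the main acid-forming pathways are:

$$
2\,\mathrm{SO_2} + \mathrm{O_2} \rightarrow 2\,\mathrm{SO_3}, \qquad \mathrm{SO_3} + \mathrm{H_2O} \rightarrow \mathrm{H_2SO_4}
$$

$$
4\,\mathrm{NO_2} + \mathrm{O_2} + 2\,\mathrm{H_2O} \rightarrow 4\,\mathrm{HNO_3}, \qquad \mathrm{CO_2} + \mathrm{H_2O} \rightleftharpoons \mathrm{H_2CO_3}
$$

Sulfuric and nitric acids are the principal contributors to acid precipitation; dissolved carbon dioxide yields only the much weaker carbonic acid that makes even unpolluted rain mildly acidic.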

It was George H. W. Bush who had pledged to become the “environmental president” and who in 1990 supported what was then an innovative approach to reducing targeted emissions.  The approach was one that had been studied theoretically by economists and that attempted to adapt market mechanisms as an indirect form of regulation.  Rather than dictating through strict regulation how emissions should be reduced, the amended Clean Air Act put those market mechanisms in place by establishing what has since become known as a “cap and trade” system.  The basic idea was to limit aggregate sulfur dioxide emissions from different sources, but to permit allowances to be traded so that the market would be involved in determining which sources were permitted to produce emissions within the limits and at what levels.  There were many criticisms of the approach, most notably from environmentalists who fretted that it allowed large polluters to flex their economic muscle in buying permission to pollute. 
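To make the economic logic concrete, here is a minimal sketch in Python.  The firm names, baseline emissions, and abatement costs are invented for illustration; this is a toy of the general mechanism, not a model of the actual 1990 program.

```python
# Toy cap-and-trade illustration with invented numbers.
# Each firm has a baseline emission level (tons) and a constant cost to
# abate each ton.  The regulator caps total emissions; trading lets the
# cheapest abatement happen first, because a firm facing a $900/ton cut
# would rather buy an allowance from one that can cut for $150/ton.

firms = {
    "plant_A": (100, 150),   # (baseline tons, $/ton to abate)
    "plant_B": (100, 400),
    "plant_C": (100, 900),
}

CAP = 180  # total allowances issued (tons)

total_baseline = sum(base for base, _ in firms.values())
required_abatement = total_baseline - CAP  # tons that must be cut overall

remaining = required_abatement
clearing_price = 0
# Cut emissions in order of cheapest abatement cost -- the outcome a
# well-functioning allowance market converges to.
for name, (base, cost) in sorted(firms.items(), key=lambda kv: kv[1][1]):
    cut = min(base, remaining)
    remaining -= cut
    if cut > 0:
        clearing_price = cost  # price set by the most expensive cut actually made
    print(f"{name}: abates {cut} tons at ${cost}/ton")

print(f"Allowance price settles near ${clearing_price}/ton; total emissions = {CAP} tons")
```

The point of the exercise is that the regulator fixes only the total (the cap); where the reductions happen, and at what cost, is left to the market, which is why the same cap can be met far more cheaply than under source-by-source mandates.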

But the program is largely acknowledged to have been a success, not only achieving full compliance in reducing sulfur dioxide emissions but actually resulting in emissions that were 22% lower than mandated levels during the first phase of the program.  This was also achieved at a significantly lower cost than had been estimated, with actual costs now determined to be about 20 – 30% of what had been forecast.  The annual cost of having companies figure out for themselves how to reduce acid-rain emissions has been estimated at about $3 billion, contrasted with an estimated benefit of about $122 billion in avoided death and illness, and healthier forests and lakes.  

The success of the acid-rain program is naturally being considered as a way of addressing the carbon emissions that are associated with global climate change.  Thus far, the United States has rejected a national implementation of cap-and-trade for carbon emissions, leading California to implement one on its own under its Assembly Bill 32, a copy of which can be found here.  Signed by Governor Schwarzenegger in 2006, the bill requires California to reduce the state’s carbon emissions to 1990 levels by 2020.  A copy of California’s plan to do so using an implementation of cap-and-trade can be found here.

Part of what California seeks to do is to improve on a generally failed cap-and-trade program in Europe that began in 2005.  One of the more significant problems with the European implementation was that governments began the program with an inadequate understanding of the level of carbon emissions in their countries.  Too many allowances were issued, and market forces quickly drove the price of carbon to zero by 2007.  In addition, a number of tax-fraud schemes and a recent theft of carbon credits stored in the Czech Republic registry have resulted in justifiable concerns about the European program, and some worry that those problems will affect the California program. 

It is no surprise that the California program has been the subject of litigation, and last week a ruling was issued by the Superior Court in San Francisco agreeing that alternatives to a carbon-market program had not been sufficiently analyzed.  A copy of the ruling can be read here, and a copy of the (much more informative) earlier Statement of Decision can be read here.

There is considerable interest in the California program.  It is decidedly more ambitious than the more limited program implemented by ten states in the northeastern region of the United States and is being considered by some Canadian provinces as well as by some South American countries.  While last week’s decision certainly derails implementation of cap-and-trade in California temporarily, it is difficult to imagine that it will not ultimately be implemented after deficiencies in the studies have been addressed.  There is too much interest in it as a regulatory scheme that can have less adverse economic impact than other forms of regulation even while achieving the same overall objectives.