A Thing of Immortal Make

Today, the President signed the America Invents Act, bringing about the most significant changes to U.S. patent law in more than half a century.  While most commentary centers on the shift by the U.S. to join the rest of the world under a “first to file” system — in which priority for a patent goes to the one who wins the race to the Patent Office rather than the one able to prove he invented something first — I want to focus on a more obscure provision of the Act.

Bear with me while I begin with Greek mythology.  Homer described the Chimera in the Iliad:  “a thing of immortal make, not human, lion-fronted and snake behind, a goat in the middle, and snorting out the breath of the terrible flame of bright fire.”  A monstrous creature, the sight of which foretold any variety of natural disasters, the Chimera was ultimately defeated by Bellerophon, who shot her from the winged horse Pegasus.

One of the achievements of modern biological research has been the ability to create fusions of different organisms by combining embryos from different species — “interspecies chimeras.”  The cells intermix, and the organism continues to include cells from the different species as it grows.  One of the more famous interspecies chimeras is the “geep,” an organism created in 1984 by scientists who fused a sheep embryo with a goat embryo, and which successfully grew to adulthood.  By any measure, the geep is a peculiar-looking creature, with portions of its skin covered in hair (which grew from the goat embryo) and portions covered in wool (which grew from the sheep embryo).  Ever since their creation, many have debated whether the legitimate scientific value of such chimeras outweighs the common initial reaction that they are bizarrely unnatural.

The World's First Geep

Shortly after the cloning of Dolly the sheep in 1997, when public attention was focused on the ability of biologists to circumvent natural processes in the creation of lifeforms, Stuart Newman, a professor at New York Medical College in Valhalla, New York, submitted a patent application for a human-nonhuman chimera.  A copy of the application can be found here and makes for interesting reading (perhaps unusually so for a patent application).  Dr. Newman has always been clear on his motivations for filing such a patent application, asserting that he never had any intention of producing humanzees, bahumans, or any other type of human-nonhuman chimera.  Rather, he was concerned by the legal environment in which the Supreme Court appeared to be giving real effect to the desire expressed by Congress in 1952 that a patent be available for the invention of “anything under the sun that is made by man.”  Indeed, I commented several months ago here that roughly 12% of each of our bodies is estimated to be subject to some form of patent coverage.

Dr. Newman’s motivations in wishing to provoke a thorough consideration of the merits of allowing patents on scientifically engineered metahumans are best expressed in his own words:

As a scientist who came of age in the 1960s, I had witnessed the damage that could be wrought by using the products of research and technology without appropriate constraints.  The list is long….  My objective in filing the application was to help alert a wider public to what was coming down the road in terms of human applications of developmental biology.  In a society with democratic values it should be inarguable that those who pay for scientific research and will eventually experience its effects should be informed of what is in store while there is still a chance to discuss its objectives and influence its course.  As a researcher myself, moreover, I was not oblivious to the possibility of a backlash against my field if it was seen to have violated the social trust.

Dr. Newman’s expectations about the direction of biological research in this area were correct.  In 2003, the first human-nonhuman chimera was created by Chinese scientists, in that instance between humans and rabbits (the embryos were allowed to develop for only several days before being destroyed).  This has been followed by the creation of chimeric human-sheep, human-pig, human-mouse, and other human-nonhuman embryos.

Dr. Newman’s patent application was never granted.  But it did precipitate the very debate he hoped it would.  In April 1998, Commissioner of Patents Bruce Lehman took the highly unusual (and perhaps legitimately criticized) step of announcing to the public that the application would never be allowed, disdaining it as an attempt to patent “half-human monsters.”  That has been the effective policy of the Patent Office ever since; it has asserted that “[i]f the broadest reasonable interpretation of the claimed invention as a whole encompasses a human being, then a rejection … must be made.”  Since 2004, this policy has also received measured support from Congress, which has passed the so-called Weldon Amendment every year as a rider to the Commerce, Justice, and Science appropriations bills:  “None of the funds appropriated or otherwise made available under this Act may be used to issue patents on claims directed to or encompassing a human organism.”

But the Weldon Amendment has been limited to the channeling of federal funds.  Today, with enactment of the America Invents Act, the U.S. government has gone further by declaring that “[n]otwithstanding any other provision of law, no patent may issue on a claim directed to or encompassing a human organism.”  It is worth noting that this is generally consistent with the approach taken by other countries, and that the provision not only excludes human-nonhuman chimeras from patentability but also affects other areas of research involving human embryos and fetuses.  While the Act does not make it unlawful for researchers to investigate human-nonhuman chimeras, the result remains important, since it removes one of the primary legal mechanisms that would make such research profitable.

The last time Congress overhauled the patent system in 1952, it proudly and poetically declared its intention to allow patents on “anything under the sun that is made by man.”  Today, it limits that a little, but in a way that few find objectionable.

Tending Towards Savagery

I am no particular fan of the monarchy, but Prince Charles was given a bad rap in 2003 when he called for the Royal Society to consider the environmental and social risks of nanotechnology.  “My first gentle attempt to draw the subject to wider attention resulted in ‘Prince fears grey goo nightmare’ headlines,” he lamented in 2004.  Indeed, while somewhat misguided, the Prince’s efforts to draw attention to these issues were genuine and not far from the mainstream perception that scientists sometimes become so absorbed in their discoveries that they pursue them without sober regard for the potential consequences.  A copy of his article can be read here, in which he claims never to have used the expression “grey goo,” and in which he makes a reasonable plea to “consider seriously those features that concern non-specialists and not just dismiss those concerns as ill-informed or Luddite.”

It is unfortunate that the term “grey goo” has become as inextricably linked with nanotechnology as the term “frankenfood” has become associated with food derived from genetically modified organisms.  The term has its origins in K. Eric Drexler’s 1986 book Engines of Creation:

[A]ssembler-based replicators will therefore be able to do all that life can, and more.  From an evolutionary point of view, this poses an obvious threat to otters, people, cacti, and ferns — to the rich fabric of the biosphere and all that we prize…. 

“Plants” with “leaves” no more efficient than today’s solar cells could out-compete real plants, crowding the biosphere with an inedible foliage.  Tough, omnivorous “bacteria” could out-compete real bacteria:  they could spread like blowing pollen, replicate swiftly, and reduce the biosphere to dust in a matter of days.  Dangerous replicators could easily be too tough, small, and rapidly spreading to stop…. 

Among the cognoscenti of nanotechnology, this threat has become known as the “gray goo problem.”

Even at the time, most scientists dismissed Drexler’s description as unrealistic, fanciful, and needlessly alarmist.  The debate most famously culminated in a series of exchanges in 2003 in Chemical & Engineering News between Drexler and Nobel laureate Richard Smalley, whose forceful admonition was applauded by many:

You and people around you have scared our children.  I don’t expect you to stop, but I hope others in the chemical community will join with me in turning on the light and showing our children that, while our future in the real world will be challenging and there are real risks, there will be no such monster as the self-replicating mechanical nanobot of your dreams.

 Drexler did, in the end, recant, conceding in 2004 that “[t]he popular version of the grey-goo idea seems to be that nanotechnology is dangerous because it means building tiny self-replicating robots that could accidentally run away, multiply, and eat the world.  But there’s no need to build anything remotely resembling a runaway replicator, which would be a pointless and difficult engineering task….  This makes fears of accidental runaway replication … quite obsolete.”  But too many others have failed to take note, as sadly highlighted by this month’s bombing of two Mexican professors who work on nanotechnology research. 

Responsibility for the most recent bombing, as well as other bombings in April and May, has been claimed by “Individualidades tendiendo a lo Salvaje” (roughly translated into English as “Individuals Tending Towards Savagery”), an anti-technology group that candidly claims Unabomber Ted Kaczynski as its inspiration.  The group even has its own manifesto.  It is not as long as the Unabomber’s but is equally contorted in attempting to justify the use of violence as a means of opposing technological progress.  A copy of the original manifesto can be read here and an English translation can be found here.

The manifesto invokes Drexler in setting out the absurd rationale for the group’s violence:

[Drexler] has mentioned … the possible spread of a grey goo caused by billions of nanoparticles self-replicating themselves voluntarily and uncontrollably throughout the world, destroying the biosphere and completely eliminating all animal, plant, and human life on this planet.  The conclusion of technological advancement will be pathetic, Earth and all those on it will have become a large gray mass, where intelligent nanomachines reign.

No clear-thinking person supports the group’s use of violence.  But at the same time, there are many nonscientists who are suspicious of the motivations that underlie much of scientific research.  One need only look at the recent news to understand the source of that distrust:  just this week, the Presidential Commission for the Study of Bioethical Issues released a report detailing atrocities committed by American scientists in the 1940s that involved the nonconsensual infection of some 1,300 Guatemalans with syphilis, gonorrhea, or chancroid.  There are many other examples where scientists have engaged in questionable practices with a secrecy that is counter to the very precepts of scientific investigation.

“Nanotechnology” is a wonderful set of technologies that have already found their way into more than 1,000 commercial products sold in the electronics, medical, cosmetics, and other markets.  But even as the use of nanotechnology spreads, many remain convinced that it is unwise to allow it, even if they would not go so far as to bomb the scientists working on the technology.  Here I find myself sympathetic to the real message that Prince Charles was attempting to spread — namely, that the concerns of the nonscientist public need to be addressed, even if those concerns seem to be ill-conceived.

Eyes Closed Forever in the Sleeping Death

The name of Henry Labouchere is certainly unfamiliar to most, and yet he played a role in one of the great travesties of the 20th century.  I realize that it can be unfair to judge acts of history through the lens of modern morality, but I am going to do so anyway.  In this instance, it is justified.

When the United Kingdom was amending its criminal laws in 1885, the major thrust of the revisions was to expand the protection of women in an era when they lacked power in any number of respects.  It was an era in which women were not permitted to vote and in which the age of consent for sex was a mere thirteen, with the law providing only misdemeanor penalties for those who had sex with a girl between the ages of ten and twelve.  The amendments did any number of things that most today would recognize as good and responsible:  they raised the age of consent to sixteen and introduced a number of provisions designed to curb the practice of abducting or otherwise procuring young, impoverished girls for prostitution.  The Labouchere Amendment, added quietly to the bill at the last minute, did something quite unrelated:  it criminalized almost all homosexual behavior between men.

The Labouchere Amendment:  Any male person who, in public or private, commits, or is a party to the commission of, or procures, or attempts to procure the commission by any male person of, any act of gross indecency shall be guilty of a misdemeanour, and being convicted shall be liable at the discretion of the Court to be imprisoned for any term not exceeding two years, with or without hard labour.

Sixty-seven years later, as part of reporting the burglary of his home by a friend of his lover, a man confessed to police that he was having a sexual relationship with another man.  He was convicted under the Labouchere Amendment and given the choice of a year’s imprisonment or probation on the condition that he undergo chemical castration.  He was to take female-hormone injections every week for a year, resulting in a humiliating feminization of his body.  “They’ve given me breasts,” he complained to a friend.  He had once run a marathon in a time that was only 21 minutes shy of the world record, and had one of the most brilliant scientific minds of the 20th century.

The man, of course, was Alan Turing, whose work as a cryptographer at Bletchley Park during World War II was, in no small measure, instrumental to the ultimate victory of the Allied forces.  His most important contribution was certainly the development of the initial design of the “bombe,” an electromechanical device that allowed the British to determine the settings of the German Enigma machines and thereby opened the flow of vital naval intelligence.  While some generously consider the bombe to have been the world’s first computer, it was not programmable in the way we normally think of computers.  But Turing had an impact there too, expanding on theoretical ideas he had developed before the war to help build some of the earliest programmable computers.  He is often called “the father of the computer,” and his “Turing test” for evaluating the apparent intelligence of machines remains of fundamental importance in the field of artificial intelligence.

There is no question that Turing was eccentric and socially different from others.  But when his country owed him a debt of gratitude for his part in changing the course of a war and for his role in establishing one of the pillars of modern society, it instead convicted him for his personal and private activities, shaming him with a horrible demasculinization of his body.  It removed his security clearance and barred him from continuing his consulting work with the British intelligence agencies.

One of Turing’s eccentricities was his peculiar fascination with the tale of Snow White.  When he was found dead at the age of 42, it was beside an apple that had been dipped in cyanide and from which several bites had been taken.  Few doubt that Turing, sickened by what his government had done to his body, deliberately poisoned the apple and ate it as his method of committing suicide.

Dip the apple in the brew,
Let the sleeping death seep through,
Look at the skin,
A symbol of what lies within,
Now turn red to tempt Snow White,
To make her hunger for a bite,
(It’s not for you, it’s for Snow White)
When she breaks the tender peel,
To taste the apple from my hand,
Her breath will still, her blood congeal,
Then I’ll be the Fairest in the Land.

                                               Snow White and the Seven Dwarfs (Disney, 1937)

June 23 is the 99th anniversary of Alan Turing’s birth.  A year from now, I expect that there will be any number of articles written about him and about the achievements of his tragically shortened life.  We can only speculate about the further accomplishments the world was denied.  In my own way, I want to recognize his eccentricity by commemorating him a year early.

It is a sad testament to our humanity that we have misused the power of the law in the past, continue to misuse it today, and undoubtedly will misuse it in the future to punish others for the simple crime of failing to conform.  But how much is that nonconformity itself responsible for the vision that allowed men like Turing to see things the rest of us are blind to?  Why don’t we celebrate the gifts of that diversity instead?

A Mean Act of Revenge Upon Lifeless Clay

Jack Kevorkian died today, and many are commenting on his role in the “right to die” movement.  While I am a supporter of the movement generally, I did not find Kevorkian to be a courageous man.  His actions had a significant detrimental impact on the efforts of others to provide ways for physicians to help the terminally ill end their lives on their own terms and with dignity.

Consider for a moment the case of Diane, and imagine the circumstances she found herself in.  She had been raised in an alcoholic family and had suffered a great number of torments in her life, including vaginal cancer as a young woman, clinical depression, and her own alcoholism.  When her physician diagnosed her with myelomonocytic leukemia, she was presented with her options:  She could proceed without treatment and survive for a few weeks, or perhaps even a few months if she was lucky, but the last days of her life would surely be spent in pain and without dignity; it was not how she wanted her friends and family to remember her.  If she accepted the treatment her doctor had discussed, there was a 25% chance of long-term survival, but the treatment itself — chemotherapy, bone marrow transplantation, irradiation — would also rob her of much of what she valued about life, and would likely result in as much pain as doing nothing.  For her, the 25% chance that such treatment would succeed was not worth it.  Others might have differed in their assessment, but this was hers.

Neither option presented to her — letting the disease run its course or accepting a treatment whose costs she was unwilling to bear — was acceptable, and so she considered the unspoken alternative.  Diane’s physician told her of the Hemlock Society, even knowing that he could be subject to criminal prosecution and professional review, potentially losing his license to practice medicine.  But because a physician who knew her was involved in her decision, her mental state could be assessed to ensure that her choice was well-considered and not the result of overwhelming despair.  Her physician could explain how to use the drugs he prescribed — ostensibly to help her sleep — so that until the time came, she could live her life with confidence that she had control over when to end it.  She could enjoy the short time she had remaining without being haunted by fears that the attempt would be ineffective or bring consequences she did not want.  In the end, Diane died alone, without her husband or her son at her side, and without her physician there.  She did it alone so that she could protect all of them, but she died in the way that she herself chose.

The story of Diane is one that her physician, Dr. Timothy Quill, published in the New England Journal of Medicine in 1991.  A copy of it can be found here.  It was one of the first public accounts of a physician acknowledging that he had aided a patient in taking her own life.  It was to prompt a debate about the role of physicians at the end of life, and a subsequent study published by the same journal in 1996 found that about 20% of physicians in the United States had knowingly and intentionally prescribed medication to hasten their patients’ deaths. 

But the quiet, thoughtful, and sober approach adopted by Quill and many other physicians to the issue of physician-assisted suicide was very much derailed by the grandstanding antics of Kevorkian.  His theatrical flouting of the law, which prompted law-enforcement agencies to make an example of him rather than seriously consider the merits of his views, was counterproductive to the medical debate.

Kevorkian’s fascination with death was a long-standing part of his life.  He was not, as many believe, christened with the nickname “Dr. Death” because of his efforts promoting physician-assisted suicide.  That happened long before, during the 1950s, shortly after he received his medical degree.  While a resident at the University of Michigan hospital, he photographed the eyes of terminally ill patients, ostensibly to identify the actual moment of death as a diagnostic method, but more truly “because it was interesting [and] a taboo subject.”  Later, he presented a paper to the American Association for the Advancement of Science advocating “terminal human experimentation” on condemned convicts before they were executed.  Another of his proposals was to euthanize death-row inmates so that their organs could be harvested for transplantation.

His views have politely been described as “controversial,” but much of his work — such as his experiments aimed at transfusing blood from corpses into injured soldiers when other sources of blood were unavailable — is perhaps more accurately described as gruesome and bizarre.  The result of his various investigations was considerable professional damage, causing him to resign or be dismissed from a number of medical centers and hospitals.  His own clinic failed as a business.  For all his current notoriety, Kevorkian was throughout his career considered very much an outsider to the mainstream medical-science community.

In considering the legacy of Kevorkian, it is important to recognize the long history of the debate over physician-assisted suicide, which dates at least from the days of ancient Greece and Rome.  The modern debate in the United States has its origins in the development of modern anaesthesia.  The first surgeon to use ether as an anaesthetic, J.C. Warren, suggested it could be used “in mitigating the agonies of death.”  In 1870, the nonphysician Samuel D. Williams suggested the use of chloroform and other medications not just to relieve the pain of dying, but to spare a patient that pain entirely by ending his life.  Although the proposal was made by a relatively obscure person, it attracted attention, being quoted in prominent journals and prompting significant debate within the medical profession.  These discussions culminated in a formal attempt to legalize physician-assisted suicide in Ohio in 1906, although the bill was rejected by the legislature in a vote of 79 to 23.

Today, there are three states that have legalized the practice of physician-assisted suicide — Oregon, Washington, and Montana.  The history of how that came to pass, and of the various court challenges that have been raised, is fascinating in its own right.  For now, suffice it to say that my own view is that those states legalized the practice because of the courageous efforts of physicians who are largely unknown, not because of the actions of Kevorkian.  Indeed, their courage is all the greater for having achieved as much as they did despite his activities.

Is Your Scientific Malpractice Insurance Paid Up?

“Thanks to his intuition as a brilliant physicist and by relying on different arguments, Galileo, who practically invented the experimental method, understood why only the sun could function as the centre of the world, as it was then known, that is to say, as a planetary system.  The error of the theologians of the time, when they maintained the centrality of the Earth, was to think that our understanding of the physical world’s structure was, in some way, imposed by the literal sense of Sacred Scripture.”

 Pope John Paul II, November 4, 1992

 

Pope John Paul II did for the Catholic Church in 1992 what scientists do every single day in their professional lives:  admit to a mistake in understanding the nature of the universe.  Scientists do it because it is a fundamental part of the scientific method to acknowledge the failings in our understanding of the world, and because of our collective commitment to improving that understanding by refusing to become doctrinaire.  A scientist gains no higher respect from his peers than when he tells them he was mistaken and goes on to share what he has learned from that mistake so that they may continue the advance of knowledge.  It is this fundamental pillar of the scientific method that, more than anything else, has been responsible for its tremendous and astonishing successes.

As the pope noted in his statement, Galileo was one of those responsible for establishing that uncompromising commitment to the evidence of our own eyes and ears in drawing conclusions about the world.  For this, he was condemned by the Church and sentenced to live under house arrest at his farmhouse in Arcetri, where he would have little to do other than grow blind and die.  It would not be until 1835 — more than 200 years after his conviction — that the Vatican would remove his Dialogue Concerning the Two Chief World Systems from its list of banned books, and not until 1992 — more than 350 years after his conviction — that it would formally admit that it was wrong and Galileo was right.  (Some additional commentary that I have previously made about Galileo can be found here.)

I find it unfortunate that it is again in Italy that a ridiculous persecution of scientists is taking place.  It is not the Church this time, but rather the Italian state, that is trying to hold scientists to a standard that fails to recognize the fundamental character of the scientific method.  On April 6, 2009, an earthquake struck Italy’s Abruzzo region, killing more than 300 people and damaging thousands of buildings.  About 65,000 people lost their homes, and most of them were forced to live for weeks in makeshift “tendopoli” — tent cities — erected to house the quake refugees, a sad circumstance that Prime Minister Silvio Berlusconi thoughtlessly suggested was an opportunity for them to enjoy a “camping weekend.”

The region had been experiencing tremors for more than ten weeks in advance of the earthquake, and on March 30 a magnitude-4.0 shock struck.  There was concern among the public that a larger earthquake would follow, as indeed it did a week later.  A meeting of the Major Risks Committee, which advises the Italian Civil Protection Agency on the risks of natural disasters, was held on March 31.  Minutes from the meeting show that the following statements were made about the possibility of a major earthquake in Abruzzo:  “A major earthquake in the area is unlikely but cannot be ruled out”; “in recent times some earthquakes have been preceded by minor shocks days or weeks beforehand, but on the other hand many seismic swarms did not result in a major event”; “because L’Aquila is in a high-risk zone it is impossible to say with certainty that there will be no large earthquake”; “there is no reason to believe that a swarm of minor events is a sure predictor of a major shock” — all the sorts of cautious statements one expects from scientists trying to place their understanding of the real risk in the context of what they know about seismology and what they do not.

But at a press conference held later by Bernardo De Bernardinis, a government official who was the deputy technical head of the Civil Protection Agency, reporters were told that “the scientific community tells us there is no danger, because there is an ongoing discharge of energy.”  The idea that small seismic events “discharge” energy, like letting a bit of steam out of a pressure cooker and thereby reducing the risk of a larger shock, is soundly rejected by seismologists; the Earth does not function that way.
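
Some simple arithmetic shows why.  Earthquake energy grows roughly as 10 raised to 1.5 times the magnitude, so each additional unit of magnitude corresponds to about a thirty-two-fold increase in radiated energy.  The short sketch below is my own back-of-the-envelope illustration, not anything from the Italian proceedings; it uses the standard Gutenberg–Richter energy relation and two illustrative magnitudes — 4.0, like the March 30 shock, and roughly 6.3, like the April 6 main shock — to show how implausibly many small events would be needed to “discharge” a large one.

    # Back-of-the-envelope illustration (my own sketch, not from the original commentary):
    # compare the radiated energy of a magnitude-4.0 foreshock with that of a
    # magnitude-6.3 main shock using the standard Gutenberg-Richter energy
    # relation, log10(E) = 1.5*M + 4.8 (E in joules).

    def quake_energy_joules(magnitude):
        """Approximate radiated seismic energy, in joules."""
        return 10 ** (1.5 * magnitude + 4.8)

    ratio = quake_energy_joules(6.3) / quake_energy_joules(4.0)
    print(f"One M6.3 event releases about {ratio:,.0f} times the energy of one M4.0 shock")
    # Prints roughly 2,818 -- thousands of magnitude-4 shocks would be needed
    # to "discharge" the energy released by a single magnitude-6.3 earthquake.

On that arithmetic, a handful of minor shocks relieves only a vanishingly small fraction of the accumulated strain, which is why no seismologist would treat a swarm of small events as a safety valve.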

The bizarre aftermath has been the filing of manslaughter charges against De Bernardinis and six seismologist members of the Major Risks Committee for their failure to properly warn the public of the danger.  The charges were brought almost a year ago, but a preliminary hearing was not held until last week because of delays resulting from requests by dozens of those harmed by the earthquake to receive civil compensation from the accused scientists.  Astonishingly, the result of the hearing was not an outright dismissal of the charges, but a decision to proceed with a trial that will begin on September 20.

To my mind, this case is an absurd attack on scientists, demanding of them an infallibility that they never claim.  As one of the indicted seismologists noted, there are hundreds of seismic shocks in Italy every year:  “If we were to alert the population every time, we would probably be indicted for unjustified alarm.”  These scientists face not only up to twelve years of incarceration if they are convicted of manslaughter, but also potential civil liability for property damage resulting from the earthquake.  The fact that this possibility is even being entertained is alarming:  it is likely to have a chilling effect on the kinds of information scientists are willing to share with the public.  And if there is a realistic potential for civil liability arising from the kinds of statements that scientists routinely make, it may indeed make sense for scientists to seek malpractice insurance.  The very idea, though, that scientific research should be haunted by the threat of legal liability in the way that medicine already is, is more than troubling.