Crime and Punishment

In the year 2000, a Virginia woman received some of the worst news possible when her young daughter approached her to discuss some things that had been making her uncomfortable.  The daughter recounted episodes over the preceding several weeks of her 40-year-old stepfather (the woman’s husband) making subtle sexual advances towards her.  Subsequent investigation by the wife uncovered her husband’s expanding cache of magazines devoted to child pornography, a collection that had been accompanied by increasingly frequent visits to Internet pornography sites and the solicitation of prostitutes at local massage parlors.

It is easy to understand her reaction, particularly since her husband worked as a schoolteacher:  The result was his legal removal from their home, a conviction for child molestation, treatment with the chemical-castration drug medroxyprogesterone, and a requirement that he attend a rehabilitative twelve-step program for sexual addiction.  The program was unsuccessful — he persistently solicited sexual favors from both the staff and clients at the rehabilitation center.  He was accordingly sentenced to a prison term.

The case was a fascinating one because the specific cause of his pedophilia was identified by neuroscientists who treated him when he presented himself at the University of Virginia Hospital on the eve of beginning his prison sentence, complaining of a headache.  MRI scans identified an approximately egg-sized brain tumor in his right orbitofrontal cortex, an area of the brain associated with social integration, judgment, and the acquisition of a moral sense.  Damage to that area had previously been identified in certain sociopaths.  Removal of the tumor resulted in a reestablishment of his previous sense of morality, and he returned to his home.  When he began to complain of headaches again in October 2001 and was found to have resumed his secret collection of child pornography, a further MRI scan revealed regrowth of the tumor.  His pedophiliac behavior again disappeared after removal of the second tumor.

This case serves as a striking example for the emerging field of “neurolaw,” and I have chosen it for two reasons:  first, it presents one of the clearest illustrations of a link between a specific neurological condition and criminal behavior; second, it involves a crime so offensive that many — perhaps most — would agree that it should be punished even as we agree that it was not the man’s “fault” that he developed a brain tumor.  The example is one that figures in the report released by The Royal Society in the UK this month entitled “Neuroscience and the Law.”  A copy of the report can be found here, and the original report by the neuroscientists who treated the patient described above can be found here.  Other aspects of the interaction of neuroscience and law are also of interest, and I may discuss them in future posts, but today I want to limit the focus to a single question:  what should the role of criminal punishment be as neuroscience gives us greater insight into the mechanisms by which some individuals commit crimes?

Legal philosophy generally identifies four justifications for criminal punishment, although there is sometimes blurring in how they are categorized by different thinkers:  incapacitation (the criminal must be rendered unable to continue committing the offense); deterrence (punishment must be visible so that others who might be inclined to commit the offense will be deterred from doing so); retribution (the criminal must suffer in some proportion to the suffering he caused in his victims); and rehabilitation (the punishment should operate so that the criminal will understand and accept the wrongness of his actions in a way that will change his future behavior).  Different people place a different level of importance on each of these justifications depending on their own personal philosophies, but virtually everyone recognizes the legitimacy of at least some of them.

In thinking about pedophilia — even pedophilia induced by a brain tumor — it is impossible not to continue to accept incapacitation as a justification for punishment.  Removing the ability to commit the crime is important in protecting children, particularly in circumstances where the person is driven by impulses so strong that he is unable to control them.  Similarly, deterrence has a legitimate role to play in influencing those who may experience similar but controllable impulses by demonstrating the consequences if they give in to them.  And from a retributive perspective, there is no difference in the harm caused to a victim simply because the biological reasons for the commission of the crime are better understood.

Where a greater understanding of the origins of a crime plays its most significant role is in the rehabilitative function of punishment.  That greater understanding allows responses to be fashioned that take account of the underlying neurological causes of crimes so that they can be corrected.

But lines can still be difficult to draw.  We all know — people have always known — that there are those born with a predisposition to commit certain kinds of crimes.  Understanding where that predisposition comes from through a better grasp of neurology in no way changes the fact that our societies do believe punishment is an appropriate response even when that predisposition results from factors outside an individual’s control — genetics, birth defects, formative social environments and pressures, and so on.

Pedophilia arouses strong reactions in people.  But the same philosophical principles that continue to demand that we punish it apply to every crime, including those that are not viewed as passionately.  Even as we come to understand the human brain more fully and to develop an appreciation of how its structure may be linked to crime, the philosophical bases upon which we have traditionally justified punishment remain essentially unchanged.  Those philosophies have been debated for centuries, and while there are always issues at the fringes, they are almost universally accepted.

Still, it is unsettling to accept the reality that neuroscience is exposing:  one day you might be punished as a child molester when your real crime is to have developed a brain tumor.

Gene Patents

Not a day goes by that breast cancer does not figure in the news, at least at some level.  There are reports of new treatment options, reports of high-profile women who have been diagnosed, reports of improvements in early detection, and many other stories of how it affects our lives.  With approximately 12% of women likely to develop an invasive form of the disease sometime during their lives, there are few among us who are not impacted by it.

In 1994 and 1995, the BRCA1 and BRCA2 genes were isolated (the name of the genes is derived from “BReast CAncer”), representing one of the most significant advances in understanding the disease.  Women who carry mutations of these genes have an 82% risk of developing breast cancer and a 54% risk of developing ovarian cancer by age 80.

This ability to test for mutations has radically altered the lives of many women.  Before isolation of the genes, women who had a family history of breast cancer lived much of their lives under the assumption that it was merely a matter of time before they developed it as well, and often feared that their daughters would one day endure the same heartbreak they had seen in their mothers.  Genetic tests for the mutations now offer women far more options.  Those who test negative for the mutations are reassured not only that they are not at increased risk for breast cancer but also that their children are not carriers.  Those who test positive have a more realistic assessment of their actual risk and can avail themselves of options that range from increased monitoring to prophylactic mastectomy.

While no one disagrees that the ability to test for the genes is a positive development for women, many people object — strongly object — to the fact that a government-enforced monopoly on the right to administer such tests has been granted to the company Myriad Genetics.  Information about the test, called BRACAnalysis®, is available at the web site here.  The cost of the test is around $400 for a single-mutation analysis (useful if the woman has a known family history involving a specific mutation) and more than $3000 for a full-sequence analysis of both BRCA genes.  There is no question that these costs are higher than they would be in a competitive market because of the monopoly held by Myriad Genetics; if other companies were permitted to offer similar tests using the technology, competition would drive costs lower.

The monopoly held by Myriad Genetics is, of course, a consequence of the patent laws.  Myriad Genetics owns a number of patents directed to isolated forms of the genes themselves and to methods for analyzing a patient’s BRCA sequence.  The patents grant the company the right to prevent others from performing the BRCA tests, and it actively makes use of that right.  Earlier this week, an appeal was filed with the U.S. Supreme Court challenging the validity of Myriad Genetics’ patents.  At the heart of the issue is the fundamental question:  should companies be allowed to patent genes?

Many believe that corporate profits in an area such as this are unseemly, and it is impossible not to have at least some sympathy with their point of view.  Currently, access to an important genetic test is denied to those women who do not have either the independent financial means or health insurance to pay for it.  A broad public policy that allows gene patents will inevitably result in other genetic tests having similar financial barriers.  Further, it has always been the case that laws of nature and physical phenomena cannot be patented.  Those opposed to gene patents essentially argue that human genes should not be patented because they are created by nature, not by men.

These are important and valid points that need to be considered by the Supreme Court if it chooses to accept the case.  But they fail to tell the whole story and are misleading when presented without a fuller context.

When the Myriad Genetics case was litigated in the trial court, the argument that genes are products of nature prevailed, resulting in a ruling that the patents were invalid.  But this was overturned by the appellate court, which noted an important distinction between genes as they occur in the human body and genes that have been isolated:  “BRCA1 and BRCA2 in their isolated state are not the same molecules as DNA as it exists in the body; human intervention in cleaving or synthesizing a portion of a native chromosomal DNA imparts on that isolated DNA a distinctive chemical identity from that possessed by native DNA.”  A full copy of the appellate decision can be read here.  It is that distinction between genes as they occur in the human body (only as a component of very long strings of chemically linked nucleotides) and genes after they have been isolated (where they consist of only about 0.1% of the nucleotides of the original DNA molecule) that is at the heart of why gene patents are currently valid.

Beyond this technical reason, though, it is worth recognizing the policy issues that argue in favor of allowing gene patents.  Foremost is the fact that the patent monopoly is temporary — the earliest of the Myriad Genetics patents on this technology will expire in about three years, after which the ability of competitors to provide similar services will progressively open up and drive down costs.  In evaluating the prudence of allowing gene patents, it is a mistake to look only at the circumstances during the period when the patents are enforceable — everyone agrees that monopolies have negative consequences on trade, but it is the price we temporarily agree to pay in order to have the technology commercialized in the first place.

Medical scientists also know that BRCA1 and BRCA2 are only a small piece of the puzzle that defines the genetics of breast cancer.  There are many other discoveries yet to be made, and the cost of the research to make those discoveries is large.  The possibility of obtaining a patent that allows a profit to be realized is a major incentive that promotes private investment in the research.

Consider the following choice.  Is it worse to endure a temporary period in which our access to a technology is limited, leaving the right to profit from it exclusively with those who developed it?  Or is it worse not to have the technology at all?  The answer is really no different for genetics than it is for other technologies.

Life Finds a Way

“Broadly speaking, the ability of the park is to control the spread of life forms.  Because the history of evolution is that life escapes all barriers.  Life breaks free.  Life expands to new territories.  Painfully, perhaps even dangerously.  But life finds a way.”

I am among those who have both admired the works of Michael Crichton and been concerned that he was at times overly alarmist.  I am thinking in particular of his novel Prey, in which he describes the evolution of predatory swarms of self-replicating homicidal nanobots.  It was an entertaining enough novel, but unrealistic in its portrayal of the dangers of nanotechnology.  Such is the prerogative of fiction.  I found his book Jurassic Park, from which the above quotation is taken, to be more measured in its cautions.  Interestingly, Jurassic Park was written in 1990, more than a decade before a real-life occurrence of just what he was describing.  In this case, it was not dinosaurs, of course, but corn.

One of the first so-called “plant pesticides” was StarLink corn, genetically engineered to incorporate genes from the bacterium Bacillus thuringiensis, which had been known for decades to produce insecticidal toxins.  When the Environmental Protection Agency registered StarLink in 1998, it did so with the restriction that the corn be used only in animal feed and industrial products, not in food consumed by humans.  But, as Michael Crichton had pointed out years earlier, life finds a way.

In September 2000, a group of environmental and food-safety groups known as Genetically Engineered Food Alert announced that it had discovered StarLink corn in Taco Bell taco shells, prompting the first recall of food derived from a genetically modified organism.  Things quickly escalated, with some 300 kinds of food items ultimately being recalled because of concerns about the presence of StarLink corn.  Corn farmers protested.  Consumers of corn protested.  And the machinery of government was set in motion through the Food and Drug Administration and the Department of Agriculture to cooperate with the producer of StarLink in containing its spread.

The story of StarLink is a cautionary one that highlights the difficulties of trying to constrain the will of Nature, and it has relevance for the increasing use of various forms of nanotechnology.  Materials that fall under the very broad umbrella of “nanotechnology” are now used in more than 1000 household products, ranging from cosmetics to tennis racquets.  Perhaps even more interesting, though, are the more recent uses of nanoparticles in bone-replacing composites and chemotherapy delivery systems.

The amazing potential of these technologies can be readily appreciated just by considering the delivery of chemotherapy to cancer patients.  There are known substances that can effectively kill tumors in many cases, but current delivery systems expose the patient’s entire body to their toxicity — essentially walking the line of toxicity that will kill the tumor but not the patient, who becomes incapacitatingly ill with effects that include nausea, hair loss, bleeding, and diarrhea, among many others.  The use of nanoparticles to deliver the substances directly to tumors has the potential both to increase the effectiveness of the treatment and to reduce dramatically its negative impact on the rest of the patient’s body.

This week, I had the privilege of discussing legal aspects of nanotechnology with Dr. Hildegarde Staninger on her broadcast at One Cell One Light Radio.  A copy of the broadcast can be found here.  During our discussion, we touched on the capacity of nanoparticles, by virtue of their extraordinarily small size, to intrude unexpectedly into the environment.  There are known health risks associated with nanoparticles, such as the triggering of autophagic cell death in human lungs caused by polyamidoamine dendrimers, and there are surely unknown risks as well.  We also discussed government regulation of nanotechnology, specifically how the sheer breadth of its applications makes that process difficult and how efforts have instead been made to incorporate nanotechnology into the existing regulatory framework.

Interestingly, this week saw one of the first attempts to deviate from that approach.  At the Nanodiagnostics and Nanotherapeutics meeting held at the University of Minnesota, an invited panel discussed draft guidelines developed with the support of the National Institutes of Health to provide for regulatory oversight of medical applications of nanotechnology.  The final recommendations will not be available for some time, and the usual rulemaking procedures for administrative agencies to allow for public comment will need to be completed.  But the draft recommendations provide insight into how a nanotechnology-specific regulatory framework might develop.  Copies of papers by the group published earlier this year can be found here and here (subscriptions required) and the (free) report on the conference recommendations by the journal Nature can be found here.

Briefly, the group appears to be converging on a recommendation for the creation of two additional bodies within the Department of Health and Human Services — an interagency group that consolidates information from other government agencies in evaluating risks and an advisory body that includes expert members of the public.  These strike me as good recommendations, and there is no doubt that the group considering them has weighed the merits and disadvantages of developing an oversight framework specific to the concerns presented by nanotechnology.

As I mentioned to Dr. Staninger during our discussion, it is very much my belief that dialogues that educate the public about the real risks of nanotechnology — not fictional psychopathic nanobot swarms — are needed in developing appropriate and effective regulation.  There are risks to nanotechnology, just as there are with any technology of such enormous promise, and realistic management of those risks is part of the process of putting the technology to beneficial use.

Tending Towards Savagery

I am no particular fan of the monarchy, but Prince Charles was given a bad rap in 2003 when he called for the Royal Society to consider the environmental and social risks of nanotechnology.  “My first gentle attempt to draw the subject to wider attention resulted in ‘Prince fears grey goo nightmare’ headlines,” he lamented in 2004.  Indeed, while somewhat misguided, the Prince’s efforts to draw attention to these issues were genuine and not far from mainstream perceptions that scientists sometimes become so absorbed with their discoveries that they pursue them without sober regard for the potential consequences.  A copy of his article can be read here; in it he claims never to have used the expression “grey goo” and makes a reasonable plea to “consider seriously those features that concern non-specialists and not just dismiss those concerns as ill-informed or Luddite.”

It is unfortunate that the term “grey goo” has become as inextricably linked with nanotechnology as the term “frankenfood” has with food derived from genetically modified organisms.  The term has its origins in K. Eric Drexler’s 1986 book Engines of Creation:

[A]ssembler-based replicators will therefore be able to do all that life can, and more.  From an evolutionary point of view, this poses an obvious threat to otters, people, cacti, and ferns — to the rich fabric of the biosphere and all that we prize…. 

“Plants” with “leaves” no more efficient than today’s solar cells could out-compete real plants, crowding the biosphere with an inedible foliage.  Tough, omnivorous “bacteria” could out-compete real bacteria:  they could spread like blowing pollen, replicate swiftly, and reduce the biosphere to dust in a matter of days.  Dangerous replicators could easily be too tough, small, and rapidly spreading to stop…. 

Among the cognoscenti of nanotechnology, this threat has become known as the “gray goo problem.”

Even at the time, most scientists largely dismissed Drexler’s description as unrealistic, fanciful, and needlessly alarmist.  The debate most famously culminated in a series of exchanges in 2003 in Chemical and Engineering News between Drexler and Nobel laureate Richard Smalley, whose forceful admonition was applauded by many: 

You and people around you have scared our children.  I don’t expect you to stop, but I hope others in the chemical community will join with me in turning on the light and showing our children that, while our future in the real world will be challenging and there are real risks, there will be no such monster as the self-replicating mechanical nanobot of your dreams.

Drexler did, in the end, recant, conceding in 2004 that “[t]he popular version of the grey-goo idea seems to be that nanotechnology is dangerous because it means building tiny self-replicating robots that could accidentally run away, multiply, and eat the world.  But there’s no need to build anything remotely resembling a runaway replicator, which would be a pointless and difficult engineering task….  This makes fears of accidental runaway replication … quite obsolete.”  But too many others have failed to take note, as sadly highlighted by this month’s bombing of two Mexican professors engaged in nanotechnology research.

Responsibility for the most recent bombings, as well as for other bombings in April and May, has been claimed by “Individualidades tendiendo a lo Salvaje” (roughly translated into English as “Individuals Tending Towards Savagery”), an antitechnology group that candidly claims Unabomber Ted Kaczynski as its inspiration.  The group even has its own manifesto.  It is not as long as the Unabomber’s, but it is equally contorted in its attempt to justify the use of violence as a means of opposing technological progress.  A copy of the original manifesto can be read here, and an English translation can be found here.

The manifesto references Drexler in setting out the absurd rationale for the group’s violence:

[Drexler] has mentioned … the possible spread of a grey goo caused by billions of nanoparticles self-replicating themselves voluntarily and uncontrollably throughout the world, destroying the biosphere and completely eliminating all animal, plant, and human life on this planet.  The conclusion of technological advancement will be pathetic, Earth and all those on it will have become a large gray mass, where intelligent nanomachines reign.

No clear-thinking person supports the group’s use of violence.  But at the same time, there are many nonscientists who are suspicious of the motivations that underlie much of scientific research.  One need only look at the recent news to understand the source of that distrust:  just this week, the Presidential Commission for the Study of Bioethical Issues released a report detailing atrocities committed by American scientists in the 1940s that involved the nonconsensual infection of some 1300 Guatemalans with syphilis, gonorrhea, or chancroid.  There are many other examples where scientists have engaged in questionable practices with a secrecy that is counter to the very precepts of scientific investigation.

“Nanotechnology” is a wonderful set of technologies that have already found their way into more than 1000 commercial products sold in the electronics, medical, cosmetics, and other markets.  But even as the use of nanotechnology spreads, many remain concerned that it is unwise to allow it, even if they would not go so far as to bomb the scientists working on it.  Here I find myself sympathetic with the real message that Prince Charles was attempting to spread — namely, that the concerns of the nonscientist public need to be addressed, even if those concerns seem ill-conceived.

A Mean Act of Revenge Upon Lifeless Clay

Jack Kevorkian died today, and many are commenting on his role in the “right to die” movement.  While I am a supporter of the movement generally, I did not find Kevorkian to be a courageous man.  His actions did significant damage to the efforts of others to provide ways for physicians to help the terminally ill end their lives on their own terms and with dignity.

Consider for a moment the case of Diane, and imagine the circumstances in which she found herself.  She had been raised in an alcoholic family and had suffered a great number of torments in her life, including vaginal cancer as a young woman, clinical depression, and her own alcoholism.  When her physician diagnosed her with myelomonocytic leukemia, she was presented with her options:  She could proceed without treatment and survive for a few weeks, or perhaps even a few months if she was lucky, but the last days of her life would surely be spent in pain and without dignity; it was not how she wanted her friends and family to remember her.  If she accepted the treatment her doctor had discussed, there was a 25% chance of long-term survival, but the treatment itself — chemotherapy, bone marrow transplantation, irradiation — would rob her of much of what she valued about life and would likely cause as much pain as doing nothing.  For her, the 25% chance that such treatment would succeed was not worth it.  Others might have differed in their assessment, but this was hers.

Neither option presented to her — letting the disease run its course or accepting a treatment she found intolerable — was acceptable, and so she considered the unspoken alternative.  Diane’s physician told her of the Hemlock Society, even knowing that he could be subject to criminal prosecution and professional review, potentially losing his license to practice medicine.  But because a physician who knew her was involved in her decision, her mental state could be assessed to ensure that her choice was well considered and not the result of overwhelming despair.  Her physician could explain how to use the drugs he prescribed — ostensibly to help her sleep — so that until the time came, she could live with the confidence that she had control over when to end her life.  She could enjoy the short time she had remaining without being haunted by fears that her attempt would be ineffective or result in any number of consequences she did not want.  In the end, Diane died alone, without her husband or her son at her side, and without her physician there.  She did it alone so that she could protect all of them, but she died in the way that she herself chose.

The story of Diane is one that her physician, Dr. Timothy Quill, published in the New England Journal of Medicine in 1991.  A copy of it can be found here.  It was one of the first public accounts of a physician acknowledging that he had aided a patient in taking her own life.  It prompted a debate about the role of physicians at the end of life, and a subsequent study published in the same journal in 1996 found that about 20% of physicians in the United States had knowingly and intentionally prescribed medication to hasten their patients’ deaths.

But the quiet, thoughtful, and sober approach that Quill and many other physicians took to the issue of physician-assisted suicide was very much derailed by the grandstanding antics of Kevorkian.  His theatrical flouting of the law, which prompted law-enforcement agencies to concentrate on making an example of him rather than on seriously considering the merits of his views, was counterproductive to the medical debate.

Kevorkian’s fascination with death was a long-standing part of his life.  He was not, as many believe, christened with the nickname “Dr. Death” because of his efforts promoting physician-assisted suicide.  That happened long before, during the 1950s, shortly after he received his medical degree.  While a resident at the University of Michigan hospital, he photographed the eyes of terminally ill patients, ostensibly to identify the actual moment of death as a diagnostic method, but more truly “because it was interesting [and] a taboo subject.”  Later, he presented a paper to the American Association for the Advancement of Science advocating “terminal human experimentation” on condemned convicts before they were executed.  Another of his proposals was to euthanize death-row inmates so that their organs could be harvested for transplantation.

His views have politely been described as “controversial,” but projects such as his experiments aimed at transfusing blood from corpses into injured soldiers when other sources of blood were unavailable are perhaps more accurately described as gruesome and bizarre.  The result of his various investigations was considerable professional damage, causing him to resign or be dismissed from a number of medical centers and hospitals.  His own clinic failed as a business.  For all his current notoriety, Kevorkian was throughout his career considered very much an outsider to the mainstream medical-science community.

In considering the legacy of Kevorkian, it is important to recognize the long history of the debate over physician-assisted suicide, which dates at least from the days of ancient Greece and Rome.  The modern debate in the United States has its origins in the development of anaesthesia.  The first surgeon to use ether as an anaesthetic, J.C. Warren, suggested it could be used “in mitigating the agonies of death.”  In 1870, the nonphysician Samuel D. Williams suggested the use of chloroform and other medications not just to relieve the pain of dying, but to spare a patient that pain entirely by ending his life.  Although the proposal was made by a relatively obscure person, it attracted attention, being quoted and discussed in prominent journals and prompting significant discussion within the medical profession.  The various discussions culminated in a formal attempt to legalize physician-assisted suicide in Ohio in 1906, although the bill was rejected by the legislature in a vote of 79 to 23.

Today, three states have legalized the practice of physician-assisted suicide — Oregon, Washington, and Montana.  The history of how that came to pass, and of the various court challenges that have been raised along the way, is fascinating in its own right.  For now, suffice it to say that my own view is that those states legalized the practice because of the courageous efforts of physicians who are largely unknown, not because of the actions of Kevorkian.  Indeed, their courage is all the greater for having achieved as much as they did despite his activities.