The Labor of Bees

In October last year, an elderly man in Dougherty County, Georgia, was operating a bulldozer when he accidentally disturbed a bee colony.  He died after being stung more than 100 times, and investigations into his death confirmed that Africanized honey bees have now taken up residence in the state of Georgia.  At least two other hives of Africanized bees have since been discovered within the state.

The first that many people heard of Africanized bees was seeing The Swarm sometime in the 70’s, an overly sensationalist movie in a decade that saw far too many melodramatic disaster films.  But the underlying premise of the movie — that Africanized bees are more aggressive than the strains of honeybees introduced into North America by Europeans in 1691 — was essentially accurate.  It was in 1957 that 26 Tanzanian queen bees were accidentally released in Brazil by a beekeeper who was attempting to breed a strain of bees that would produce more honey than local bees while also being better adapted to a tropical climate than European bees.  The Africanized honey bees that have steadily been expanding their territory into the southern United States are directly descended from those Tanzanian queens.

The danger to humans and animals from the encroachment of Africanized honey bees is surely overblown; since they first colonized Texas in 1990, they have been responsible for fewer than 20 human deaths.  The greater concern is their displacement of the European bee population, which follows from a fundamental difference in behavior.  Africanized bees put greater effort into colony reproduction than do European bees, which instead spend more time collecting and storing food, the activity that gives them their critical role in pollinating roughly 35% of the food supply.

It seems, though, that the Africanized bee is not even the greatest concern facing the European bee population.  In 2006, large-scale losses of managed honeybee colonies were noted in the United States and in parts of Europe.  The impact is perhaps felt nowhere more sadly than in West Virginia, which in 2002 had named the honeybee its official state insect.

Similar bee declines have been documented since as early as 1869, but the recent reduction was considerably more severe, leading to its characterization as “colony collapse disorder.”  Some speculative explanations have been put forward, including the effects of mobile-telephone radiation (one begins to wonder what cell-phone radiation is not blamed for these days), genetically modified crops, and global climate change, but there is little if any evidence that these mechanisms have any impact on bee colonies.  The reality is likely much more prosaic, with insect diseases and pesticides being the two causes that have received the most study.  Indeed, in October, a paper was published suggesting that the disorder is due to the combination of a virus and a fungus found in every collapsed colony that was studied, leading some to claim that the mystery had been “solved.”  Time will tell, but a copy of the paper can be found here.

What is interesting about the disorder from a legal perspective is the pair of bills, the Pollinator Protection Act and the Pollinator Habitat Protection Act, both introduced in Congress in 2007 to provide mechanisms through the Farm Bill to fund a number of programs to research and develop potential solutions.  These programs include the surveillance of pests and pathogens that affect honeybees as well as research into the biology of honeybees so that the causes of the disorder can be better understood.

Last month, the second annual report — also mandated by the bills — was released and can be found here.  It is fair to say that the reported results so far remain inconclusive.  The best hypothesis appears to be that the disorder is a result of multiple factors that may at times act in combination, making the problem a difficult one to solve. 

There is no question about the importance of honeybees to pollination, but the lack of a clear understanding of the population decline has begun to increase interest in promoting alternative pollinators.  While the European honeybee is undoubtedly the most important pollinator, it is estimated that there are about 4,000 species of bee native to the United States.  Changes in farming practices to promote the activities of these other bee species might help to compensate for the decline.  But one is left with the obvious question:  If the mechanism that is affecting honeybees remains poorly understood, what is the chance that it will eventually spread to other bee species?

Me: 12% Patent Pending

We did not always understand the role of germs in causing disease.  Indeed, the germ theory of disease was controversial until Louis Pasteur reported the results of decisive experiments that demonstrated that micro-organisms are responsible for the spoiling of milk, wine, and beer.  The eponymous method of pasteurization that he developed to kill the micro-organisms is still used today.  Extending the idea of infection by micro-organisms led him to solve what were at the time mysteries of how rabies, anthrax, and silkworm diseases were caused.  It led him to diverse discoveries important both in the development of vaccines and in understanding the scientific basis for fermentation to make beer and wine. 

It is this last area that I am particularly interested in today, because it was in that area that Pasteur was granted one of the very earliest patents for an organism.  Some speculate that the claim in his U.S. Pat. No. 141,072 (which can be found here) slipped in quietly, its importance not fully understood by the Patent Examiner.  But whether the Examiner was conscious of it or not, Pasteur was granted a patent on July 22, 1873 in which he claimed “Yeast, free from organic germs of disease, as an article of manufacture.”

Almost exactly 100 years after Pasteur filed his patent application, Ananda Mohan Chakrabarty would become famous in the world of patent law when he genetically engineered a bacterium derived from the Pseudomonas genus so that it effectively broke down the components of crude oil.  The idea was that it would be useful in dealing with oil spills.  But the issue that was perhaps finessed by Pasteur then came squarely before the Supreme Court of the United States:  can a lifeform be patented?  It was a close call.  A bare majority of the Court, guided by Congress’s assertion when it passed the modern Patent Act that it intended to “include anything under the sun that is made by man,” concluded that genetically engineered organisms could be patented.  A copy of the 1980 decision can be read here.

Things have only continued to get more interesting.  Indeed, in the world of patent law, the most famous animal is undoubtedly the Harvard oncomouse, the first genetically engineered animal in the world to be patented.  In 1984, Philip Leder and Timothy Stewart of Harvard University introduced a specific gene — called an “activated oncogene” — into mice, deliberately increasing the mice’s susceptibility to cancer, specifically in a way that generally mimics the course of the disease in humans.  The idea was that the high susceptibility to cancer would make the oncomouse an ideal research tool, simplifying a wide variety of experiments involving cancer, whether for the treatment of tumors or simply to understand the mechanism by which tumors form.

The oncomouse was patented in the United States on April 12, 1988 and in Europe on May 13, 1992.  Similar patents have also issued in Japan and New Zealand.  While the general acceptance by most countries that genetically modified organisms are patentable has almost certainly contributed to promoting efforts in the biotechnology research fields, many people still have considerable discomfort with it.  Consider, for instance, that the Supreme Court of Canada rejected the patenting of the oncomouse, even over dissents that wanted to follow the American example more closely.  A copy of the Canadian decision can be found here.  

Even more alarming to many is that since Chakrabarty the U.S. Patent and Trademark Office has been issuing patents that are directed not only to engineered organisms and not only to synthetic DNA but also to unaltered genomic DNA — human genes.  Indeed, there are currently about 40,000 patents that cover sections of the human genome:  of the roughly 25,000 genes that define “me,” about 3,000 have some kind of intellectual property claim over them.  Somewhat surprisingly, no U.S. court has actually ruled on whether these kinds of patent claims are legitimate — until now.
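That is where the “12%” in my title comes from; a quick back-of-the-envelope check using the approximate figures above (both counts are rough estimates, not precise tallies) looks like this:

```python
# Rough check of the figure in the title, using the approximate numbers
# quoted above; both counts are estimates rather than precise tallies.
total_genes = 25_000
genes_with_ip_claims = 3_000
print(f"{genes_with_ip_claims / total_genes:.0%}")   # prints "12%"
```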

In Association for Molecular Pathology v. U.S. Patent and Trademark Office, a lower court ruled that such claims are not patentable.  The crux of the reasoning is quite simple:  even if we accept that anything under the sun made by man might be patentable, those things that are made by nature, like genomic DNA, are not.  A copy of the decision can be read here.

The case is currently under appeal.  What surprised many is not that the case was appealed.  After all, the conclusion is a critically important one that will not only affect the validity of tens of thousands of patents, but will also have numerous ripple effects:  funding for research programs investigating genetic disease, the viability of companies employing thousands of people to research genetic disease, and how quickly the results of that research develop into effective treatments for people.  The underlying question is the age-old one:  does granting patents in this area promote or inhibit technological progress?  What did surprise many is that the U.S. Justice Department filed an amicus brief in the case last month arguing that such claims are not patentable.  A copy of the government’s brief can be found here.

It is astonishing to think that something like 12% of me has been patented and my instinctive reaction is to be offended.  It seems so intrusive.  They’re my cells — hands off.  The issue raises deep and profound questions that need to be addressed.  But at the same time, patents are temporary, lasting only twenty years, and it is very possible that issuing patents on genes can spur medical marvels.  Enduring a relatively brief period with what is mostly an abstract offense might not be so terrible. 

Perhaps in time, our descendants will consider the astonishing freedom they have from genetic disease and will consider us fondly.  “It was they who literally gave up ownership of themselves for our health.”

By Any Other Name, It Would Taste as Sweet

It’s almost amusing.  Actually, it is amusing.  If Larry Page and Sergey Brin had not decided in 1997 to rename their now-famous search engine at Stanford University, it is entirely possible that much of our dialogue about the Internet would be filled with strange innuendo.  Imagine a friend is curious about some topic or other.  But for that change in name, you might be telling him, “Oh, I don’t know.  Just backrub it.” 

Which name is the better one, Google or Backrub?  “Backrub” was descriptive in a way.  The innovation of the search engine that we now call Google was that it ranked the importance of search results according to the backlinks that a web page had, giving more weight to links from pages that were themselves important.  It was an approach different from the one most other search engines used at the time.
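To make the idea concrete, here is a minimal sketch of that kind of backlink-based ranking, in the spirit of PageRank; the tiny three-page graph and the function are purely illustrative, not Google’s actual code.

```python
# A minimal sketch of backlink-based ranking in the spirit of PageRank.
# The graph, function, and parameter values are illustrative only; the
# ranking Google actually uses is far more sophisticated.

def pagerank(links, damping=0.85, iterations=50):
    """links maps each page to the list of pages it links out to."""
    pages = list(links)
    rank = {page: 1.0 / len(pages) for page in pages}
    for _ in range(iterations):
        # Every page keeps a small baseline score, modeling a reader who
        # occasionally jumps to a random page instead of following links.
        new_rank = {page: (1.0 - damping) / len(pages) for page in pages}
        for page, outlinks in links.items():
            if not outlinks:
                continue  # simplification: pages with no outbound links donate nothing
            share = rank[page] / len(outlinks)
            for target in outlinks:
                new_rank[target] += damping * share
        rank = new_rank
    return rank

# Hypothetical three-page web: "home" collects backlinks from both other
# pages, so it ends up with the highest score.
print(pagerank({"home": ["about"], "about": ["home"], "blog": ["home"]}))
```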

We are probably no worse off today because of the name change.  Indeed we are probably better off, with Google being considered by some to be one of the best company names.  It takes only a quick backrub of the Internet — perhaps that does not sound so awful after all — to turn up many companies that have changed their names to avoid negative connotations.  Yahoo! is surely much better than the far-too-parochial Jerry’s Guide to the World Wide Web.  And people are still not entirely sure whether KFC adopted that name in 1991 to avoid connections with “Kentucky” (the state by that name trademarked its name and introduced substantial licensing fees), “Fried” (the company did not want its food products to seem totally unhealthy), or “Chicken” (government regulators were pressuring the company about its livestock practices).

But when does renaming something cross the line into becoming deceptive?  In the last several years, high fructose corn syrup has been increasingly disparaged.  It is found in many processed foods, most notably in soft drinks but also in soups, lunch meats, breads, cereals, condiments, and many others.  It has been blamed for the high levels of obesity in the United States and implicated as a contributor to a number of other health issues.

The name “high fructose corn syrup” is essentially an accurate description.  Derived from corn, this sweetener comes in a number of different varieties, all of which contain a mixture of fructose and glucose.  The most widely used variety, HFCS 55, has about 55% fructose and 42% glucose, as compared with the roughly equal amounts of fructose and glucose in sucrose (table sugar).  Critics point to the different metabolic pathways of fructose and glucose in the body, namely that fructose consumption does not trigger the body’s production of leptin, a hormone important in signaling the brain to stop sending hunger signals.  It is fair to say, though, that there is a lack of consensus on the full impact of consuming sucrose versus high fructose corn syrup.

The Corn Refiners Association takes the view that both high fructose corn syrup and sucrose have “similar” glucose-to-fructose ratios, and that those ratios are similar to the ratios found in natural fruits and vegetables.  Responding to advice from many quarters to reduce consumption of high fructose corn syrup, the Association petitioned the Food and Drug Administration (“FDA”) in September to change the name.  “Corn sugar” is the name it prefers, one it believes is free of an unjustified stigma.  The Association’s views can be found here.

There can be no doubt that in petitioning for a name change the corn industry is responding to criticisms about consumption levels of high fructose corn syrup.  What is really at issue is whether changing the name is intended to be helpful in giving the public accurate information or is intended to be deceptive. 

One name change that the Association points to in making its case to the FDA is the FDA’s approval in 2000 to rename prunes as “dried plums.”  That action was initiated by the California Prune Board, which asserted that the public associated negative imagery with the name “prunes.”  Indeed, after approval by the FDA, the Board itself was renamed and is now the California Dried Plum Board, still active in promoting the value of prunes, no matter what they call them. 

The comparison that the corn industry wishes to make strikes me as a weak one, though.  While the word “prune” does sound decidedly less palatable than “dried plum,” allegations that the product was unhealthy were not part of the motivation for seeking a name change.  And the purchase of prunes is made in a very different way than the purchase of high fructose corn syrup.  Someone purchasing prunes knows exactly what they are buying, but no one really goes to the grocery store to pick up some high fructose corn syrup — it’s just there, so ubiquitously present that those who wish to avoid consuming it must expend a fair amount of effort in finding products that do not contain it.

Consumers deserve to know what they are buying and deserve to have useful information so they can decide for themselves what they wish to consume.  This is true even if the scientific research about the health effects of products is unclear.  It is even true if people are being irrational about what they choose to consume.  It is, after all, their bodies and their responsibility to inform themselves about the health effects of what they eat.  The precise contours of the corn industry’s motivation in seeking a name change thus need to be an essential part of the FDA’s deliberations. 

It will be some time before the FDA reaches a decision.  In the meantime, documents filed in connection with the petition, including comments from the public, can be found here.  It is worth emphasizing the importance of these public comments, which very often have a real impact on the decisions reached by government agencies.

Poisoned Needles

One of the most impressive stories of global human cooperation began in 1796 with a young dairymaid named Sarah Nelms.  The stories about women like her had long been treated as apocryphal, mere legends whose truth was suspect:  dairymaids did not get smallpox.

Smallpox, of course, was one of the great scourges mankind has faced.  Evidence has been found for the disease in Egyptian mummies who died at least three thousand years ago.  Human history is rife with descriptions of smallpox decimating local populations when it was introduced to areas where it was previously unknown.  Smallpox epidemics in North America after introduction of the disease by settlers at Plymouth in 1633 are estimated to have had 80% fatality rates among the native population.  Similar results occurred later in Australia with aborigines.  Numerous isolated island settlements in both the Pacific and Atlantic had almost all of their native populations wiped out by the disease. 

The fact that dairymaids seemed to have a peculiar immunity to smallpox was remarkable, a consequence of the fact that they had frequently suffered from cowpox, a much less deadly disease.  It was Edward Jenner who recognized that deliberate infection with cowpox could serve as a way to protect against smallpox.  On May 14, 1796, he obtained matter from fresh cowpox lesions on the hands and arms of Sarah Nelms, using it to infect James Phipps, an eight-year-old boy.  James developed fever, loss of appetite, and discomfort in his armpits — symptoms of cowpox — and recovered about a week and a half after being infected.  A couple of months later, Jenner inoculated him again, but this time with matter obtained from smallpox lesions, and no disease developed.

It would take Jenner some years to persuade scientific colleagues that “vaccination” in this way — the word being derived from the Latin “vacca” for cow and “vaccinia” for cowpox — could prevent the spread of smallpox.  He was ultimately vindicated.  The culmination of his campaign came on December 9, 1979, with a certification (endorsed by the World Health Assembly on May 8, 1980) that smallpox had been eradicated from the planet, apart from some small stores maintained for research purposes.

On October 12, the Supreme Court of the United States will consider oral arguments in what is likely to be a pivotal vaccination case.  At issue is a portion of the National Childhood Vaccine Injury Act, originally passed in 1986 (42 U.S.C. §§ 300aa-1 – 300aa-34).  Aside from its independent relevance, the Vaccine Act is interesting because it can be seen as a case study for many current efforts to introduce broader tort reform prompted by the concern, popular in some circles, that compensation awarded in tort cases is excessive and that litigation is generally an ineffective mechanism for giving compensation. 

Vaccine cases have traditionally been handled under state law, particularly conventional products-liability law.  The Vaccine Act was intended to provide a substitute mechanism for compensating those who may have been injured by vaccines that they received.  At the time, there was active protest by the manufacturers of vaccines, who complained that the cost associated with lawsuit threats was too high and who threatened to cease production of vaccines because the economic risk was too great.  At the same time, those who were injured by vaccines were generally dissatisfied — the time taken to litigate claims and engage in settlement negotiations was long, and the entire process was costly, still sometimes resulting in no compensation at all.  These are the same arguments made today by advocates of broader tort reform.

Congress created an administrative program that focuses on compensation to the victims of vaccine-related harms.  If a victim can demonstrate receiving a vaccine listed in a special Vaccine Injury Table and suffering one of the injuries that the Table associates with it within the prescribed period, he or she is awarded compensation without having to prove either fault or causation.  This takes place in the Office of Special Masters of the U.S. Court of Federal Claims, commonly called the “Vaccine Court.”  If the compensation is unsatisfactory or if no award is made, the possibility still exists to bring a tort claim.  The program thus attempts to strike a balance, preserving the ability to bring claims in the traditional manner while at the same time offering an alternative that results in faster and more predictable payment of claims.  How much the program is used thus offers an interesting perspective on the success of such a tort alternative.

In exchange for the essentially automatic awarding of claims, the Vaccine Act preempts tort claims arising from “unavoidable” injuries: 

No vaccine manufacturer shall be liable in a civil action for damages arising from a vaccine-related injury or death associated with the administration of a vaccine … if the injury or death resulted from side effects that were unavoidable …

It is this provision that is at issue in the case being heard by the Supreme Court.

The fact that an injury is “unavoidable” from use of a product does not normally insulate the manufacturer of the product from having to pay damages when it injures someone.  Indeed, “design defects” of products frequently give rise to damages — it does not matter that the product was flawlessly produced in precise accordance with specifications and with extreme care; if it has a faulty design that injures people, there is still liability for the injuries. 

In 2008, the Georgia Supreme Court held that the Vaccine Act does not preempt all design-defect claims — only those where the injurious side effects were unpreventable.  That decision can be read here.  In 2009, a federal appeals court ruled that all design-defect claims are preempted, i.e., even if it were possible to prepare a safer form of the vaccine, there is still no permissible state tort claim arising from use of the more harmful design.  That opinion can be read here.

While the issue the Supreme Court will consider appears at some level to be a narrow one in that it is a matter of resolving a disagreement over statutory interpretation, it has broader importance.  In the quarter century since it was passed, about two thirds of those applying for federal injury compensation have been turned away empty-handed.  The program was sold as one in which the adversarial nature of litigation claims was to be avoided, but critics suggest that proceedings before the Vaccine Court have turned out to be nearly as time-consuming, expensive, and contentious as traditional litigation.  Many thus consider this experiment in tort reform to have been a failure, and a decision on one of the Vaccine Act’s more important provisions by the nation’s highest court will help determine whether such a view is warranted.

WARNING: THIS BLOG MAY BE ADDICTIVE

Well, I have to confess that there is a part of me that hopes so! But my title today is, at least to me, very obviously a spoof.

People need to be careful about spoofs though. Sometimes they take on a life of their own.

Consider Ivan Goldberg, a respected physician who specializes in treating individuals with mood disorders. He coined the term “Internet Addiction Disorder” in 1995. It was intended to be satirical, and he patterned his list of diagnostic criteria for the disorder after the entry for pathological gambling in the Diagnostic and Statistical Manual of Mental Disorders (“DSM”) — the authoritative work on psychiatric disorders published by the American Psychiatric Association. A copy of Goldberg’s parody can be found here.

Perhaps it was the technical way it was written, or perhaps it was because it struck a nerve among those who sometimes feel widowed as a result of significant Internet use by loved ones, but the parody caught on. Some thought it was real — even a few serious and respected publications were fooled — and many people began to accept that Internet Addiction Disorder was legitimately recognized.

In the fifteen years since Goldberg’s spoof, there has been increasing research on the effects of a variety of modern forms of technology on the human brain. One of the more interesting examples is the Starcraft study. Starcraft is a military science-fiction strategy game that has received enthusiastic accolades in the video-game industry, where it has been praised as one of the best video games ever made. The multiplayer version of it has gained particular popularity in South Korea.

Because it is so good, a lot of people want to play it. And some of them spend a lot of time playing it. Of the eleven participants in the study, six had dropped out of school for two months because of the amount of time they were playing Starcraft, and two were divorced by their spouses as a direct result of the time they put into the game. Psychiatrists at Chung-Ang University in South Korea tested a treatment on this group by prescribing the antidepressant bupropion for six weeks, after which their playing time decreased by about a third. MRI studies of their brains by the Brain Institute at the University of Utah showed increased activity in three areas of the brain, not seen in a control group, when participants were shown images from the game.

Many people remain dismissive of the idea that there can be such a disorder as Internet addiction, and suggest that the treatment with an antidepressant works because those individuals were depressed — under that view, the excessive time playing Starcraft is a symptom of depression, not of a new type of disorder. They perhaps spend so much time playing in an effort to escape from the sadness that their depression causes. Known psychiatric disorders of obsession and compulsion are other likely candidates for disorders that are sometimes manifested by excessive Internet usage and game playing.

The game Lineage II is another multiplayer online game, also extremely popular in South Korea. Instead of the science-fiction theme of Starcraft, though, Lineage II has a fantasy theme. The makers of the game, NCsoft Corporation and NC Interactive, Inc., were sued about a year ago by Craig Smallwood, a 51-year-old resident of Hawaii who claims that he became addicted to the game. His complaint asserts that over the period of 2004-2009, he played the game some 20,000 hours — that comes out to an average of 11 hours a day, every day of the year, for five years. When his access to the game was cut off, he claims to have “suffered extreme and serious emotional distress and depression,… been unable to function independently,… suffered psychological trauma,… was hospitalized, and … requires treatment and therapy three times a week.”
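The arithmetic behind that figure is straightforward; a rough check (treating the 2004-2009 span as the five years described above) looks like this:

```python
# Rough sanity check of the complaint's figure: 20,000 hours of play over
# roughly five years works out to about 11 hours a day, every single day.
total_hours = 20_000
days = 5 * 365
print(round(total_hours / days, 1))   # 11.0
```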

In a decision in August of this year, the judge in the case, Alan C. Kay, dismissed some of the causes of action but declined to dismiss them all. The causes of action that remain, and which Smallwood presumably will now attempt to prove, include defamation, negligence, gross negligence, and negligent infliction of emotional distress. The full opinion (which considers a number of procedural issues at some length) can be read here.

Although some commentators have ridiculed the decision, finding that it strains credulity to allow a cause of action to proceed on the basis of Internet addiction, it is worth noting that this ruling comes at a very early stage of the litigation. It was a ruling on a motion to dismiss brought by the defendants, meaning that the plaintiff’s assertions were necessarily considered in their most favorable light and assumed to be true. Those claims that were dismissed — misrepresentation, unfair trade practices, intentional infliction of emotional distress, and assessment of punitive damages — were found by the court to have no merit even if all of the assertions were true and construed in that most favorable way.

It is likely to be an arduous, uphill battle for Smallwood to prove his assertions and to prevail on the surviving causes of action. Currently, Internet addiction is not a disorder recognized by the DSM, and this is likely to be an important factor in the remainder of the litigation. It is also true, though, that some serious researchers advocate including Internet addiction in the next edition of the DSM, currently scheduled for release in 2013. This advocacy has been formal, appearing in prestigious peer-reviewed journals such as the American Journal of Psychiatry, but those in the psychiatric-research community who advocate inclusion currently remain in the minority.

The ultimate decision of the American Psychiatric Association will likely have a strong impact on litigation. Courts will take notice of such an expert assessment, whichever way it goes. If the next edition of the DSM includes Internet addiction as a disorder, expect to see many more lawsuits like the one brought by Smallwood, and expect them to have a greater chance of success than currently seems likely. Also expect the producers of Internet content to respond and seek ways to avoid any liability.

I’m putting myself out ahead of the curve. You’ve been warned.