You Get What You Pay For

Be careful about provoking scientists.  They can be persistent.

The issue I’m writing about today was first raised a quarter century ago when Henry Herman (“Heinz”) Barschall published two papers in Physics Today about the cost of academic journals.  He performed analyses of various physics journals at the time, based on some widely used metrics that attempt to quantify the “value” of a journal in terms of its reach and citation frequency.  His analyses showed that according to his criteria, physics journals published by the American Institute of Physics (“AIP”) and by the American Physical Society (“APS”), two not-for-profit academic societies devoted to physics in the United States, were more cost effective than other journals, particularly journals produced by commercial publishers.  Copies of his articles can be found here and here.
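
To make the comparison concrete, here is a minimal sketch of the kind of calculation involved, assuming the metrics usually attributed to Barschall:  a journal’s subscription price per thousand characters published, optionally divided by its impact factor.  The journal names and figures below are hypothetical placeholders, not his data.

```python
# A hedged sketch of a Barschall-style cost-effectiveness comparison.
# Metric: price per kilocharacter, and price per kilocharacter divided
# by impact factor (lower values indicate a more cost-effective journal).
# All names and numbers are invented for illustration only.

journals = [
    # (name, annual subscription price in $, kilocharacters/year, impact factor)
    ("Society Journal A",     5_000, 40_000, 2.5),
    ("Commercial Journal B", 12_000, 25_000, 1.5),
]

for name, price, kchars, impact in journals:
    cost_per_kchar = price / kchars              # $ per thousand characters
    cost_per_impact = cost_per_kchar / impact    # $/kchar per unit of impact
    print(f"{name}: {cost_per_kchar:.3f} $/kchar, "
          f"{cost_per_impact:.3f} $/kchar/impact")
```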

His articles prompted a series of litigations in the United States and Europe alleging that they amounted to little more than unfair advertising to promote the academic-society journals that Barschall was associated with.  At the time, he was an editor of Physical Review C, a publication of the APS, and Physics Today, the magazine where he published his findings, was a publication of the AIP.  (In the interest of disclosure, I was previously an editor of an APS publication, recently published a paper in Physics Today, and have also published papers in commercial journals.)  The academic societies ultimately prevailed in the litigations.  This was accomplished relatively easily in the United States because of the strong First Amendment protection afforded to publication of truthful information, and was also accomplished in Europe after addressing the stronger laws that exist there to prevent comparisons of inequivalent products in advertising.  A summary of information related to the litigations, including trial transcripts and judicial opinions, can be found here.

The economics of academic-journal publication are unique, and various journals have at times experimented with different models to address the issues particular to the field.  The primary function of these journals is to disseminate research results, and to do so with a measure of reliability obtained through anonymous peer review.  Despite their importance in generating a research record and in providing information of use to policymakers and others, such journals tend to have a small number of subscribers but relatively large production costs.  Consider, for instance, that Physical Review B, the journal I used to work for and one of the journals that Barschall identified as most cost-effective, currently charges $10,430 per year for an online-only subscription at the largest research institutions, and charges more if print versions are desired.

While commercial journals have generally relied on subscription fees to cover their costs, academic-society journals have been more likely to keep subscription fees lower by imposing “page charges” that require authors to pay a portion of the publication costs from their research budgets.  The potential impact on their research budgets has sometimes affected researchers’ decisions about where to submit their papers.  More recent variations on these economic models distinguish among subscribers, charging the highest subscription rates to large institutions where many researchers will benefit from the subscription, and lower rates to small institutions and to individuals.  All of these economic models have lately had to compete with the growing practice of posting research papers in central online archives, without fee to either researcher or reader.  The reason traditional journals still exist despite these archives is that they provide an imprimatur of quality, derived from formalized peer review, on which funding bodies and other government agencies still rely.

As reported this weekend in The Economist, Cambridge University mathematician Timothy Gowers wrote a blog post last month outlining his reasons for boycotting research journals published by the commercial publisher Elsevier.  A copy of his post can be read here.  The post has prompted more than 2700 researchers to sign an online pledge to boycott Elsevier journals by refusing to submit their work to them for consideration, refusing to referee for them, and refusing to act as editors for them.  My objective is not to express an opinion on whether such a boycott is wise or unwise.  The fact is that journals compete for material to publish and for subscriptions.  While they accordingly attempt to promote distinctions that make them more valuable than other journals, they are also affected by the way their markets respond to those distinctions.

What is instead most interesting to me in considering the legal aspects of this loose boycott is the extent to which it is driven by a general support among commercial publishers for the Research Works Act, a bill introduced in the U.S. Congress in December that would limit “open access” to papers developed from federally funded research.  The Act is similar to bills proposed in 2008 and 2009.  Specifically, it would prevent federal agencies from depositing papers into a publicly accessible online database if they have received “any value-added contribution, including peer review or editing” from a private publisher, even if the research was funded by the public.

The logic against the Act is compelling:  why should the public have to pay twice, once to fund the research itself and again to be able to read the results of the research it paid for?  But this logic is very similar to the arguments raised decades ago against the Bayh-Dole Act, which allowed for patent rights to be vested in researchers who developed inventions with public funds.  The Bayh-Dole Act also requires the public to pay twice, once to fund the research itself and again in the form of higher prices for products that are covered by patents.  Yet by all objective measures, the Bayh-Dole Act has been a resounding success, leading to the commercialization of technology in a far more aggressive way than had been the case when the public was protected from such double payments — and this commercialization has been of enormous social benefit to the public.

The Bayh-Dole Act has been successful because it provides an incentive for the commercialization of inventions that was simply not there before.  This experience should not be too quickly dismissed.  In the abstract, the Research Works Act is probably a good idea because it gives publishers an incentive to provide quality-control mechanisms, in the form of peer review, that allow papers to be distinguished from the mass of material now published on the Internet without quality control.  This is worth paying for.  But the Act also has to be considered not in the abstract, but in the context of what not-for-profit academic societies are already doing to provide that quality control.

The ultimate question really is:  Are commercial publishers providing a service that needs to be protected so vitally that it makes sense to subject the public to double payment?  The history of the Bayh-Dole Act proves that the intuitive answer of “no” is not necessarily the right one.  But what the commercial publishers still need to prove in convincing the scientific community that the answer is “yes” is, in some respects, no different from the issue Heinz Barschall raised 25 years ago.

Knowing Sin

One of the most famous quotations attributed to J. Robert Oppenheimer comes from a lecture he delivered at MIT in 1947, a little more than two years after the destruction caused by the detonation of two atomic bombs in Japan brought a decisive end to the Second World War.  The more than 100,000 deaths that resulted from one of the best-organized scientific projects in history still epitomize the potential of scientific activities to provide tools for devastation.  Oppenheimer said:

Despite the vision and farseeing wisdom of our wartime heads of state, the physicists have felt the peculiarly intimate responsibility for suggesting, for supporting, and in the end, in large measure, for achieving the realization of atomic weapons.  Nor can we forget that these weapons as they were in fact used dramatized so mercilessly the inhumanity and evil of modern war.  In some sort of crude sense which no vulgarity, no humor, no overstatement can quite extinguish, the physicists have known sin; and this is a knowledge which they cannot lose.

J. Robert Oppenheimer

I was reminded of Oppenheimer earlier this month when the U.S. National Science Advisory Board for Biosecurity (“NSABB”) made the decision to interfere with publication of scientific research in the name of security.  The NSABB is an advisory committee whose origins are found in the Committee on Research Standards and Practices to Prevent the Destructive Application of Biotechnology, convened by the National Academies in 2002 when the widespread fear precipitated by the anthrax attacks of 2001 was still fresh in people’s minds.  The so-called “Fink Report,” named for the committee’s chair, included a number of recommendations intended to “ensure responsible oversight for biotechnology research with potential bioterrorism applications,” one of which was the creation of the NSABB.  A copy of the report can be found here.  The primary focus of the NSABB is oversight of “dual-use research,” i.e., biotechnology research that has legitimate scientific purposes but that may also be misused to threaten public health.  In many ways, the comparison with nuclear research is apt:  nuclear research likewise has beneficial applications in power generation and medical imaging alongside its infamous destructive ones.

The action taken this month by the NSABB represents its first intrusion into the independent publication practices of scientific journals.  The issue is research by Dutch scientists on H5N1 bird-flu mutations that would allow considerably easier human transmission of the virus — which has reported mortality rates in the neighborhood of 60%.  While there is a fear of potential terrorist uses, there is no question that the research also has important public-health and viral-research implications.  In its press release, the Board noted that it had asked the editors of Science and Nature, as well as the authors involved, to suppress information long viewed as essential to scientific research, namely the details that allow others to reproduce published results:  “[T]he NSABB recommended that the general conclusions highlighting the novel outcome be published, but that the manuscripts not include the methodological and other details that could enable replication of the experiments by those who would seek to do harm.”  A copy of the full press statement issued by the Board can be found here.

The whole notion of limiting access to details only to those who would not “seek to do harm” is, of course, problematic.  One need only recall the anthrax attacks of 2001, which in many ways provided the original impetus for formation of the NSABB itself.  During investigations of those attacks, the two individuals most prominently identified as subjects of interest by federal prosecutors were U.S. biodefense researchers who had access to classified information.  Restricting publication of information in journals like Science and Nature would have done nothing to prevent those attacks, but it would still remove valid scientific information from the public archive.

Many viral scientists have objected to the move by the NSABB, characterizing it as a form of censorship, which it indeed is.  I admit to considerable sympathy with the views of those critics, and again turn to Oppenheimer, whose words capture the sentiment that most scientists share:

There must be no barriers to freedom of inquiry….  There is no place for dogma in science.  The scientist is free, and must be free to ask any question, to doubt any assertion, to seek for any evidence, to correct any errors.  Our political life is also predicated on openness.  We know that the only way to avoid error is to detect it and that the only way to detect it is to be free to inquire.  And we know that as long as men are free to ask what they must, free to say what they think, free to think what they will, freedom can never be lost, and science never regress.

Is this a hopelessly naïve and unrealistic view?  While I desperately wish it were not, and while I normally argue as passionately as I am able that science is better for its openness, I also have sober moments when I pause uncertainly.  I encourage those who wish to understand both sides of this particular issue to read the blog post by virologist Vincent Racaniello here, particularly the comment added by NSABB member Mike Imperiale.

As an attorney, I frequently find myself defending reports of jury decisions that strike much of the public as outlandish.  My usual response is that it is astonishingly presumptuous to suppose that spending five minutes reading a short news report of a verdict can in any way compare to the weeks of deliberation and examination of documents that caused twelve people to come to some agreement about the issue.  As I face my own initial distaste for suppression of legitimate scientific information, I think of the time that the members of NSABB presumably spent grappling with these issues and I am haunted by my own argument — they are far more knowledgeable about biology than I am and surely as sensitive to the need for science to operate in an atmosphere of openness.

Still, I expect that no matter how genuine their efforts to prevent it, it is merely a matter of time until biologists know sin in the way physicists of the 1940’s did.

Life Finds a Way

“Broadly speaking, the ability of the park is to control the spread of life forms.  Because the history of evolution is that life escapes all barriers.  Life breaks free.  Life expands to new territories.  Painfully, perhaps even dangerously.  But life finds a way.”

I am among those who have both admired the works of Michael Crichton and been concerned that he was at times overly alarmist.  I am thinking of his novel Prey in particular, in which he describes the evolution of predatory swarms of self-replicating homicidal nanobots.  It was an entertaining-enough novel, but unrealistic in its portrayal of the dangers of nanotechnology.  Such is the prerogative of fiction.  I found his book Jurassic Park, from which the above quotation is extracted, to be more measured in its cautions.  Interestingly, Jurassic Park was written in 1990, more than a decade before a real-life occurrence of just what he was describing.  In this case, it was not dinosaurs, of course, but corn.

One of the first so-called “plant pesticides” was StarLink corn, which was genetically engineered to incorporate genes from the bacterium Bacillus thuringiensis, long known to produce insecticidal toxins.  When the Environmental Protection Agency registered StarLink in 1998, it did so with the restriction that the corn be used only in animal feed and industrial products, not consumed by humans as food.  But, as Michael Crichton had pointed out years earlier, life finds a way.

In September 2000, a group of environmental and food-safety groups known as Genetically Engineered Food Alert announced that it had discovered StarLink corn in Taco Bell taco shells, prompting the first recall of food derived from a genetically modified organism.  Things quickly escalated, with some 300 kinds of food items ultimately being recalled because of concerns about the presence of StarLink corn.  Corn farmers protested.  Consumers of corn protested.  And the machinery of government was set in motion through the Food and Drug Administration and the Department of Agriculture to cooperate with the producer of StarLink in containing its spread.

The story of StarLink is a cautionary one that highlights the difficulties that can exist in trying to constrain the will of Nature and has relevance for the increasing use of various forms of nanotechnology.  Materials that fall within the very broad umbrella that “nanotechnology” encompasses are now used in more than 1000 household products, ranging from cosmetics to tennis racquets.  Perhaps even more interesting, though, are the more recent uses of nanoparticles in bone-replacing composites and chemotherapy delivery systems.

The amazing potential of these technologies can be readily appreciated just by considering the delivery of chemotherapy to cancer patients.  There are known substances that can effectively kill tumors in many cases, but current delivery systems expose the patient’s entire body to their toxicity — essentially walking the line of toxicity that will kill the tumor but not the patient, who becomes debilitatingly ill with effects that include nausea, hair loss, bleeding, diarrhea, and many others.  Using nanoparticles to deliver the substances directly to the tumors has the potential both to increase the effectiveness of the treatment and to dramatically reduce the negative impact on the rest of the patient’s body.

This week, I had the privilege of discussing legal aspects of nanotechnology with Dr. Hildegarde Staninger on her broadcast at One Cell One Light Radio.  A copy of the broadcast can be found here.  During our discussion, we touched on the capacity of nanoparticles, by virtue of their extraordinarily small size, to intrude unexpectedly into the environment.  There are known health risks associated with nanoparticles, such as the triggering of autophagic cell death in human lungs caused by polyamidoamine dendrimers, and there are surely unknown health risks as well.  We also discussed government regulation of nanotechnology, specifically how the very breadth of applications for nanotechnology makes that process difficult and how instead efforts have been made to incorporate nanotechnology into the existing regulatory framework.

Interestingly, this week saw one of the first attempts to deviate from that approach.  At the Nanodiagnostics and Nanotherapeutics meeting held at the University of Minnesota, an invited panel discussed draft guidelines developed with the support of the National Institutes of Health to provide for regulatory oversight of medical applications of nanotechnology.  The final recommendations will not be available for some time, and the usual rulemaking procedures for administrative agencies to allow for public comment will need to be completed.  But the draft recommendations provide insight into how a nanotechnology-specific regulatory framework might develop.  Copies of papers by the group published earlier this year can be found here and here (subscriptions required) and the (free) report on the conference recommendations by the journal Nature can be found here.

Briefly, the group appears to be converging on a recommendation for the creation of two additional bodies within the Department of Health and Human Services — an interagency group that consolidates information from other government agencies in evaluating risks and an advisory body that includes expert members of the public.  These strike me as good recommendations, and there is no doubt that the group considering them has weighed the merits and disadvantages of developing an oversight framework specific to the concerns presented by nanotechnology.

As I mentioned to Dr. Staninger during our discussion, it is very much my belief that dialogues that educate the public about the real risks of nanotechnology — not fictional psychopathic nanobot swarms — are needed in developing appropriate and effective regulation.  There are risks to nanotechnology, just as there are with every technology having such enormous benefit, and realistic management of those risks is a part of the process of exploiting them to our benefit.

Tending Towards Savagery

I am no particular fan of the monarchy, but Prince Charles was given a bad rap in 2003 when he called for the Royal Society to consider the environmental and social risks of nanotechnology.  “My first gentle attempt to draw the subject to wider attention resulted in ‘Prince fears grey goo nightmare’ headlines,” he lamented in 2004.  Indeed, while somewhat misguided, the Prince’s efforts to draw attention to these issues were genuine and not far from the mainstream perception that scientists sometimes become so absorbed with their discoveries that they pursue them without sober regard for the potential consequences.  A copy of his article, in which he claims never to have used the expression “grey goo” and makes a reasonable plea to “consider seriously those features that concern non-specialists and not just dismiss those concerns as ill-informed or Luddite,” can be read here.

It is unfortunate that the term “grey goo” has become as inextricably linked with nanotechnology as the term “frankenfood” has become associated with food derived from genetically modified organisms.  The term has its origins in K. Eric Drexler’s 1986 book Engines of Creation:

[A]ssembler-based replicators will therefore be able to do all that life can, and more.  From an evolutionary point of view, this poses an obvious threat to otters, people, cacti, and ferns — to the rich fabric of the biosphere and all that we prize…. 

“Plants” with “leaves” no more efficient than today’s solar cells could out-compete real plants, crowding the biosphere with an inedible foliage.  Tough, omnivorous “bacteria” could out-compete real bacteria:  they could spread like blowing pollen, replicate swiftly, and reduce the biosphere to dust in a matter of days.  Dangerous replicators could easily be too tough, small, and rapidly spreading to stop…. 

Among the cognoscenti of nanotechnology, this threat has become known as the “gray goo problem.”

Even at the time, most scientists largely dismissed Drexler’s description as unrealistic, fanciful, and needlessly alarmist.  The debate most famously culminated in a series of exchanges in 2003 in Chemical and Engineering News between Drexler and Nobel laureate Richard Smalley, whose forceful admonition was applauded by many: 

You and people around you have scared our children.  I don’t expect you to stop, but I hope others in the chemical community will join with me in turning on the light and showing our children that, while our future in the real world will be challenging and there are real risks, there will be no such monster as the self-replicating mechanical nanobot of your dreams.

Drexler did, in the end, recant, conceding in 2004 that “[t]he popular version of the grey-goo idea seems to be that nanotechnology is dangerous because it means building tiny self-replicating robots that could accidentally run away, multiply, and eat the world.  But there’s no need to build anything remotely resembling a runaway replicator, which would be a pointless and difficult engineering task….  This makes fears of accidental runaway replication … quite obsolete.”  But too many others have failed to take note, as sadly highlighted by this month’s bombing of two Mexican professors who work on nanotechnology research.

Responsibility for the most recent bombings, as well as other bombings in April and May, has been claimed by “Individualidades tendiendo a lo Salvaje” (roughly translated into English as “Individuals Tending Towards Savagery”), an antitechnology group that candidly claims Unabomber Ted Kaczynski as its inspiration.  The group even has its own manifesto.  It is not as long as the Unabomber’s but is equally contorted in attempting to justify the use of violence as a means of opposing technological progress.  A copy of the original manifesto can be read here and an English translation can be found here.

The manifesto invokes Drexler in setting out the absurd rationale for the group’s violence:

[Drexler] has mentioned … the possible spread of a grey goo caused by billions of nanoparticles self-replicating themselves voluntarily and uncontrollably throughout the world, destroying the biosphere and completely eliminating all animal, plant, and human life on this planet.  The conclusion of technological advancement will be pathetic, Earth and all those on it will have become a large gray mass, where intelligent nanomachines reign.

No clear-thinking person supports the group’s use of violence.  But at the same time, there are many nonscientists who are suspicious of the motivations that underlie much of scientific research.  One need only look at the recent news to understand the source of that distrust:  just this week, the Presidential Commission for the Study of Bioethical Issues released a report detailing atrocities committed by American scientists in the 1940’s that involved the nonconsensual infection of some 1300 Guatemalans with syphilis, gonorrhea, or chancroid.  There are many other examples where scientists have engaged in questionable practices with a secrecy that is counter to the very precepts of scientific investigation.

“Nanotechnology” is a wonderful set of technologies that have already found their way into more than 1000 commercial products sold in the electronic, medical, cosmetics, and other markets.  But even as its use spreads, many remain concerned that it is unwise to allow it, even if they would not go so far as to bomb the scientists working on the technology.  Here I find myself sympathetic with the real message that Prince Charles was attempting to spread — namely, that the concerns of the nonscientist public need to be addressed, even if those concerns seem ill-conceived.

The Most Noble Ends

“I’ve noticed that everybody that is for abortion has already been born.”

Ronald Reagan, 1980

It is no secret that Ronald Reagan took positions strongly opposed to abortion, at least at the time of his Presidency.  After he won the 1980 election, the first thing he said at his first press conference was that he would “make abortion illegal,” and he maintained a strong anti-abortion stance during his years as President.  He not only consistently opposed efforts to maintain the legality of abortion procedures in the United States, but also implemented peripheral policies that sought to advance the objectives of the so-called “pro-life” movement:  school authorities were required to notify parents if their children sought contraceptives at school clinics, and workers at family-planning clinics who received federal funds were forbidden to present abortion as a medical option to pregnant women.  And, of course, he was opposed to stem-cell research.

It is thus a particular irony that, since her husband’s death, the most poignant of Nancy Reagan’s few comments on policy have been those advocating support for stem-cell research.  It was in 2004, just months after Ronald Reagan’s death, that she publicly responded to President Bush’s decision to limit funding for such research, criticizing his decision and expressing her opinion that too much time had already been wasted discussing the issue.  In 2009, she publicly praised President Obama for his reversal of the Bush policy.

But Obama’s decision to lift restrictions on federal funding for embryonic stem-cell research has not been without consequences.  I commented several months ago here about the case of Sherley v. Sebelius, in which District Court Judge Royce Lamberth held that the Dickey-Wicker Amendment prohibited federal support of such research, a decision that would have had even more impact on the federal funding of stem-cell research than the Bush restrictions did.  In light of the report in Nature last week that induced pluripotent stem cells — which, unlike embryonic stem cells, can be created without the destruction of embryos — might be rejected by a patient’s own immune system, it seems valuable to review what has happened with Sherley since last August.  A copy of the Nature paper can be found here (subscription required).

The Dickey-Wicker Amendment is one that no Congress or Administration — Democrat or Republican — can credibly criticize, since it has been passed every year since 1995 as part of the Labor, Health and Human Services, and Education appropriations acts.  It has been signed not only by Republican President Bush, but also by Democratic Presidents Clinton and Obama, after being enacted by both Democrat- and Republican-controlled legislatures.  After the District Court found that funding of embryonic stem-cell research violated the Amendment and refused to issue a stay until the appellate court reviewed the decision, the Court of Appeals for the District of Columbia Circuit itself stayed the action pending its review.  On April 29, the appellate court issued its ruling, vacating the preliminary injunction and allowing federal funding of embryonic stem-cell research to continue.

In my earlier post, I commented that as “much as I personally support investigations into the use of embryonic stem cells because of their tremendous potential in the treatment of disease, I have difficulty faulting the Court’s decision.”  I found the language of the Dickey-Wicker Amendment unambiguous and dismissed attempts to parse it differently as “contrived.”  Two of the three judges on the Court of Appeals disagreed. 

The Dickey-Wicker Amendment states that “[n]one of the funds made available in this Act may be used for … (2) research in which a human embryo or embryos are destroyed, discarded, or knowingly subjected to risk of injury or death greater than that allowed for research on fetuses in utero ….”  The reasoning of the two-judge majority is one that had been rejected in the lower court, namely to parse what the statute means by “research.”  Essentially, investigations into embryonic stem cells require two phases:  a first phase in which embryos are destroyed and the stem cells are derived; and a second phase in which experiments are performed on the already derived stem cells.  The argument accepted by the appellate court is that federal funding of the first phase is prohibited but not federal funding of the second phase since only the first phase constitutes “research in which a human embryo [is] destroyed.” 

To reach this conclusion, the majority notes the use of the present tense in the statute (“are destroyed” instead of “were destroyed”) and consults some online dictionaries for definitions of the word “research.”  I am always wary of this kind of analysis, which can give the impression of constructing an after-the-fact rationale for a decision improperly made for other reasons.  Such analysis relies too much on the exploitation of loopholes and technicalities instead of principled application of the law as it was written.  I therefore find myself sympathetic with the dissent’s criticism that the judges in the majority have performed “linguistic jujitsu” and “taken a straightforward case of statutory construction and produced a result that would make Rube Goldberg tip his hat.”  A copy of the full opinion and dissent can be read here.

Researchers are generally pleased with the ruling, but in many ways that represents a short-sighted view.  The problem with the Dickey-Wicker Amendment is especially apparent when the procedural posture of Sherley as it now stands is considered.  So far, the only issue that has been resolved is whether experiments involving embryonic stem cells are “research in which a human embryo is destroyed.”  The case now returns to the District Court for consideration of whether such experiments are “research in which a human embryo … is knowingly subjected to risk of injury or death greater than that allowed for research on fetuses in utero,” a question that potentially raises a host of different arguments.  While it is possible to take other procedural actions at this point — requesting a rehearing en banc by the full Court of Appeals or petitioning the Supreme Court to hear the case — those other actions almost certainly represent too much risk to stem-cell researchers.

As difficult as it may be to do politically, it strikes me as much easier to find a way to avoid the annual ritual of having the Dickey-Wicker Amendment added as a rider to funding bills.