Knowing Sin
Making Sure Good Science Doesn’t Go Bad

Like all tools, scientific advances may be used for good or for ill. As our knowledge about the human brain increases, we will certainly use that knowledge to relieve human suffering in profound and wonderful ways. But the vast promise of the science should not blind us to the possibilities of its misuse. I believe those involved in human neuroscience need to pay attention to the risks that come with the science and to accept the duty to minimize any harm it could cause.
The Dark Side of Science: Some Historical Examples
Nuclear weapons were first detonated in July 1945 and first used in war the following month, killing more than 200,000 people in Hiroshima and Nagasaki. Controversy still rages over whether the use of the atomic bomb in Japan was justified. In the more than 60 years since, it seems impossible to determine whether the deterrence that came with possession of nuclear weapons by both sides of the Cold War saved more lives by preventing a massive “hot” war or cost more lives by prolonging the Cold War. And the broader social and cultural effects of “the Bomb” seem unguessable. Today, the number of countries with nuclear weapons continues to increase, as do the risks that those weapons will fall into the hands of terrorists or other non-government actors. Each hour holds the risk that Hiroshima and Nagasaki will be displaced as the most recent use of nuclear weapons against people. Although the Manhattan Project led to beneficial medical uses of radiation and also to the controversial development of nuclear power, on balance, the verdict on nuclear weapons should probably be the same as the one Chinese premier Zhou Enlai is said to have given when asked about the consequences of the French Revolution, which had taken place more than 150 years earlier: “It is too soon to say.”
Physicists were intimately involved not only in creating nuclear weapons but in advocating for them, as in the August 1939 letter from Albert Einstein and Leo Szilard to President Roosevelt that sparked the federal government’s interest in such weapons. Some of the scientists involved in the development of nuclear weapons were unapologetic about the negative effects of their work, others were deeply troubled, and many were ambivalent.
What the physicists did was, at least, “good science,” however one judges the results. By contrast, I believe that with eugenics, geneticists were responsible for a moral fiasco based on bad science. The eugenics movement was started by Francis Galton, Charles Darwin’s cousin, who believed that human evolution was beginning to move backwards. In Victorian England, he saw “bad” parents having too many children and “good” parents too few. Galton’s answer has come to be called “positive eugenics,” encouraging “good” parents to have more children. In some places, this quickly changed into “negative eugenics,” preventing “bad” parents from having children.
In 1907 Indiana became the first American state to pass legislation requiring sterilization of allegedly inferior parents; over the next 20 years, 30 more states followed its lead. Before eugenics disappeared in America after World War II, about 60,000 men and women had been surgically sterilized by court order, for conditions such as feeblemindedness, alcoholism, insanity, epilepsy, and criminality, which have little or no genetic basis.
Gregor Mendel’s groundbreaking genetics work with pea plants, published in 1866 and ignored, was rediscovered in 1900. It was then quickly and easily applied to humans—too quickly and too easily. American geneticists, led by Charles Davenport at what became the Cold Spring Harbor Laboratory, decided that the simple Mendelian framework of inheritance was important in a wide range of medical conditions, as well as traits such as “pauperism,” “nomadism,” “shiftlessness,” and “thalassophilia” (the love of the sea). This was bad science, extending new and valuable ideas into areas powerfully affecting human lives without rigorous examination or proof.
Scientific opinion began to turn against this grossly oversimplified human genetics during the 1930s. After World War II and the revelation of the depths of Nazi use of eugenics, the movement withered away—too late, of course, for its victims. The eugenics movement extended well beyond scientists, with substantial support on both ends of the political spectrum. But the prestige and authority of science were indispensable to its influence.
Neuroscience arguably had its own ethical disaster in the widespread adoption and use of the surgical procedure usually called the “prefrontal lobotomy.” In 1936, Portuguese neurologist Egas Moniz developed a procedure he called the prefrontal leucotomy to sever the connections between the frontal lobe and the rest of the brain.
Moniz initially used it mainly to treat patients with depression or other affective disorders. But its application was expanded rapidly and recklessly, particularly by Walter Freeman in the United States, where it is estimated that he and his followers lobotomized more than 40,000 patients in the 1940s and 1950s. Moniz won the Nobel Prize in Physiology or Medicine for his invention in 1949, but by the late 1950s the lobotomy had fallen out of favor, as more attention was paid to its side effects, which often included major damage to the personalities and intellectual powers of patients, and as alternative treatments for depression and schizophrenia became available. Although some speak positively of Moniz’s work in the 1930s and 1940s, at least for carefully selected patients, the widespread use of lobotomy in the United States in the 1940s and 1950s seems indefensible.
Risks in Contemporary Brain Science
The development of nuclear weapons involved the ethically ambiguous use of good science. Eugenics was the unethical use of bad science. And the lobotomy can be seen as the extension of pretty good science past its medically, and thus ethically, appropriate limits. All three types of problems could affect the applications of neuroscience to society.
Human biological enhancement is a controversial issue on many levels, from cosmetic surgery to professional sports to selection for genetic traits. Because of the brain’s importance, its enhancement raises particular concern. For millennia, humans have used psychoactive substances, from alcohol to nicotine to caffeine, to enhance mental properties, but brain enhancement is moving into new territory. Consider just two examples, one of bad science and one of good science extended too far.
Today consumers are buying Ginkgo biloba, choline, acetyl-L-carnitine, and other nutritional supplements to improve their mental functioning. These compounds have no benefits proven by rigorous clinical trials; at best, their claims are based on limited or bad science. Yet some risks have been proven, including bleeding complications from Ginkgo biloba for some people and unsafe drops in blood pressure from choline. As a result, consumers are taking unknown risks for scientifically baseless but well-advertised benefits.
On the other hand, healthy and ambitious high school and college students increasingly are using prescription drugs, not to relax or get high, but to help them study. The prescription stimulants Adderall and Ritalin have important and well-documented uses in treating people with attention problems, disorders serious enough to justify the risks of these drugs. But their non-prescribed use by students in an effort to improve normal functioning applies good science past its limits, into an area where the benefits of the drugs do not justify their risks. The promised new wave of drugs to treat neurological diseases, such as the many drugs in clinical trials for Alzheimer’s disease and other memory disorders, will raise a new set of issues about appropriate applications of good science.
Perhaps the most promising, and the most unsettling, area of neuroscience comes from the explosion of neuroimaging tools, which let us watch the workings of living human brains. If, as neuroscientists have come to believe (they have convinced me), the mind is produced by the functioning of the brain, close enough examination of a person’s brain may allow us to know some aspects of that person’s mental state—in a rough sense, to read his mind. This mind reading seems unlikely ever to reach the level of detailed thoughts, but it may already be adequate for deciding whether someone is feeling pain or anger and why. Ultimately, it may be able to say much more, including whether someone is lying.
Already, two commercial firms, Cephos Corporation and the bluntly named No Lie MRI, have announced that in 2006 they plan to offer lie detection services based on magnetic resonance imaging (MRI). Each company has licensed technology based on peer-reviewed publications by respected neuroscientists. The few published research reports, however, are based on small studies, whose subjects are usually college students presented with artificial problems. I think few outside observers believe that the effectiveness of these techniques in the real world—with real people, telling real lies—is close to established. But neuroscience-based lie detection is entirely unregulated. Soon people’s lives may change, for better or for worse, because of a scientifically questionable but financially rewarding interpretation of a brain scan.
The problems caused by inadequate science, though real, pale compared with the challenges that would be posed if lie detection were scientifically valid. Effective lie detection would be the negation of what has been an effective, if not celebrated, right—the right not to tell the truth. The possible invasion of that intimate realm of privacy raises a host of hard questions. Should it be allowed at all? If so, for what purposes and by whom? By the intelligence community to search for terrorists? By the military for field intelligence? By the police to look for burglars? By the government to identify illegal immigrants? By teachers who want to know if the dog really did eat the homework? And for each allowed use and user, should procedural protections be required, such as truly voluntary consent or a judicial warrant? Good science could well be used here for bad ends.
Like mind reading, mind control is an inflammatory term, but our power to do just that is keeping pace with our progress in seeing into the brain at work. Some reasons to develop mind control lie in the laudable goal of helping the mentally ill, whether by lifting crippling depression or preventing psychotic hallucinations. Our increased understanding of the brain will lead to an increasing ability to “adjust” it, through pharmaceuticals or devices, thereby allowing psychologically afflicted people to lead more normal lives. The dangers of implementing such methods too soon, on insufficient evidence of safety and efficacy, always exist, as do the dangers (exemplified by the history of the lobotomy) of implementing them too broadly. And safe and effective applications may be, in some circumstances, the most frightening.
Another set of questions concerns what we should allow people to do voluntarily to their own personalities—and indeed when and whether such decisions are voluntary. But coercion raises even harder issues. The Supreme Court has already allowed mentally ill prisoners to be forcibly medicated so that they can be sane enough to be executed. Should drunk drivers be stripped of the ability to enjoy alcohol, and hence the temptation to abuse it? To what extent should people be “cured” of what they consider to be their personality traits? Should parents be able to use neuroscience to “adjust” their children, something some critics think is already happening with prescription drugs but that new techniques might make more powerful? As a parent of two teenagers, I can imagine the attraction of pills to “help them” clean their rooms or do their homework before the last minute. On the other hand, should the state be allowed to interfere in how parents choose to raise their children? What of the government, near or far, that might use neuroscience to make dissent disappear—not through the bread, sex, and soma of Brave New World or the propaganda and torture of 1984, but with a little blue pill? These are not new issues or new fears, nor do they have clear answers, but the rush of progress in neuroscience gives new importance to finding workable and ethical answers to them.
These are just a few of the many ways neuroscience will raise hard questions for our society. Whole books have been written about neuroethics and its dilemmas. Some of the examples I have discussed may not come to pass; I suspect they will be outnumbered by the problems that no one has yet imagined.
Primum Non Nocere
I do not argue that neuroscience research should be stopped, or slowed, even in areas that might lead to abuses. To hold back on such promising research would raise its own set of moral problems. Neither do I argue that researchers must bear full responsibility for the consequences of their work, any more than parents can be charged with the full moral burden of the acts of their children. Predicting the future is easy, and, if done humbly, useful; predicting the future accurately is impossible. The engineers who created the de Havilland Comet, the first jet airliner, could not have anticipated and cannot be held responsible for September 11.
But researchers can be asked to think about the implications of their work and to take reasonable steps to prevent negative consequences, for individual research subjects and for society as a whole. We cannot make primum non nocere, “first do no harm,” a binding obligation; too often, in spite of the best motives and the most expert execution, harm will occur. But for researchers as well as for physicians, doing no harm can be an aspiration. And that aspiration can encourage us all to think about the ethical, social, and legal consequences of our work.
In February 1975, most of the leading researchers working on recombinant DNA technology, the basic method of genetic engineering, met at the Asilomar Conference Center in California under a moratorium they had declared on their own research until questions about its safety could be answered. This meeting is a powerful example, though also a controversial one. Some argue the conference did too much and held back important research; others insist that it did too little. But no one can deny that molecular biologists, having learned from the experience of nuclear physicists, did face directly at least some of the possible risks of their work.
The potential benefits from neuroscience are breathtaking, but so are some of the potential harms. The increasing talking, writing, and—most important—thinking being done about neuroethics issues is encouraging. This kind of work runs the risk of getting too far ahead of the science and fruitlessly piling speculation on top of conjecture. But if done carefully and modestly, with a solid grounding in the science and an appreciation of both the scientific and the social uncertainties, thinking about neuroethics may help us maximize the benefits and minimize the harms of the revolution in brain science. All of us interested and involved in neuroscience should feel a duty to try to accomplish those ends. I believe it is by no means too early for such a commitment. I am optimistic that it is not too late.
References
- J. Robert Oppenheimer, “Physics in the Contemporary World,” Arthur Dehon Little Memorial Lecture, Massachusetts Institute of Technology, Cambridge, MA, November 25, 1947. The lecture, in which Oppenheimer observed that physicists had “known sin,” was also published in Bulletin of the Atomic Scientists 4 (March 1948): 65–66, and he used the same language in an interview with Time magazine: “Expiation,” Time, February 23, 1948, p. 94.
- Michael S. Gazzaniga, The Ethical Brain (Dana Press, 2005).
- Judy Illes, ed., Neuroethics: Defining the Issues in Theory, Practice and Policy (Oxford University Press, 2006).