Three Nobelists Ask: Are We Ready for the Next Frontier?
Three men whose research in the second half of the 20th century helped to launch a revolution in biological science shared the 2002 Nobel Prize in Physiology or Medicine. Sydney Brenner, John Sulston, and Robert Horvitz are impassioned explorers on the frontiers of brain science. Science writer Victor McElheny, who has reported developments in molecular biology for more than 40 years, asked each of the three scientists to talk about what is next in the quest to understand the human brain—and whether today’s science is ready for the challenges.
Now that the human DNA sequence has been completed, biologists have heightened hopes for rapid increases in understanding the brain. But, in the view of the 2002 winners of the Nobel Prize in Medicine, these hopes will not come to fruition unless researchers can break down stubborn intellectual boundaries and adopt new ways of thinking about how cells work.
Like many scientists, Sydney Brenner, John Sulston, and Robert Horvitz are deeply concerned about intellectual walls that are too high and that, they fear, could particularly hinder the sciences of the brain and nervous system. They say that an overriding challenge is how to “compute” the building and functioning of an animal’s nervous system from the animal’s genes. Doing this computing is expected to require insights from cancer research as well as neurophysiology, genomics, and computer science. Scientists determined to exploit the growing inventory of genes and proteins to understand the brain must talk each other’s language. Can this happen soon enough, or do we risk stalling on the edge of vast new domains?
Brenner, Sulston, and Horvitz were honored by the Nobel Prize committee for their discoveries in the tiny worm Caenorhabditis elegans (or C. elegans)—discoveries that have had enormous impact on brain science. Interviewed in California, England, and Massachusetts, they talked about today’s biology with a mixture of intense hopes and persistent worries.
The 2002 Prize
The 2002 Nobel Prize was exciting because it honored decades of work that wove together many of the threads of the continuing revolution in biology. Working in the Medical Research Council’s Laboratory of Molecular Biology in Cambridge, England, Brenner and his early recruits to the arcane world of the worm used chemically induced mutations and painstaking studies with electron microscopes to ascertain the location of every cell in the organism and to decipher the complex wiring diagram of the worm’s 302 neurons. Sulston used light microscopes to study the worm’s development from egg to adult, a process measured in days. He traced the lineage of every one of the 959 somatic cells of the adult worm, more than 100 fewer than are generated during development.
Why did apparently healthy cells die on the way to maturity? Sulston enlisted Horvitz (who came to Cambridge, England, from the Harvard laboratory of James Watson) in his studies of the fate of the C. elegans cells. Horvitz went on to discover genes in C. elegans that direct the programmed suicide of cells, a process now thought to be underactive in cancer and overactive in neurodegenerative diseases. In the late 1980s, Watson, the first director of the U.S. Human Genome Project, selected C. elegans for the first major push to sequence all the genes of an animal. Sulston, in collaboration with such colleagues as Robert Waterston (then at Washington University in St. Louis), put the worm to work as the path-breaker in mapping the human DNA sequence.
Knowing Everything about a Worm
In harnessing the worm as a multicellular equivalent of the single-cell bacterium Escherichia coli (or E. coli), which until then had dominated molecular biology, Brenner, Sulston, and Horvitz helped to carry molecular biology into a new era. It was an era that would build on the triumphs of what Brenner calls the “platinum era” of the late 1940s and beyond, when bacteria and the viruses that prey on them had played a key role in research—not least in the Watson-Crick discovery of DNA’s structure in 1953.
Bacteria and viruses had been the main tools in detailed gene analysis by pioneering geneticist Seymour Benzer, as well as in the discovery of transfer and messenger RNA and of the first proteins that regulate expression of genes. In bacteria, too, the genetic code was unlocked: Triplets of DNA’s four component bases—the famous A, T, G, and C—specify the order of amino acids that make up the proteins synthesized on ribosomes in the cell’s cytoplasm. In less than a decade after the DNA double helix was discovered, Brenner and François Jacob (of the renowned microbial genetics group at the Pasteur Institute) led in proving the existence of messenger RNA, which carries DNA’s message to the site of protein manufacture. Brenner and his Cambridge office mate, Francis Crick, proved that the genetic code was expressed in triplets.
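The triplet logic that Crick and Brenner established is simple enough to act out in a few lines of code. Below is a minimal sketch in Python; the codon assignments come from the standard genetic code, but the tiny table, the sample sequence, and the function name are illustrative inventions, not anything from the history recounted here.

```python
# Minimal sketch of triplet decoding: read DNA three bases at a time.
# Only a handful of standard-table codons are included; the sequence
# and names are invented for illustration.
CODON_TABLE = {
    "ATG": "Met",  # also the usual start signal
    "TGG": "Trp",
    "GGC": "Gly",
    "AAA": "Lys",
    "TAA": "STOP", "TAG": "STOP", "TGA": "STOP",
}

def translate(dna: str) -> list[str]:
    """Walk the sequence in steps of three, looking up each triplet."""
    protein = []
    for i in range(0, len(dna) - 2, 3):
        residue = CODON_TABLE.get(dna[i:i + 3], "???")
        if residue == "STOP":
            break
        protein.append(residue)
    return protein

print(translate("ATGTGGGGCAAATAA"))  # ['Met', 'Trp', 'Gly', 'Lys']
```

Shifting the reading frame by a single base scrambles every downstream triplet, which is precisely the property Crick and Brenner exploited in the frameshift experiments that proved the code is read in threes.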
But after the A-T-G-C code had been cracked and protein-making mechanisms had been worked out, Brenner and Crick were sure that it was time to move up to the vastly more complex world of multicellular organisms. Yet another cycle began of estimating which grand questions were ripe to be tackled next. It was an exciting crossroads, not so different from today’s, and it was the newly available tools that would define the directions scientists would take. As Brenner once put it: “Progress in science depends on new techniques, new discoveries, and new ideas, probably in that order.”
The problem of how the information in DNA leads to the amino acid sequence of proteins, as Brenner describes it, was one-dimensional. The next peaks to climb were clearly multidimensional: the brain and development of multicellular organisms, particularly of their nervous systems. As Brenner delights in saying, “Work could not wait until perfect methods appeared.” Despite the scorn of Watson, who considered the idea 20 years premature, Brenner was convinced that, by finding a model organism scientists could know everything about, they could harness molecular biology to study development.
After a wearying search, Brenner fixed on his multicellular equivalent of E. coli: the squirming hermaphroditic soil-dweller C. elegans, a transparent creature a millimeter long that develops from a fertilized egg to a hatched larva in about 14 hours and to a mature adult in about three days, produces about 300 progeny, establishes about 5,000 interconnections among its neurons, and has a lifespan measured in weeks. Brenner and his colleagues plunged into the vast problem of how the DNA of a single fertilized C. elegans egg specifies that egg’s development into hundreds of types of specialized cells (including neurons)—each with the same endowment of DNA—that know just which of their genes to switch on or off. Brenner became the fanatical pioneer in exploiting C. elegans to slash into the tangled thicket of development. Although he was ignored or dismissed as a crank, Brenner stuck with it and won.
Turf Busting
Brenner’s prominent eyebrows and big ears, great skill at devising crucial experiments, and frequent use of humor have captured attention for decades. Now 76, he divides his time among England, Singapore, and the Salk Institute in La Jolla, California, juggling speaking, writing, studying up on evolution, and the problem of describing what kind of machine a cell is. Brenner complains, “We are drowning in a sea of data and starving for knowledge,” and he laments, “Today, biology is more about gathering data than hunting down new ideas.”
His view of the human genome sequence is jaundiced. At an international genetics conference in Melbourne, Australia, in July 2003, he noted the frequent comparison of the genome project to putting a man on the moon. Getting a man there was the easy part, said Brenner; “what’s hard is getting him back again.” To Brenner, the key job now is working out how genes are regulated, switched on or switched off, in different organs of the body at different stages of life. He wants today’s gene-obsessed scientists to expand the boundaries of their thinking and to remember that “the real units of function and structure in an organism are cells and not genes.” Neither top-down nor bottom-up analysis will connect the genes to the cell, he argues. Instead, as they search for a new theoretical biology, researchers must look at the cell “middle-out”—up to the work of the whole cell and down to the genes.
A different aspect of information flow preoccupies Sulston, whose trim pepper-and-salt beard gives him the appearance of a friendly sea captain. He retired in 2000 as director of the major gene sequencing center that was established in Hinxton, England, and named for Frederick Sanger, pioneer of sequencing both proteins and nucleic acids. Sulston, who still maintains a small office at Hinxton, described his continuing campaign for open intellectual boundaries, particularly by keeping biology databases, such as the genome sequences of humans and many other species, in the public domain. As he did while leading the English part of the worldwide human genome sequencing campaign, Sulston argues tirelessly for such openness. With the science writer Georgina Ferry, he has written a book about the topic, called The Common Thread, and he spoke about his concern during an April 2003 conference in Cambridge, England, that celebrated the 50th anniversary of publication of the first papers about DNA’s structure.
Without open intellectual boundaries, Sulston is certain that unknown scientists—who very often come up with crucial solutions—will be shut out. In his Nobel lecture in Stockholm, Sulston said, “Proprietary databases don’t work for such basic and broadly needed information as the sequence of the human genome.” He told a New York Times reporter in 1998, “The public needs a structure that will serve biomedical research for the rest of time.”
Openness also has an advocate in the thin, intense Horvitz, now at the Massachusetts Institute of Technology (MIT). In a recent interview, Horvitz worried aloud that huge, obvious, immediate opportunities in brain science—what he calls “the biggies”—may go unexplored unless young scientists are trained to surmount the traditional boundaries—and jargons—of entrenched academic fields and become “polylingual.”
Horvitz’s own work pushes him across these boundaries. He divides his time between cancer research (35 percent) and neurobiology (65 percent). “I interface with the cancer community at least as much as the neuroscience community—but that’s because biology doesn’t draw lines the way many scientists do.” Arguing that crucial topics affect many areas of biology, Horvitz cites “how cells talk to each other. It’s fundamental in growth control for cancer. It’s fundamental for signaling in the nervous system.” Biology doesn’t designate genes for just the nervous system, he said, “There’s an enormous amount of overlap.” The knowledge that neuroscientists need will not be found in traditional brain science alone but in molecular biology, genetics, physics, and chemistry. It also comes from mathematics applied to biology through what is called “bioinformatics.” Horvitz said that many scientists appreciate this intellectually, but “relatively few people know it in their souls.”
Horvitz spoke in an office stacked with piles of paper that had accumulated since the announcement of the Nobel Prize. He said that the current buzz about “systems biology” points strongly to the overall importance of engineering for neuroscience and not merely for the imaging that allows brain function to be followed in real time. “The ultimate systems biology is the nervous system,” said Horvitz. Engineering is “a way of thinking about [such] problems.”
According to Horvitz, bringing all this information to bear on basic questions of perception, learning, memory, emotion, cognition, language, consciousness, and behavior is daunting. “None of us today has been trained to know all of these things.” Indeed, how can one train people to do all these things, when doing them would take “more than a lifetime”? Scientific cultures that have been completely distinct must be brought together. “We have more than an opportunity, we have a responsibility to do this,” said Horvitz. To him, “there are still far too many walls, [and old and young scientists] have a focus that limits their horizons and interactions.” Many universities, including MIT, lack a department that is capable of training a student broadly in the field of neuroscience. “Departments tend to follow traditional lines, [yet] the future of neuroscience does not follow those lines whatsoever.” Neuroscientists of the future must be trained “without blinders,” but, although institutions like MIT have an opportunity to start doing this, it’s not clear that departmental turf defenses will allow it.
Neuroscience: The History of the Future
The three Nobelists agree that, however critical the genome may be, studying it alone does not offer the excitement—and potential—of neuroscience today. Sulston predicted that by the end of this century researchers would unravel the complexities of the human mind. Back in 1975, Brenner told a historian interviewing him: “Like most other scientists, I’m not very interested in history—at least of the past. I’m interested in the history of the future.” For Brenner, Sulston, and Horvitz, that history surely centers on the brain.
At the Melbourne genetics conference, Brenner proclaimed: “The brain is mightier than the genome.” As he has for years, he derided the human genome, of which less than five percent is expressed in the structure of proteins. He favors the “discount genome” of the puffer fish, Fugu, which has about the same number of genes as humans but in a genome one-eighth the size. He led the effort in California and Singapore to decipher the entire Fugu genome, a proposal that the Human Genome Project had rejected. With all that extra DNA in humans, Brenner scoffed, “You could call it survival of the fattest.”
In a talk at MIT in March 2003, Brenner joked that some people want to find out the gene for language by simply subtracting the chimpanzee sequence from the human sequence. He tagged the gene for language as the “Chomsky,” after Noam Chomsky, the renowned pioneer in linguistics. But suppose chimps have discovered that you get into trouble if you talk? Maybe there is a “Chimpsky” gene—that suppresses language.
Brenner also told the audience that he had tackled C. elegans “to get at a nervous system,” an impulse similar to that which drove Seymour Benzer to the genetics of behavior in Drosophila fruit flies, Gunther Stent to the leech, and Crick to vision and, later, consciousness. The brain, Brenner said, “is the most complicated object in living organisms. How do genes map onto the phenotype in the brain?” As always in science, the primary problem was to define “the least we need to know” to formulate an explanation. One cannot study behavior in all circumstances, so one asks, instead, whether one can predict a behavior from a wiring diagram that focuses on hard-wired neural traits.
Many who contemplate this problem are impatient to dive into the complexities of brain function in animals far more elaborate than the worm, which just crawls around, eats, and produces offspring. Brenner recalled for the audience a group at MIT that, years earlier, had tried to simulate the motion of the worm; if the simulation mispredicted a move, they could not change the program. In Brenner’s view, they “had created a cockpit in the brain, trying to drive the worm,” but he does not believe there is a big computer in the brain that drives it. Biology, he said, is unlike mathematics, which he called the “art of the perfect solution.” Instead, “biology is the art of the satisfactory. If it works, it works; if not, forget about it.” For Brenner, there are two kinds of computers: one driven by programs to calculate answers, the other occupied by developing tables—like those of logarithms—where one simply looks something up. In biology, evolution calculates the tables and is not controlled by commands from above.
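Brenner’s question about predicting behavior from a wiring diagram can at least be posed concretely: a connectome is, in effect, a directed graph of neurons and synapses, and the first thing one can compute from it is reachability. The Python sketch below is deliberately naive; the neuron names and connections are hypothetical placeholders, not the published C. elegans wiring diagram, and real nervous systems are not simple lookup structures of this kind, which is close to Brenner’s point.

```python
# Minimal sketch: a wiring diagram as a directed graph, plus a naive
# reachability query. Neuron names and connections are hypothetical
# placeholders, not the real C. elegans connectome.
from collections import defaultdict

synapses = defaultdict(list)  # presynaptic neuron -> postsynaptic neurons
for pre, post in [("sensory_A", "inter_1"),
                  ("inter_1", "motor_X"),
                  ("inter_1", "motor_Y")]:
    synapses[pre].append(post)

def downstream(start: str) -> set[str]:
    """Every neuron reachable from a stimulated starting neuron."""
    seen, stack = set(), [start]
    while stack:
        for nxt in synapses[stack.pop()]:
            if nxt not in seen:
                seen.add(nxt)
                stack.append(nxt)
    return seen

print(downstream("sensory_A"))  # {'inter_1', 'motor_X', 'motor_Y'}
```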
Genes and Mind
Like Brenner and Sulston, Horvitz is concerned about the limits of the human genome’s ability to illuminate how life works. The sequence, he said, is “a very long book, written in a four-letter alphabet in a language we don’t know.” Reading the genome lies ahead. Pointing to his head, Horvitz told an Australian reporter, “If you want the next frontier, [it’s] what’s going on up here.” He added, “We know that what makes us human is fundamentally in our genes. You take a set of human genes, you get a human being; you’re not going to get a koala, and vice versa. But what is it that makes up our mind? How do genes interface with the brain? How does the brain work? What gives us our abilities to deal with the world, and to respond?”
To Horvitz, communication among separate fields is indispensable in attacking the problem of what makes us different from chimpanzees, but those fields “don’t talk to each other.” One group is the neuroscientists trying to deal with higher-level issues of brain function, such as speech. The other is the scientists focused on genomes, for whom “the readout of the genomic difference is clearly what’s responsible for the difference in organisms.” Horvitz recalled encountering highly capable neuroscientists who flatly denied that they even had the vocabulary to cooperate with people in genomics.

The need for such cooperation shows up in the problem of harnessing genetics to the real-time imaging of human brains at work. The images differ from person to person and probably from family to family. Genetics could help explain the differences, but there is still a fundamental divide between the fields.
“Some people tell you that understanding genes would be fundamental to understanding any biologic process. You have other people who will tell you that understanding genes and molecules and even cells will prove to be irrelevant to the interesting problems of the nervous system,” says Horvitz. The latter group, concentrating on “systems neuroscience,” is daunted by the many steps that lie between the genes and behavior. According to Horvitz, some neuroscientists, on the one hand, “think that all of basic biology is at a level that is not important or interesting for a real understanding of brain function.” Some molecular biologists, on the other hand, “think that what the systems neuroscientists do is too complicated to understand and not sufficiently reductionist to ever be relevant.” To Horvitz, “Each of them has the knowledge, each of them has a bias, and both of them are limited. So you want people who are thinking about both.” He hopes places like the McGovern Institute at MIT will promote such conversation.
One of the things at stake here, in Horvitz’s view, is that failure to focus on molecules will prevent discovery of potential targets for drugs. In tackling neurologic disease, the team must include experts in disease, in the workings of the brain, and in genetics as “a route into the molecules.” In worms, every synapse in the wiring diagram is known. But in higher organisms, “we still don’t know how many cell types there are, let alone how they’re all connected.” Collaboration between fields is essential, first, to describe the system and then to analyze it. Genomics could help by profiling “transcripts and proteins in the brain in detail during development, during learning, during other responses to the environment or to experience. You could give a much higher resolution anatomy that will help you tell cell types.”
Neuroscientists are only beginning to think in the same way as cancer biologists, who are using DNA chips and other means of telling which genes are turned on in a cancer cell and which are turned off. These “expression profiles” are finding slight variations between forms of cancer that once were thought to be a single type—even though patients responded very differently to the same treatments. This genomic pathology, which achieves a precision exceeding what is possible through a microscope, is beginning to allow oncologists to steer patients toward the therapy that is right for them. Horvitz praised “the very detailed fingerprint [that will] lead to a revolution, first in cancer diagnostics and second in cancer therapeutics.”
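The logic of an expression profile is easy to caricature in code: each tumor becomes a vector of per-gene activity, and tumors whose vectors track each other closely are candidates for the same subtype. Here is a minimal Python sketch; the samples, gene counts, and values are invented purely for illustration, and real profiling compares thousands of genes with proper statistics.

```python
# Minimal sketch: comparing expression profiles with Pearson correlation.
# Samples and values are invented for illustration only.
import math

profiles = {
    "tumor_1": [8.1, 0.2, 5.5, 0.1],  # activity of four hypothetical genes
    "tumor_2": [7.9, 0.3, 5.1, 0.2],  # resembles tumor_1
    "tumor_3": [0.4, 6.8, 0.3, 7.2],  # a different pattern entirely
}

def correlation(x, y):
    """Pearson correlation between two expression vectors."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    cov = sum((a - mx) * (b - my) for a, b in zip(x, y))
    sx = math.sqrt(sum((a - mx) ** 2 for a in x))
    sy = math.sqrt(sum((b - my) ** 2 for b in y))
    return cov / (sx * sy)

print(correlation(profiles["tumor_1"], profiles["tumor_2"]))  # near +1
print(correlation(profiles["tumor_1"], profiles["tumor_3"]))  # strongly negative
```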
“Brain biopsy is tough,” Horvitz noted. “You can get a little bit of tumor much more readily than you can go in and get a little bit of the Niagara [of noncancerous cells]. If you’re interested in Parkinson’s, you need other kinds of readouts. All of this screams for an interdisciplinary expertise, essentially a breakdown of the Tower of Babel.”
Are We Ready for the Opening Game?
When Brenner discussed his pufferfish genome project with a New York Times reporter a few years ago, he spoke with disgust about a “Stalinized period” in the genome project, “when anyone who wanted to do things differently just didn’t get funded, so anything innovative was denied. They were like the people who came to Christopher Columbus and said, ‘Why try and cross the Atlantic with wooden boats and Spanish sailors when you could just wait for 500 years and get a cheap flight?’ ”
In his Nobel lecture in Stockholm, Brenner said that the need for model organisms to study human biology had disappeared. Once cloning and sequencing took hold in the 1970s, “the new technology liberated genetics from the tyranny of the reproductive cycles of organisms.” Experiments with genes from Fugu showed that they were read the same way when they were transferred into mice. Hence, one should strive now to invent “the means of accurately analyzing large populations of [human] genomes for detailed studies of natural human genetic variation and its correlation with phenotypes of health and disease. I believe that this will be the major challenge in human biology in the next decade.” Mice will be used to validate the human results. “We already have large numbers of diverse genomes, with skilled and expensively trained phenotypers, called doctors, studying them,” says Brenner, who wants to work on a subset of the 500,000 DNA profiles matched with health records that are being gathered by the UK Biobank project.
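Brenner’s proposed program of correlating genomic variation with phenotypes of health and disease reduces, in its simplest form, to counting. As a minimal sketch (the counts are invented, the variant is hypothetical, and real studies require far larger cohorts plus corrections for testing many variants at once), a single variant can be tested against a single disease like this:

```python
# Minimal sketch: does carrying a hypothetical variant travel with disease?
# All counts are invented for illustration.
from scipy.stats import chi2_contingency

# Rows: carriers vs. non-carriers; columns: with disease vs. healthy.
table = [[120,  80],   # carriers:     120 with disease, 80 healthy
         [ 60, 140]]   # non-carriers:  60 with disease, 140 healthy

chi2, p, dof, expected = chi2_contingency(table)
print(f"chi2 = {chi2:.1f}, p = {p:.2g}")  # a small p hints at an association
```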
Evidently, Brenner is impatient to move on to the next big thing. In 1996, he told reporter Jon Cohen of Science magazine: “I don’t like the middle game of science. There are only two games worth playing, the opening game and the end game. And it’s given to very few of us to play the end game, so I like to play the opening game.”
He, Sulston, and Horvitz all appear convinced that to think anew about development in general, and the brain in particular, scientists must buck one of the most powerful trends of the past century—specialization—and find ways to smash the boundaries that it has thrown up. Only then can the next opening game begin.