
Fearful Symmetry
Probing the Limits of Brain Modeling
One way to do research on the brain, avoiding many of the problems of human or even test-tube research, is to create a computer model—say, of a group of neurons—and see how it acts. This tempting end-run around the biological brain has paid off, so far: Behavior first observed in the model has sometimes been found in the brain. Perhaps, at the end of this promising shortcut, we will arrive at a working model of the brain itself. But if, just for argument’s sake, scientists did achieve that kind of success, writes the author, we would be confronted with choices scarcely conceived outside the realm of science fiction.
Neuroscience is unlike every other science in that we cannot wish it unqualified success. We have no reason not to want to know the last secret of the earth’s weather or geology, but it is easy to imagine that knowing exactly how our brains work might well bring with it some darker consequences. Such information could end up altering our sense of ourselves and of each other, and not necessarily in the direction of increased comfort.
At the moment, of course, we cheer the team on. The scary part of this adventure seems far off, because the job seems so immense. The brain has a hundred billion neurons, each with a thousand or so synaptic connections. Not much is known about any of it. Right now, reducing our ignorance mostly means working with live cells with all their cantankerous unpredictability. Concerns about what might happen when this knot is finally untied are easy to put off. And yet, it is worth considering that neuroscientists have a tool in their kit that just might drop the problem of success in our lap sooner than we think. That tool is the computer simulation, or, in this context, the brain model.
As used in scientific research, computer simulation means the translation of physical systems into information systems; that is, into messages, programs, archives, and networks. In a computer simulation of a ball bouncing off a wall, a program representing the ball would tell a program representing the wall: “X units of force have been transmitted to you at the following coordinates.” The wall program would decode the message, determine which aspects of itself would be affected by the interaction with the ball, boot up whatever programs were needed to calculate the consequences, look up the list of parties needing to know the results, and distribute them accordingly.
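That message-passing picture can be caricatured in a few lines of code. The sketch below invents its classes, its message format, and its (deliberately crude) physics purely for illustration; a real simulation engine would be far more elaborate.

```python
# A bare-bones sketch of the message-passing picture described above.
# Classes, message format, and physics are invented for illustration only.

class Ball:
    def __init__(self, velocity):
        self.velocity = velocity

    def hit(self, wall, coords):
        # "X units of force have been transmitted to you at the following coordinates."
        wall.receive(force=self.velocity, coords=coords, sender=self)

    def notify(self, force, coords):
        # The ball takes up the reflected force and reports the result.
        self.velocity = force
        print(f"ball rebounds with velocity {self.velocity} at {coords}")

class Wall:
    def __init__(self):
        self.subscribers = []   # the "list of parties needing to know the results"

    def receive(self, force, coords, sender):
        rebound = -force        # a perfectly elastic wall simply reflects the force
        for party in self.subscribers + [sender]:
            party.notify(rebound, coords)

wall = Wall()
ball = Ball(velocity=5.0)
ball.hit(wall, coords=(0.0, 2.5))
```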
Computer simulations are a tool of immense importance in almost every science, from cosmology to economics. They permit the testing of theories of operation of systems comprising immense numbers of variables (a cosmology simulation might contain billions of objects) and the subjecting of these theories to “what if” experiments at a rate, variety, and economy that would be impossible either in vitro or in vivo. They constitute the seeds of a new publication medium, integrating results from many laboratories into a coherent presentation that is more accessible, because more interactive, than print. (They are for the same reason a fabulous educational tool.) “Virtual” experiments are never derailed by a bit of dirt in a poorly washed vessel or a confusion over the exact sequence of steps in a procedure or an instrument breaking at the wrong moment. Granted, there are software bugs to deal with, but that swap is usually well worth making.
In no field are these virtues more relevant than in neuroscience. If the landscape that neuroscience investigates were any more delicate, noisy, inaccessible, and complex, nobody would have the heart to even try to explore it. Simulations simplify access to such realities. In the case of neuroscience, the behavioral details of simulated neurons are infinitely easier to record, inspect, and test than anything in a Petri dish could ever be. To take just one instance, biological neurons run at only two speeds: their own and dead. Simulated neurons run at any speed that is convenient for the experimenter.
Self-Organizing Sets of Neurons
Right now, simulations are used mostly to make high-cost live-cell laboratory work more efficient. For instance, Nathan Urban, Ph.D., a professor of biological sciences at Carnegie Mellon University, together with G. Brad Ermentrout, Ph.D., professor of computational biology at the University of Pittsburgh and a specialist in the mathematics of biological synchrony, and Roberto Galán, Ph.D., a postdoctoral researcher at Carnegie Mellon, recently needed a way to test a theory they had developed about neural self-organization.
They might have tested their theory by going into the laboratory, separating out some cells, sticking electrodes in them, introducing signals, and recording the response. Such experiments are, as stated, major investments: frustrating, fragile, and very time consuming. Typically, they yield a farrago of data that is very hard to decode, especially when, as here, the researchers do not yet know what the hypothesis they are testing might actually look like in real life.
So, instead, the group wrote a piece of virtual nervous tissue: a system of interacting formulae, each of which accepted inputs crafted to resemble real synaptic pulses (in timing, amplitude, frequency, and so on), processed them in ways that mimicked the behavior of real neurons, and passed the resulting outputs to other formulae simulating other neurons. The hope was that if the virtual cells behaved as the theory predicted, the group would have a little more confidence that they were onto something real, and a better idea of what to look for.
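What one such “formula” neuron might look like can be sketched briefly. The fragment below assumes a generic leaky integrate-and-fire unit, a textbook stand-in rather than the group’s actual equations, and all of its parameters are invented for illustration.

```python
# A minimal "formula" neuron: a leaky integrate-and-fire unit that accepts
# timed synaptic pulses and emits spikes that could be passed to other units.
# The model and its parameters are illustrative stand-ins only.

TAU = 20.0       # membrane time constant, ms (assumed)
THRESHOLD = 1.0  # firing threshold, arbitrary units (assumed)
DT = 0.1         # simulation time step, ms

def run_neuron(input_spike_times, weight=0.3, duration=200.0):
    """Return the times at which the model cell fires, given incoming pulses."""
    input_steps = {int(round(t / DT)) for t in input_spike_times}
    v = 0.0
    output = []
    for step in range(int(duration / DT)):
        v -= DT * v / TAU                      # membrane potential leaks toward rest
        if step in input_steps:
            v += weight                        # a synaptic pulse arrives
        if v >= THRESHOLD:
            output.append(round(step * DT, 1)) # the cell spikes, then resets
            v = 0.0
    return output

# A burst of closely spaced pulses drives the cell over threshold once.
print(run_neuron([10, 12, 14, 16, 18, 20]))
```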
Urban’s research group was investigating how neurons organize themselves, perhaps the fundamental question facing neuroscience today. The specific case of organized behavior chosen here was synchrony, a relatively rare behavior in which a sub-population of neurons learns to fire in step. Synchrony has been associated with certain brain functions, including coding sensory information and forming short-term memories. Robert Desimone, Ph.D., director of the McGovern Institute for Brain Research at Massachusetts Institute of Technology, has suggested that synchrony might also allow local regions in the brain to attract the interest of the whole organ, on the model of a group of fans chanting in unison in a stadium. For all these speculations, however, the processes that initiate and maintain synchrony are still largely unknown.
Usually, in nature, including human activities, such behaviors are imposed by a central leader, a pacemaker, that acts like a clock and is directly connected, in parallel, to each of the agents to be synchronized. Yet, so far, nobody has found any “master sergeant” neurons organizing and maintaining synchrony in the brain. In most cases known to date, synchrony springs up from below, emerging from the interactions of cells. It was this bottom-up behavior that made synchrony such a useful model of neural self-organization.
Urban and his team posited that each neuron in a synchronized group has the ability to measure and remember whether its nearest neighbor fired just before, just after, or in step with its own signal. If the neighbor’s signal arrived before, the cell shifts its firing cycle backwards (the next spike is triggered a little sooner than usual, after which the firing cycles as usual). If the signal arrived after, the cell shifts its cycle forward, delaying the next spike slightly. If the two events occurred together, nothing changes. Ermentrout had shown that, given certain plausible assumptions about the nature of these corrections, a group of neurons would converge on a single clock, continuously adjusting in the direction of synchrony. The neurons would have organized themselves by listening to each other like the members of an orchestra playing without a conductor.
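The rule itself is simple enough to sketch in code. The toy below assumes a ring of cells, a fixed period, and a single correction gain, none of which come from the team’s actual phase-response model; its only point is that purely local timing corrections can pull a whole group into step.

```python
import random

# A toy version of bottom-up synchronization: each cell nudges the timing of
# its next spike toward that of its nearest neighbor. The ring layout, the
# period, and the correction gain are assumptions made for illustration only.

random.seed(0)

N = 10          # number of model neurons
PERIOD = 100.0  # undisturbed interval between spikes, in ms (assumed)
GAIN = 0.3      # fraction of the timing difference corrected each cycle (assumed)

# Each cell starts out firing at a random point within one period.
spike_times = [random.uniform(0, PERIOD) for _ in range(N)]

for cycle in range(101):
    if cycle % 25 == 0:
        spread = max(spike_times) - min(spike_times)
        print(f"cycle {cycle:3d}: spread of spike times = {spread:6.2f} ms")
    # Each cell fires one period later, shifted toward its neighbor's timing:
    # earlier if the neighbor fired earlier, later if it fired later.
    spike_times = [
        t + PERIOD + GAIN * (spike_times[(i + 1) % N] - t)
        for i, t in enumerate(spike_times)
    ]
```

Run long enough, the spread of firing times shrinks toward zero: synchrony emerges with no conductor anywhere in the code.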
The obvious next question was whether this theory could be found at work in the lives of real neurons. As stated above, the team tested the theory in simulation. Sure enough, synchrony emerged. This success gave them the confidence to check the results against real tissue. They took a slice from the olfactory system of a mouse, applied the test stimulus, and looked for the resetting behavior called for by the theory and found in the simulation. The behavior was not there.
Simulations fail such tests all the time. When they do, investigators have to decide whether the lab work, the software, or the governing concept is the problem. All candidates are plausible. The cells in the Urban simulations were built around the properties of “average” human neurons. Perhaps mouse neurons did not have enough in common with human neurons. The cycles generated by the model were all identical, with the firing appearing at exactly the same point in that neuron’s cycle (except when it was resetting), and with exactly the same number of spikes per unit of time. In biological reality, these cycles are only roughly identical. Perhaps that roughness mattered. There were many other possibilities.
Urban’s experience, however, led him to trust the simulation. He decided the signal was still in there somewhere, but that the in vitro experiment had just missed it. “The [simulated] phenomenon had been really robust,” he says. “You could change lots of things and get the same results. This gave us the confidence to press on.” The team started to analyze the lab experiment, particularly the finer details of the physical impulse supplied to the test neurons: the shape of the charge, the size of the peak, and so forth. As part of this examination, they passed different frequencies for the stimulus impulse through the model. Satisfyingly, some did not work, predicting the failure that had emerged in the lab. They then redesigned the stimulation applied to the tissue in the laboratory dish to match the impulse frequencies that had worked in the simulation. This time, the expected resetting emerged.
This episode shows the technique of simulation to advantage. Neuroscience advanced because one theory explaining the origin and management of a crucial behavior gained strength. When problems arose in the lab, the simulation was used as a kind of flashlight to probe the complications. A bonus was found: The finer detail about stimulus impulses that emerged could be folded back into the model of the neuron, making the entire simulation a bit more accurate and therefore a bit more useful, not just for the Urban team, but for anyone in the world working on those kinds of neurons. The lab work and the simulations worked symbiotically, each advancing the other.
Seeking the “Self” in Self-Control
Examples of computer-simulation work like that one are unfolding everywhere across neuroscience. Todd Braver, Ph.D., associate professor of psychology at Washington University in St. Louis, is using simulations to probe the phenomenon of self-control. As Braver points out, the concept seems weirdly self-referential. Who is the “we” that is controlling “us”? What happens in our brain when we finally learn to refuse that second helping of dessert or to budget properly or to write thank-you notes? In a sense, Braver is looking for the physiologic underpinning of the kind of cognitive mastery we call “hard-won.”
Earlier research in the field found that when a person faces a conflict between impulses—for instance, in which the impulse to grab that slice of banana cake comes into conflict with the solemn vow to stay on a diet—a specific region of the brain (the anterior cingulate cortex, hereafter ACC) lights up. That research suggested that the ACC involves itself when two other regions are locked in a struggle over the control of behavior. The ACC attempts to resolve the conflict in line with the subject’s “higher” wishes—in the case of this example, to stick to the diet. (The ghost of Freud might claim the ACC as the seat of the superego.) Braver was interested in those cases in which the ACC initially fails, but then gets progressively better at asserting itself, until finally (to return to our example) the cake is refused.
One theory as to how this works is that the ACC learns by registering and remembering “failures” (where a failure means going off the diet), the conflicts associated with those failures, and the regions responsible for those conflicts. Whenever this region detects a conflict, it looks at the history of the regions involved and recruits resources accordingly; the more failures, the harder it knows it has to work. Braver had another thought: Maybe the ACC can independently monitor the ambient environment of the regions and learn to associate the presence of specific stimuli with a high probability of conflict. On this alternate theory, when the ACC sees a cake come into the subject’s field of view, alarm bells go off. The theory is that over time the ACC’s grip on the events associated with and preceding the conflict improves. The more it improves, the earlier the ACC can mobilize, and the more effective it is. If a person cannot learn to resist when the cake is right in front of him, maybe he can learn to avoid the situations where cake is likely to present itself, perhaps by telling the waiter to be sure to keep the dessert tray on the other side of the room.
Braver’s thought sounds plausible, but it has a defect: There is zero neuroanatomical evidence of an actual connection between the ACC and the sensory regions of the brain. Although not fatal—the brain has lots of ways of moving signals around— this absence definitely weighs against the theory. If Braver had had to use in vitro methods to test his idea, he might well have passed, since such tests are so expensive and difficult that funders are likely to deem only the best-founded hypotheses worth the risk and investment.
Fortunately, there was a cheaper alternative. Braver and Joshua Brown, Ph.D., a postdoctoral researcher at Washington University, devised a grid of 30-odd interconnected model neurons to represent the ACC. They then found ways of forcing that model to process conflicting tasks under limitations that ensured that sometimes the model would fail. Finally, they built two variants of the model, each representing one of the two competing theories. One variant could detect and keep track of failure histories, and the other could understand and react to (simulated) sensory cues that were strongly associated (but not perfectly so) with failure.
When they ran these models, they found that the cells in the variant keyed to sensory data became significantly more active than the cells running in the competing model, even in runs when there was no failure or even any conflict (essentially because the processing of the sensory data drove increases in the “strength” of the simulated synapses). The researchers then tested this prediction with a functional magnetic resonance imaging (fMRI) study in which human subjects performed the same tasks that had been presented to the models. The real ACCs showed roughly the same activity increases as did the programs simulating the sensory-cue theory. Braver observes that one lesson of this story is that the flexibility and economy of simulations make it easy to test competing hypotheses against each other. Another is that neuroanatomists now have a strong reason to double-check the apparent absence of a connection between the ACC and the outside world.
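The contrast between the two variants can be caricatured in a few lines. The sketch below is not Braver and Brown’s network; its trial structure, probabilities, learning rate, and activity scaling are all invented, and its only purpose is to show why a cue-driven unit ends up active even on trials in which no conflict ever arrives.

```python
import random

# A toy contrast between the two theories described above. Everything about
# the trial structure and the numbers is invented for illustration.

random.seed(1)
LEARNING_RATE = 0.1

w_cue = 0.0      # sensory-cue variant: learned strength of the cue-to-conflict link
failures = 0     # failure-history variant: running tally of past failures

cue_model_quiet_trials = []    # activity on cue-present, conflict-free trials
hist_model_quiet_trials = []

for trial in range(300):
    cue = random.random() < 0.5               # e.g., the cake comes into view
    conflict = cue and random.random() < 0.7  # the cue often, not always, brings conflict
    failure = conflict and random.random() < 0.3

    cue_model_activity = w_cue if cue else 0.0                  # driven by the cue itself
    hist_model_activity = 0.1 * failures if conflict else 0.0   # driven only by conflict

    if cue:      # the cue model learns how well the cue predicts conflict
        w_cue += LEARNING_RATE * ((1.0 if conflict else 0.0) - w_cue)
    if failure:  # the history model simply counts failures
        failures += 1

    if cue and not conflict:   # the telling trials: a cue appears, no conflict follows
        cue_model_quiet_trials.append(cue_model_activity)
        hist_model_quiet_trials.append(hist_model_activity)

print("mean activity on cue-present, conflict-free trials")
print("  sensory-cue variant    :", sum(cue_model_quiet_trials) / len(cue_model_quiet_trials))
print("  failure-history variant:", sum(hist_model_quiet_trials) / len(hist_model_quiet_trials))
```

On the quiet trials the failure-history unit is silent by construction, while the cue-driven unit’s activity tracks how strongly the cue has come to predict trouble; that asymmetry is what the fMRI study then went looking for in real brains.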
But Do We Really Want to Build Brains?
In these examples, Urban and Braver are using modeling as a cross-check on laboratory work, a cheap way of testing competing hypotheses against each other, and a source of provocative possibilities. This is pretty much how every other science uses the technique of simulation.
There is at least one way, however, in which the relation between neuroscience and simulations is unlike that in any other science, and that difference opens the door to a much more ambitious and conflicted application.
In no other science is the target of the simulation another simulation. But today’s reigning paradigm of brain nature, one in which virtually all of us are enmeshed, holds that the organ itself is a computational model, an information-based simulator, a processor of symbols. This means that once a computer model is running the same symbols, the same codes, that model will have captured everything, the entire phenomenon. The explanation and the thing being explained will have become logically interchangeable. A simulation of a river or a cloud or a stream of traffic always diverges from the physical reality being simulated, usually quite soon. A simulation of the brain, if entirely successful, might not.
We feel in our depths that a brain made out of salt, sugar, fat, and water must be fundamentally different from one composed of silicon, aluminum, copper, and gold. However, if you believe in the symbol-processing theory of brain operation, and, frankly, it is hard to even dream up a coherent alternative, then our intuitions must be misplaced. It must be possible, at least in theory, to have a complete model of a human brain (and therefore a human mind) running in a computer.
Urban and Braver’s research, with its hard, neuron-by-neuron slog through the details of individual circuits, makes this day seem very far off, and, of course, it could be. But the profession might get lucky, if lucky is the word we want. Thirty years ago, the great neuroscientist Vernon Mountcastle, Ph.D., now University Professor of Neuroscience Emeritus at the Zanvyl Krieger Mind/Brain Institute of The Johns Hopkins University, suggested that all the functions of the cortex—thought, perception, memory—as different as they seem to us, are really just variations on a single operational theme. Everywhere the cortex is doing the same thing. “All” that remained was to figure out what that was.
Mountcastle’s proposal helped to explain the functional flexibility of the cortex, in which the same region can be taught to handle the inputs of several different senses, depending on need, and its visual uniformity, which makes it look as if it were doing the same thing everywhere. (The degree to which the idea of a common unit of operation can be extended to the non-cortical brain, with its distinct evolutionary history, is an open question.) It also made the brain pleasingly compatible with the rest of nature. Modularity is just what you would expect to find in an evolved organ. Nature is immensely fond of designing a single common unit and reusing it with simple variations. It never reinvents the wheel if it can shrink or expand or add a little color to one already in inventory. Heredity, for instance, rests on basically a single molecule (with variations); the universe of proteins is made by shuffling and reshuffling a few amino acids; animal cells are much alike; and the communications of cells and neurons are likewise stereotyped. Evolution just does not have time to indulge itself with complications. Usually, if it cannot find a simple way to get somewhere, it does not go there.
Ever since Mountcastle published his conjecture, efforts have been made to prove or disprove it. One conceptual approach is to try to match some candidate core function with an anatomical feature that seems to repeat throughout the cortex. Examples of such functions might be pattern matching (identifying similarities and differences) or predicting. It is provoking to reflect that while neuroscience has been struggling to find a core function for the cortex, artificial intelligence has been trying equally hard, and failing equally egregiously, to find an algorithm for pattern matching that would enable computers to see as well as pigeons, or navigate obstacles as well as rats, or hear half as well as bats. Perhaps some day the same bright idea will light up both fields.
Recently, Switzerland’s Ecole Polytechnique Federale de Lausanne struck a deal with International Business Machines (IBM) to use its supercomputing platform “Blue Gene” to model cortical columns in the brain. Cortical columns are defined by the observation of relatively greater densities of vertical and horizontal connections among neurons in layers of the cortex. Approximately a million such columns might be found in the human cortex, each comprising 50,000 neurons; the Lausanne researchers plan to start by modeling rat columns, which are much smaller. Although the function of the cortical column is not known, the hope is that building a model and then exposing it to a realistic environment will instruct scientists on this point.
If a common unit is found, in Lausanne or elsewhere, the information-filtering task will shrink to finding out how just one of those units works and learning the vocabulary of variations played on it. You might only need a few hundred graduate students for that.
If We Do Get Lucky—What Then?
Computer simulation of the brain might get another kind of break. Science is possible at all because nature has the mercy to mix phases (or levels) of seeming order with phases of chaos, depending on the particular scale (whether time, energy, or space) at which we examine a phenomenon. For example, as you move up the scale of size while looking at gases, the chaos of the Brownian motion of swirling individual molecules gives way to a level of organized behavior described by quantities like temperature and pressure. These different levels have their own logic and laws, and these laws can be used to calculate outcomes without worrying about what is happening farther down the scale. A person (or a simulation) can calculate pressure just by knowing the temperature, volume, and amount of a gas (unless you want to get incredibly precise results), without thinking at all about the individual molecules. A pool player can calculate the mechanical interaction of the balls on the table without knowing anything about the physics of their atomic or molecular structure. Each of these levels of order is sealed off—encapsulated—from the complexity underneath, although that complexity ultimately dictates the logic of that higher level.
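That shortcut can be written out directly with the ideal gas law; the numbers below are arbitrary, and the point is simply that no individual molecule appears anywhere in the calculation.

```python
# Pressure computed at the "encapsulated" level: the ideal gas law needs only
# temperature, volume, and amount of gas. The values are arbitrary illustrations.

R = 8.314      # universal gas constant, J/(mol*K)
n = 1.0        # amount of gas, moles
T = 293.15     # temperature, kelvin (about room temperature)
V = 0.024      # volume, cubic meters

P = n * R * T / V   # pressure, pascals
print(f"pressure = {P:.0f} Pa (about one atmosphere)")
```

The answer comes out at roughly one atmosphere without a single molecule being represented anywhere in the program.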
Biological evolution makes something of a specialty in both finding and creating these realms of emergent order. When the pancreas wants to tell the liver to store more glucose, all the pancreas has to do is squirt a little insulin into the blood. It does not need to know anything about how the liver works. The communication is on the level of the organ, not the level of the cell, let alone the molecule. By analogy, if there are levels of emergent order in the brain— and there almost certainly are—then we would be able to simulate brain activity without worrying about neurons or synapses or perhaps even cortical columns. These order hierarchies might be anywhere in the brain, even quite high up, which means that if we only knew enough we might be able to model a mind with the technology that is in the stores today. How soon we will know enough is up to the luck of the laboratory. We might get there in 10 years, or we might need to beaver on for another century.
In other words, the brain might not be anywhere near as complicated as we think it is. All we know for sure is that we are confused, and there is more than one explanation for our confusion. Seventy years ago, protein chemists were convinced the protein was the unit of inheritance and they wandered around in circles a lot, too. Then, one day, Watson and Crick turned on the lights and molecular genetics took off. There is a popular joke among neuroscientists: “It’s a good thing the brain is so complicated, otherwise we would never figure it out.” Perhaps the joke is on us. Maybe the brain is not really that complicated, and that is why we have not figured it out. But surely some day we will.
When and if large-scale brain simulations become possible, they will raise quite intricate ethical questions. One use that is sometimes mentioned for full-brain simulation is as a test-bed for brain treatments. Suppose you found a way to stimulate neural growth in ways that repaired memory deterioration or just improved memory in lab animals. The animals seem to tolerate the intervention just fine. In theory, you should now be able to take your treatment into human testing, but, in reality, no human-subjects or bioethics committee (and especially not the Food and Drug Administration) would ever approve experiments that introduced changes in a human brain without a great deal of data about long-term effects, data that cannot be collected until such experiments, or experiments that raise the same issues, are approved. This Catch-22 seems to stand between a very wide range of brain disorders and dysfunctions, from sensory loss to cognitive deterioration, from dementia to autism, and any hope of a cure—not to mention the even longer list of possible brain enhancements.
In theory, a fully functioning, validated brain model could be used to advance the testing process, putting off for as long as possible the need to use real brains. But if the model runs the same logical processes as real brains do, don’t you run into the same problems? And, if you modify the model to avoid this problem by making it less “real,” doesn’t this reduce its usefulness? Maybe not, but how can you ever know?
An even more difficult question will be the nature of our relations to these models, these entities. By definition, the behavioral output of a successful simulation will respond to (simulated) situations the way real humans would. The simulated entities would be defensive, sympathetic, whimsical, impatient, interested, and generous (or not). If they act just like humans, asking us plaintively where they came from, developing interests and pursuing them, and the like, then it would seem they ought to be treated as human. Among other implications, we would become as reluctant to experiment on the models as we would on real brains. In every other science, the more accurate and comprehensive simulations get, the better. Here, the more progress neuroscience makes, the closer it gets to shutting a door on itself.
On the other hand, simulations by their nature are copyable entities. Push a button—or maybe the simulations would push the button themselves—and you can get a million of them. That fact constitutes what seems like an uncrossable barrier to treating these entities as human, no matter how much like us they might be in other respects. Given infinite reproducibility, there are certain kinds of relations we will not be able to enjoy with our models, no matter how human they seem. Giving them the right to vote is the least of it.
From this distance, these questions seem uniquely difficult. It is hard to conceptualize even candidate answers to them, but even harder to see why we will not have to answer them.