
A Fish Story? Brain Maps, Lie Detection, and Personhood
Generations of brain-imaging studies have provided increasingly detailed information about the complexity of human behavior, but few lines of investigation better illustrate the intricacy of the brain’s workings than the neural processes involved in lying or deceiving. And perhaps none make clearer the difficulty of accurately distinguishing between truthfulness and untruthfulness with new imaging technology. Moreover, we must ask ourselves whether the very idea of creating brain maps of behavior and personal identity carries risks, and whether such maps are ready for real-world applications in law, employment, and insurance. When technology of this kind moves out of the hands of researchers and becomes available for practical uses, the lives of individuals and the future of our society may be profoundly affected.

While most of us hold truthfulness in the highest esteem, it is safe to say that all of us have at one time or another engaged in less-than-forthright behavior that could be categorized as deceiving or lying. A deception, the potentially more benign of the two, misleads about an event by omitting or distorting one or more of its components. A lie, by contrast, tends to have frank, whole misinformation at its core. While these broad categories are a useful beginning in trying to understand how we evade the truth—and why—they are blurred by subtle intricacies and not-so-subtle differences across people and perspectives, not only in the definitions of deception and lying but in their moral acceptability.
The margins of egregiousness are sometimes quite clear: A very dark deception or lie, such as that of a murderer denying guilt or a “witness” falsely accusing another of a crime, is clearly distinguishable from the white lie of the fisherman about “the one that got away.” Other falsehoods, however, tend to lie in between on the moral spectrum and are likely to be subject to greater debate. Consider the thief who steals a loaf of bread (and denies it) to feed his hungry family, or the homeless person who feigns an illness to obtain a warm meal and a night’s sleep in a safe hospital bed. What about the elderly person who witnesses a mugging and whose otherwise benign but increasingly frequent (and undisclosed) confabulation puts an innocent person in jail? Or the physician who lies for the patient’s good? The excusability of such falsehoods under certain circumstances tends to obscure their distinctiveness, and, indeed, we barely pause to reflect on their morality. But all behaviors have some kind of neural representation; various deceptions, and perhaps even their moral natures, are turning out to be distinctive in the brain.
The interactions of closely associated memory systems are a key to understanding the biological complexities involved in formulating and executing behaviors intended to mislead. In studying the neural systems we use to lie or deceive, psychology and cognitive-neuroscience researchers have devoted much attention over the past decade to identifying the components of the brain that enable the explicit, conscious, and voluntary recollection of prior experiences. Clinical lesion and neuroimaging studies with positron-emission tomography (PET) or functional magnetic resonance imaging (fMRI) consistently show preferential engagement of the anterior and inferior prefrontal regions of the cerebrum for intentional retrieval of information. The medial temporal region and the hippocampus are also involved in successful conscious recollection. For implicit memory of past experience, which has an unconscious impact on future behavior and performance, only a subset of these regions is engaged. Moreover, convincing evidence exists for a “cross talk” effect, which causes implicit memory to interfere with the formation of new, conscious memories. As a consequence, the impact of old stimuli on new experiences may not be entirely unpredictable, even though we do not have full control of it—all findings that may conceivably be exploited by new lie-detection technology.
Signs of Lies and Deceptions
Electroencephalography (EEG) studies have relied on brain-wave signals called “event-related potentials” (ERPs) to measure truthfulness. In these experiments, volunteers view individual words or pictures on a video screen or hear them through headphones. The stimuli—descriptions of antisocial acts or personality traits, for example—are presented every second or so. The volunteers respond to each with a button press that signifies whether or not they have committed the act described or whether they agree or disagree that the traits referred to describe them. In various studies of this kind, the amplitude of electrical response typically peaks 300 to 500 milliseconds after stimulus presentation, with differences depending on whether volunteers revealed or concealed the truth about their own behaviors, lied about the accuracy of descriptors regarding their own personalities, and so forth.
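To make this kind of measurement concrete, here is a minimal sketch, in Python, of how the mean amplitude in the 300-to-500-millisecond window might be compared between two response conditions. Everything in it (the sampling rate, epoch layout, condition labels, and simulated signals) is an assumption for illustration, not a detail of any study described here.

```python
import numpy as np

# Toy illustration only: compares mean ERP amplitude in the 300-500 ms
# post-stimulus window between two hypothetical conditions. Sampling rate,
# epoch layout, and all data are assumed, not taken from any cited study.

FS = 250                 # sampling rate in Hz (assumed)
EPOCH_START = -0.2       # epochs begin 200 ms before stimulus onset
WINDOW = (0.300, 0.500)  # post-stimulus window of interest, in seconds

def window_indices():
    """Sample indices of the 300-500 ms window within each epoch."""
    start = int((WINDOW[0] - EPOCH_START) * FS)
    stop = int((WINDOW[1] - EPOCH_START) * FS)
    return start, stop

def mean_window_amplitude(epochs):
    """Mean amplitude per trial in the window of interest.

    epochs: array of shape (n_trials, n_samples) for one baseline-corrected
    EEG channel, time-locked to stimulus onset.
    """
    start, stop = window_indices()
    return epochs[:, start:stop].mean(axis=1)

# Simulate 40 one-second epochs of noise per condition, then inject a
# larger positive deflection in the window for the "concealing" condition.
rng = np.random.default_rng(0)
n_samples = int(1.0 * FS)
revealing = rng.normal(0.0, 1.0, (40, n_samples))
concealing = rng.normal(0.0, 1.0, (40, n_samples))
start, stop = window_indices()
concealing[:, start:stop] += 3.0  # simulated P300-like deflection

diff = mean_window_amplitude(concealing).mean() - mean_window_amplitude(revealing).mean()
print(f"Mean amplitude difference (concealing minus revealing): {diff:.2f}")
```

In practice such differences are evaluated statistically across many trials and subjects; the point here is only the arithmetic of the amplitude comparison.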
Drawing on these and other results of “guilty-knowledge” EEG experiments conducted with Emanuel Donchin in the early 1990s, technology researcher Lawrence Farwell has promoted “brain fingerprinting” as a tool for determining whether an individual is in possession of certain knowledge of events, particularly when that knowledge relates to crime. Brain fingerprinting is a step beyond finding response peaks, as in the event-related potentials studies discussed above, relying on a wider range of brain waves emitted in response to relevant and rapidly presented stimuli. When the brain recognizes such stimuli to be consistent with significant information—such as crime-scene details—it responds with a “MERMER” (a “memory and encoding-related multifaceted electroencephalographic response”) that can be measured at the scalp. Unlike polygraph testing, which measures an individual’s fear of getting caught in a lie by tracking physiological markers, brain fingerprinting is purported to measure brain waves that are involuntarily emitted when information stored in the brain is recognized.
Brain fingerprinting played a significant role in the case of Terry Harrington, whose murder conviction was reversed and a new trial ordered after he spent twenty-two years in prison (State of Iowa v. Terry Harrington). The case dates from 1977, when a retired police officer was murdered; Harrington, then 17 years old, was convicted of the crime. In 2000, Harrington underwent brain fingerprinting; his brain did not emit the expected EEG patterns in response to critical details of the murder. Examiners interpreted the results to suggest that Harrington was not present at the murder site, a conclusion corroborated by the fact that his brain did emit the requisite patterns in response to details of his alibi. When confronted with the brain-fingerprinting evidence, the key prosecution witness recanted his testimony, providing successful grounds for reversal.
Farwell and others acknowledge the limitations of brain fingerprinting, however. The most important limitation, as Jennifer Kulynych has argued, is that neither this technique nor others, whether legitimate or controversial, can establish moral culpability or, as the National Academy of Sciences concluded, individual guilt. Nor can it reveal such specifics as where, when, and how a crime occurred. In addition, no lie-detection technique, including the polygraph, has met the standards for admissibility set by the two landmark court rulings governing reliable scientific evidence (Frye v. United States [1923] and Daubert v. Merrell Dow Pharmaceuticals, Inc. [1993]).
Mind Reading Tomorrow?
Even with an apparent legal coup in its history, brain fingerprinting cannot be considered mind-reading technology. Functional magnetic resonance imaging, however, may ultimately become such a technology. In studying mental behavior, fMRI investigators measure changes in oxygenated blood flow in response to stimulus-specific events. Images are produced by subtracting flow data acquired during baseline trials (under conditions such as rest) from data acquired during task trials, and then processing the result according to predetermined statistical thresholds. Some foresee use of this technique as a modern form of polygraphy that may one day transform strategies for verifying truth and detecting lies. At the time of this writing, four fMRI studies specific to lying and lie detection have already appeared in the peer-reviewed literature.
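As a rough illustration of that subtraction-and-threshold logic, the following sketch compares simulated task and baseline signals voxel by voxel and keeps only the voxels that survive a predetermined statistical cutoff. Every number in it (voxel and trial counts, signal values, and the Bonferroni threshold) is a hypothetical stand-in, not a parameter from the studies reviewed below.

```python
import numpy as np
from scipy import stats

# Toy illustration of fMRI subtraction analysis, not a real pipeline:
# all shapes, signal values, and the threshold are hypothetical.

rng = np.random.default_rng(1)
N_VOXELS = 10_000
N_TRIALS = 30

# Simulated per-trial signal at each voxel under baseline (rest) and task
# (e.g., deceptive-response) conditions; the first 50 voxels are given a
# genuine task-related increase.
baseline = rng.normal(100.0, 5.0, (N_TRIALS, N_VOXELS))
task = rng.normal(100.0, 5.0, (N_TRIALS, N_VOXELS))
task[:, :50] += 10.0

# "Subtraction": test the task-minus-baseline difference at every voxel.
t_vals, p_vals = stats.ttest_ind(task, baseline, axis=0)

# Keep only voxels that survive the predetermined statistical threshold
# (here, a Bonferroni-corrected p < 0.05).
active = p_vals < 0.05 / N_VOXELS
print(f"Voxels surviving threshold: {int(active.sum())} of {N_VOXELS}")
```

The choice of threshold matters enormously: too lenient, and noise masquerades as activation; too strict, and genuine signals vanish, which is one root of the false-positive worries raised later in this article.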
First, Daniel Langleben and his group used fMRI to study neural patterns associated with behavior that they structured to be deceptive. In their experiment, volunteers were instructed to confirm or deny, either truthfully or falsely, having a certain playing card in their possession. When subjects gave truthful answers, the fMRI scan showed increased activity in visual and motor cortices. When they were deliberately deceptive, additional activation appeared, including in such areas as the anterior cingulate cortex, which other research has found is involved in self-monitoring and attention. These results are consistent with activity measured at other critical neural-circuit nodes in research studies involving healthy people’s conscious emotions and feelings and in clinical studies of pathological lying. They cause us to wonder whether we might in the future be able to discern not only whether an individual is being deliberately deceptive, but also whether the deception was premeditated.
In the second study, Sean Spence and his colleagues hypothesized that lying could be distinguished from truth-telling by looking at differences in activations in dorsolateral and ventrolateral prefrontal regions often identified with conscious restraint and self-awareness. Subjects in this study gave yes or no answers about their activities of the day (such as whether or not they made their bed); in an interesting twist, to ensure that subjects attended to the task and lied earnestly, they were falsely advised that an investigator would later judge their responses. In addition to increased reaction times for lying, the fMRI measurements associated these responses with greater ventrolateral prefrontal and anterior cingulate activations.
In the third study, Tatia Lee and colleagues focused on malingering as a form of lying. In an effort to simulate malingering as “an intentionally false and fraudulent evocation of a physical or mental illness,” they trained volunteers to feign memory impairment. In one paradigm, the research participants were asked to recall and respond—truthfully or falsely—to target stimuli. In another, they responded to autobiographical information. The investigators reported that when memory impairment was feigned, the activations common to both tasks included bilateral prefrontal areas, temporal and subcortical (caudate) regions, and the left posterior cingulate. The far frontal activations are especially consistent with other findings involving goal-directed behavior, anticipation, cognitive control, and self-regulation.
Finally, Giorgio Ganis and colleagues recently demonstrated high activity in the right anterior frontal cortices when well-rehearsed lies (which they classified as a form of deception) fit into a coherent story. In their paradigm, in which subjects recalled real and fabricated scenarios about life events such as vacation and work, they reported distinct activations for lying versus telling the truth in the anterior prefrontal cortices, the parahippocampal gyrus, the right precuneus, and the left cerebellum.
Related to these studies are classic experiments in which activation patterns have distinguished between false and truthful memory. Such experiments have predicted, on the basis of activation patterns, how well information will later be recalled and, in this past year alone, have produced convincing maps of neural activation associated with the active suppression of memories of past events.
Lie Detection, Mind-Reading, or Neo-Phrenology?
Taken as a whole, the research discussed above suggests that a brain map may be an appealing tool for distinguishing truth from lies and deceptions, and truthful accusations from false ones. A great deal of anticipation thus surrounds the emergence of new functional imaging technologies for practical applications ranging from criminal investigation to employment screening. But significant conceptual and technical obstacles, along with deeper questions of personhood, indicate that it will be a long while before such neurotechnology is ready for prime time.
Conceptual Issues
The cited fMRI studies explored different kinds of lying and deceit. Other lines of inquiry have found new information about innocently false or unreliable memory. Together, the research suggests that significant conceptual problems remain, given the sheer complexity of trying to detect any type of non-truthful behavior. All such behaviors inevitably involve, albeit at different times and in varying degrees, memory, intention, planning, manipulation, and execution, and all are inextricably related to human consciousness and language. Moreover, lying and deceit are practical manifestations of humans’ ability to make inferences about the mental states of others, because to construct a lie, one has to measure the credulousness (or gullibility) of the person or group being lied to. How might neural signatures shed light on one’s ability to infer the recipient’s mental state? How might they differ between good liars and bad liars? Without the capability to characterize and distinguish such signals, detecting neural signs of deceit will remain only a somewhat more sophisticated form of polygraphy.
So how do we get out of this conceptual bind? We would need mountains of rigorous studies to create brain maps that reliably distinguish among lying, deception, infidelity, betrayal, love, loyalty, attention, intention, and even verbal working memory. Even then, we would still need information about motivation and free will, for which we need facts, not brain maps. Perhaps our neuroimaging efforts would be more practically fruitful if they were aimed at the admittedly duller, but potentially simpler, target of detecting not lies, but truth. A neural seat for truthfulness might also prove elusive, but the parts of the brain that collaborate in telling the truth may be easier to reach and interpret.
Technical Issues
Because fMRI now surpasses other neuroimaging techniques in elucidating human behaviors that have social relevance in everyday life, it has been the buzz of the past decade. In my own lab, we have documented an average 56 percent annual increase in the number of such publications, with significant growth in studies involving social attitudes and moral behaviors. Still, many researchers—hard-core engineers and medical-imaging physicists among them—doubt that the technology will ever become sufficiently inexpensive, or small and easy enough to use, outside the laboratory. The challenges of translating laboratory applications to forensic ones will have to be resolved for the endless stream of frontier brain-imaging capabilities—for example, optical imaging, which uses infrared light to capture regional changes in blood concentrations and metabolic activity noninvasively. Paradigms are also needed to ensure appropriate resource allocation, technically and culturally reliable measurements, proper handling of unexpected medical findings, and the mitigation of harm that may arise from false-positive results.
Issues of Personhood
Recent movies, such as Paycheck and Eternal Sunshine of the Spotless Mind, have focused considerable public attention on the way advanced neurotechnologies might be used to explore and modulate the human mind. In Eternal Sunshine, for example, the protagonist seeks to have memories of a tumultuous relationship erased. The procedure, performed in the comfort of his home, is intended to serially obliterate the specific functional loci (“spots”) of his brain in which his mental images register. But as the protagonist realizes that elements of his very personhood—and not merely individual memories—are at stake, he begins to vehemently resist the process. Generations of neuroscience research would in fact confirm that the intimate weaving of memories with an endless array of cognitive experiences complicates such erasures. Even limiting the discussion to the cognitive complexity of lying and deception and the search for their neural signatures, we may safely say that our whole is far more than the sum of its spots.
Still in the Lab, but for How Long?
For the most part, the ability to distinguish, detect, and illustrate lies and deceptions on brain maps still appropriately resides today in the laboratory and in the realms of entertainment and science fiction. While we can reasonably expect future technology to enable real-world applications, there is a long way to go before we can reliably tease out target behaviors from all that makes us up as complex cognitive beings and before we can ensure the legitimate use of that information.
Yet already we are witnessing functional imaging studies of “neurostrategies” for product preference and, in 2004, another (unpublished) study seeking to distinguish brain responses based on political disposition. Such studies are seductive, but I urge caution. The tremendous progress in neuroscience research to date drives the continued innovation that propels knowledge forward. At the same time, however, our new field of neuroethics has brought to the foreground the need for critical thinking about this unprecedented potential to delve into the privacy of our thoughts and to probe who we are. Troubling concerns will arise, for example, about whether real-world applications will be well-intentioned or mischievous, or will produce problems that overshadow the wealth of knowledge to be gained. We need to take these challenges seriously today, and tackle them preemptively, before they become urgent social problems tomorrow that elude straightforward solutions.
Acknowledgments: I thank the Neuroethics Imaging Group, and I am indebted to HFM Van der Loos and Hank Greely for their valuable feedback.
References
- Committee to Review the Scientific Evidence on the Polygraph. The Polygraph and Lie Detection. Washington, DC: The National Academies Press, 2003.
- Farwell LA, Smith SS. “Using brain MERMER testing to detect concealed knowledge despite efforts to conceal.” Journal of Forensic Sciences 2001, 46(1): 1–9.
- Ganis G, Kosslyn SM, Stose S, et al. “Neural correlates of different types of deception: An fMRI investigation.” Cerebral Cortex 2003, 13: 830–836.
- Illes J, Kirschen MP, Gabrieli JDE. “From neuroimaging to neuroethics.” Nature Neuroscience 2003, 6(3): 205.
- Kulynych J. “Psychiatric neuroimaging evidence: A high-tech crystal ball?” Stanford Law Review 1997, 49: 1249–1270.
- Langleben DD, Schroeder L, Maldjian J, et al. “Brain activity during simulated deception: An event-related functional magnetic resonance study.” NeuroImage 2002, 15(3): 727–732.
- Lee TMC, Liu HL, Tan LH. “Lie detection by functional magnetic resonance imaging.” Human Brain Mapping 2002, 15: 157–164.
- Spence SA, Farrow T, Herford A, et al. “A preliminary description of behavioural and functional anatomical correlates of lying.” NeuroImage 2001, 13(6): 477.
- Tierney J. “Using M.R.I.s to see politics on the brain.” New York Times, April 16, 2004.