Neuroethics Q&A: Hank Greely Delves Into Forensic Neuroscience

by Aalok Mehta

September 29, 2008

Ahead of the inaugural annual meeting of the Neuroethics Society, on Nov. 13 and 14 in Washington, D.C., Dana Press writer Aalok Mehta quizzed some of the experts in the field about the implications of neuroscience and its relevance to everyday life. This week: Henry "Hank" Greely, a professor of law at Stanford University, who has been studying bioethical issues for more than a dozen years. During the conference, he will speak on forensic neuroscience, including controversial new lie-detection technologies and how neuroscience may change the treatment of criminal behavior.

You have worked extensively on the possibilities of neuroscience-based lie detection using technologies such as functional magnetic resonance imaging (fMRI) and electroencephalography (EEG), and on its potential uses in judicial systems and military applications. How likely are these scenarios, and what sort of ethical considerations do they imply?


Henry "Hank" Greely

There are already three companies selling fMRI-based lie-detection services in the United States, plus at least one selling an EEG-based lie-detection service, so the scenarios aren’t hypothetical, they’re here. The problem is that we have no good proof and no strong reason to believe those lie-detection services are actually very reliable. But there is almost no regulation of lie detection in the United States, so anybody can come up with anything they want and start selling the service.

And it’s not just an American issue. The New York Times had a piece last month (Sept. 15) about a woman who was convicted of murder in India based largely on a polygraph test and on a new, unproven EEG-based lie-detection test. She’s serving life in prison as a result.

If these technologies do turn out to work reasonably effectively, then we get into another set of questions. Who do we want to use them, under what circumstances, under what kind of controls? That will involve constitutional questions certainly, of the Fifth Amendment and the right against self-incrimination; the Fourth Amendment, search and seizure rights; the Sixth Amendment, right to put on a defense in a criminal case; the Seventh Amendment, right to a jury trial; and maybe the First Amendment.

There are some circumstances in which we would want it to be used—presumably, for example, where a criminal defendant volunteered to take the test to prove he is innocent. There are some places where we wouldn’t want it to be used, such as when an employer asks you about your Internet usage during working hours. And there are some where the issues are going to be hard and tricky. Do we let the military use it on captured soldiers? Do we let them use it on their own recruits? Do we let parents use it on their children—particularly teenage children?

All of this, of course, is contingent—these questions only arise if the technology works. And they arise to a greater or lesser extent depending not so much on the underlying science as on the technical embodiment of the science. So if it takes an hour in an MRI, parents are unlikely to use this on their children. If some easier, cheaper, more portable device is created, then the possible uses expand exponentially.

The Indian decision was controversial. What has the general reaction to that been?

Well, the people I know in the field tend to be skeptical, as I think all scientists and lawyers should be. I think this is crazy—the idea that you’re relying on this test to put people in prison is reckless at best. The test works on a fundamental paradigm that has some plausibility: sort of a recognition signal, the P300 wave, which often appears when someone sees something he recognizes.

But there has been a lot of work on that in the past 20 years in the United States. Peter Rosenfeld at Northwestern is probably the most responsible researcher doing P300 work, and I think he was quoted in the Times article as thinking the Indian work is reckless, baseless, groundless, bad, because there is no peer-reviewed literature backing it up. There are no trials we know of backing it up, no regulatory structure backing it up. We basically have the inventor’s word that this works.

This particular approach is a little more suspect in the United States, because there is an American who has been pushing something quite similar for more than a decade that he calls brain fingerprinting, also based fundamentally on this P300 response, and he also does not publish in the peer-reviewed literature and is considered by many to be a huckster. So this particular EEG-based approach—using, among other things, the P300 wave—has already engendered suspicion among researchers in the United States, and its application in India is equally questionable.

I’ve heard that, at least in the case of fMRI, the technology is better suited to give evidence for innocence rather than guilt.

With one exception, I wouldn’t agree that it is better for innocence than for guilt. I don’t know how good it is for either. Fundamentally, if we don’t know whether this works, we shouldn’t use it for either innocence or guilt.

The one way in which it is skewed a little toward innocence is that, unless you’re willing to take extreme measures, you need the subject’s cooperation in the MRI, because if the subject moves around in the MRI, or even moves his tongue around in his mouth in the MRI, he’ll mess up the results. And that’s visible—you’ll know he’s messed up the results, but you still won’t have any usable results.

Why has it been so difficult to come up with an accepted method of testing these neuroscience-based lie-detection technologies?

First of all, I don’t think anyone has tried very hard. It’s not in the interests of the people pushing the tests to have a really rigorous screening assay because their tests will flunk. And the government doesn’t have a regulatory structure that requires it. If this were a new drug, the Food and Drug Administration would require two sets of randomized, controlled trials. There is no such requirement for any sort of lie detection.

The other answer is, to be fair, that it is pretty hard to come up with such a paradigm. I don’t believe there has been a single lie-detection paradigm that has been replicated in a second laboratory; most of the results are based on studying the usual subjects: undergraduate psychology majors who get roped into being lab animals for a little extra credit.

When you’ve got an undergraduate being told to lie about whether he recognizes the test items, it’s hard to know how much that bears on, say, a criminal saying, “No, I didn’t try to buy cocaine from that officer.” The contexts are very different between when someone is told to lie and really has no strong interest in getting away with it and when somebody makes up his own lie because he is trying to save his own skin, with all of the stress and anxiety that might go along with that.

Doing the second kind of experiment raises some serious ethical issues about research. You can’t very well take undergraduates, randomly arrest them, accuse them of something you think they’re guilty of, scare them into thinking they’re actually going to be imprisoned, and wait to see whether you can detect the lies they tell. So there are some genuine difficulties in coming up with an experimental paradigm that both accurately reflects the real-world uses of lie detection and can ethically be done.

More information:

The Neuroethics Society 

Dana section on Neuroethics 

Q&A with Steven Hyman: What is Neuroethics?

Q&A with Judy Illes: Incidental Findings 

Q&A with Martha Farah: The Business of Neurotech