Fact Sheet
Neuroethics: A Focus on Neuroscience and Society
The more we learn about how to change our brains and those of others, the more we need to think about what those changes mean to us, to the people we interact with, and to the larger communities we live in. Rather than wait until a new drug or device changes everything, we should consider in advance what its effects might be, and how we think and feel about them. Many of the questions we should ask fall under the umbrella of neuroethics.
What is Neuroethics?
Neuroethics asks two main types of questions: (1) How do scientists conduct neuroscience research in a responsible and ethical way, making sure we respect research participants and their contributions while producing advances that improve people’s well-being? And (2) As we learn more about our brains—and, consequently, ourselves—how does that change the way we think about what it means to be human, and whether we should change the systems (legal, political, cultural) we live in? The first question is often referred to as “the ethics of neuroscience”; the second, “the neuroscience of ethics.”
The field of neuroethics—like its definition—is a work in progress. This is the perfect time to talk with one another about what we want to do for, and with, our brains and how we can practice brain science in a way that creates a better future for everyone. Part of the discussion is to decide together what values should guide us in the face of these discoveries and new technologies: What should our collective future look like?
For example, how do you feel about the idea of someone “reading” your mood or your thoughts? Do you feel cozy and connected, worried about being judged, curious about how it affects the moods and thoughts of others, or something else?
What scientists and researchers who study the brain and behavior are learning could be translated into positive change for societies and for individual people, but it could also be harmful. Who would make these decisions? Who would enforce them? Will the answers differ depending on where in the world a person lives? How can we ensure our voices are heard?
Here are a few current areas in neuroethics that may be important to discuss:
Considering Brain Imaging
Today, brain imaging techniques such as functional magnetic resonance imaging (fMRI) can help scientists measure brain activity in a general way and learn more about how different brain regions are involved in different mental tasks. Some scientists and engineers think brain scans could someday be used to better understand emotions and desires—or even detect lies. As brain scanning techniques become more advanced, marketing agencies may want to use them to see how you really feel about a political candidate, or to test out different ways to influence your shopping decisions. Crime investigators may want to use them to check your alibi or even test your potential to do harm. Although we are still years away from the technology being used in these ways, everyone in a community needs a voice in determining how and where, exactly, brain scans should be used.
Considering Education
As we discover more about what is happening in the brain as we learn, we can use those discoveries to improve classroom instruction and other learning activities. For example, a deeper understanding of the biology underlying the brain’s memory or motivation systems could help shape training for teachers, or specific assignments for students. It also could influence the development of new educational policies that might unwittingly help some students at the expense of others.
For example, if neuropsychological or neurophysiological data could predict performance, separating high-performing students from lower-performing students, would schools use those measures to select high-scoring students for more advanced enrichment programs, leaving other students who might not have developed as quickly—but still have the innate ability—with less? As we learn more about how the brain develops and what neural processes underlie learning, how do we ensure that our knowledge is being used to help the majority, rather than a rare few? To reduce social inequities rather than reinforce them?
Considering Neuroenhancement
Scientists are hard at work developing drugs, brain-machine interfaces, and brain implants to help people cope with specific medical conditions. But these treatments could also give a boost to people who don’t have attention-deficit hyperactivity disorder, depression, or a missing limb. As these brain-modifying devices and drugs become more commonplace, there may be opportunities for people without medical conditions to use them to help focus on final exams, cheer up after a bad break-up, or enhance their athletic performance. How do we decide who gets access to neuroenhancements? Should access be limited to those who can afford them, to those who have a medically defined need for them, or decided in some other way? Most important: Who will decide who has access and why?
Considering Biological Models
Many scientists are using induced pluripotent stem cells (iPSCs), generated from a person’s blood or skin sample, to help them study neural function and disease. Some rely on new research models such as brain organoids—tiny clumps of neural tissue that can grow in a lab dish—or transplants, putting human neural cells into the brains of animals (see Understanding New Brain Research Models). There are also chimera models, where researchers transplant human stem cells into an animal embryo in order to gain new insights into brain development. Although these more advanced biological models raise many of the same ethical issues seen in other areas of science and technology, they also raise questions specific to neuroethics.
For example, did the people who volunteered to donate the skin cells used to make iPSCs know they would be used to make brain tissues? Would they consider that different from using the cells to make liver or muscle tissue? Also, what happens if chimeras (animals that have had human brain cells transplanted into them) gain new intellectual abilities such as enhanced problem-solving capabilities or language? What if such models develop painful symptoms of the diseases that scientists are trying to understand? What if organoid models develop a human-like consciousness? As these models become more sophisticated—or are used to study certain diseases—what does it mean to properly care for a transplant or chimera model?
Such questions may seem far-fetched given the limitations of today’s biological models, but as research models become more sophisticated, they may not be. It’s critical that scientists, doctors, policy makers, lawyers, ethicists—all of us—consider a wide range of potential moral, ethical, and regulatory issues now in order to put appropriate guardrails in place as such models continue to grow in complexity.
Considering Artificial Intelligence (AI)
AI, a field of computer science focused on developing intelligent machines, has made great strides in the past few years. Historically, computer scientists were inspired by brain science as they developed AI algorithms—and used computational units designed to act like neurons as they sought to build machines with sensing, learning, and communication capabilities.
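As a rough illustration of that idea, the short Python sketch below shows a single artificial “neuron” of the kind many AI algorithms build on: it weighs its incoming signals, sums them, and “fires” if the total crosses a threshold. The inputs, weights, and threshold here are invented purely for illustration and do not describe any particular system mentioned on this page.

# A minimal, illustrative sketch of one artificial "neuron": it computes a
# weighted sum of its inputs and "fires" if the result crosses a threshold,
# loosely mimicking how a biological neuron integrates incoming signals.
# All inputs, weights, and the threshold are made up for illustration.

def artificial_neuron(inputs, weights, threshold=0.5):
    # Weighted sum of incoming signals (the "synaptic" strengths).
    total = sum(signal * weight for signal, weight in zip(inputs, weights))
    # Simple step activation: output 1 ("fire") if the sum reaches the threshold.
    return 1 if total >= threshold else 0

# Example: three input signals with made-up weights.
print(artificial_neuron(inputs=[0.9, 0.2, 0.4], weights=[0.6, 0.1, 0.3]))  # prints 1

Modern AI systems chain together vast numbers of such units and adjust the weights automatically during training, but the basic brain-inspired building block remains the same.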
Modern neuroscientists now leverage AI, including machine learning (ML) algorithms, to help develop models of specific cognitive functions. These algorithms are also being used to predict or diagnose brain-based diseases, as well as to identify and test potential drug therapies. There is also a growing field of neuroprosthetics, devices that are at least partially controlled by the brain and that rely on AI and other algorithms to decode the brain’s signals and guide user control. And those are just a few of AI’s current uses. There’s no doubt that as these algorithms become more sophisticated, scientists will find other potential applications as they continue to explore the brain and its functions.
Advances in AI/ML bring great hope that clinicians can develop and customize therapies to address neurodegenerative and neurological diseases. Basic scientists also hope such tools can help them better understand the brain’s inner workings. But the use of AI in both the clinic and the lab raises a host of important neuroethical issues that must be addressed. For example, could programmed neural implants meant to treat disease override free will? As new algorithms become better at diagnosing brain-based diseases, how will this influence the fields of neurology and psychiatry—as well as the doctor-patient relationship? Once again, it’s imperative that ethicists, scientists, clinicians, and patients talk about these issues now—to help guide the way AI is used in the future.
Considering Research Participation
Today, many people with brain-based diseases agree to participate in clinical research studies to help determine whether a new drug or device is safe and effective. But many of those same people may find themselves unable to gain access to the product once it is approved for use, due to price, location, or other issues. For example, when people with life-threatening depression volunteer to join deep-brain stimulation research projects, they allow a surgeon to open their skull and insert wires and a device that delivers electrical stimulation. With the device turned on, some of the volunteers find they can better manage their mood and return to daily living.
But what happens after the project ends? Should the device be turned off even though it’s helping? Who should pay the high costs of maintaining it (including replacing old batteries)? If medical insurance doesn’t cover experimental devices, should it? While there have long been ethical concerns about ensuring that participants truly understand the risks involved with research studies, some are now arguing that participants should also be given continued access to any treatment or intervention that they helped to develop.
*Updated March 2024