Big Data, Big Concerns
Neuroethics Viewpoint
An article in the Winter issue of Cerebrum magazine and a podcast episode with the author laid out a tantalizing vision of the enormous potential of advances in neuroimaging and so-called Big Data technologies to revolutionize the treatment of neurological disease. But the author made only a fleeting mention of the ethical issues raised by these advances.
Fortunately, the same author—Vince Calhoun, director of the Center for Translational Research in Neuroimaging and Data Science—co-authored a recent article with two experts from the Netherlands that explores the major ethical issues raised by these endeavors in depth. It is a welcome effort to flag potential ethical problems while the field is still at an early stage of development.
Calhoun’s Center is well situated to conduct this work. It is backed by three universities in Atlanta, GA, with complementary strengths and missions: Emory University, which has expertise in brain disorders; Georgia Tech, which is strong in data mining; and Georgia State, which is proficient in neuroscience and psychology.
Calhoun’s Cerebrum article lays out the promise, achievements, and disappointments of the field so far. On the plus side, the knowledge gained from big data and neuroimaging has provided new insights into the workings of the brain. But hopes that the advent of functional magnetic resonance imaging would lead to a clinical breakthrough in assessing and treating mental illness have not yet materialized. Indeed, Calhoun can’t point to any specific examples where neuroimaging is beginning to help the mentally ill, and progress has been slower than he would have liked.
His hope is that within ten years we may have learned enough to update our psychiatric diagnostic criteria and refine the medications we prescribe to treat some mental health disorders.
The research has been slow to reach the scale required. Calhoun traced the evolution of the field through eras in which researchers studied small numbers of subjects, typically 5 to 20; then larger groups comprising hundreds of individuals; then the interactions between networks in the brain, both at rest and while performing tasks. Each era added to the knowledge base, but none has yet led to clinical tools to treat mental health disorders or determine drug delivery strategies.
We are now firmly in what he calls “the era of big data for neuroimaging and psychiatry.” Several studies already scan tens of thousands of individuals over time, and powerful “deep learning” models require vast amounts of data and computing power. Where previous studies focused on group results and averages, the current goal is to make predictions for individuals: how their symptoms will progress and how they will respond to medications. Calhoun finds “considerable reason to be optimistic about the not-so-distant future.”
The article co-authored by Calhoun that explored the ethical issues was published in the journal Human Brain Mapping last July. It analyzed differing approaches in the European Union and the United States toward the use and dissemination of personal health data.
Probably the most important distinction concerns who should be considered to “own” the data and thus have the major say in how it is handled and disseminated. Researchers and universities often believe that the data “belong” to them, and funding agencies in the United States consider institutions the owners of the data. In some cases, the funding agencies dictate that the data be shared.
By contrast, recent laws in Europe give the individuals who participate in studies more rights to determine the extent to which they want their data shared. That puts a greater burden on researchers to protect participants’ privacy and obtain their permission before disseminating personal health data.
Depending on the circumstances, research journals may also demand that the data on which an article is based be uploaded at the time of publication, making the journal the effective owner of that data.
The chief risk in sharing data is that, if it escapes from the research realm or falls into the wrong hands, it can harm the individual whose data has been shared. For example, some studies collect information about substance use and abuse, diseases such as HIV/AIDS, or procedures such as gender reassignment surgery that can stigmatize an individual in some circles.
There are ways to protect the privacy of an individual’s health data without unduly hampering research. The trick is to strike an appropriate balance between risk and benefit.
One approach is to “de-identify” data that directly identifies an individual, such as name, address, and date of birth, along with information on the individual’s physical and mental health or treatments. All such information is stripped from the dataset and replaced by artificial identifiers that can’t be linked to individuals by third parties, such as insurers, but can be traced back by the host researchers if need be.
More robust protection is provided by “fully anonymized” data, which has all personalized data removed and any path back to the original data deleted, making it extremely hard to trace the data back to an individual. However, even this is not foolproof. For example, a large dataset may include people with rare medical conditions, or only a handful of members of a particular ethnic minority, and machine-learning algorithms could use such features to identify a particular individual within certain error margins.
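The practical difference between the two approaches comes down to whether a re-identification key is kept. The article itself presents no code, but a minimal Python sketch, using a made-up record and hypothetical field names, may make the distinction concrete:

```python
import secrets

# Hypothetical participant record; field names are illustrative only.
record = {
    "name": "Jane Doe",
    "address": "12 Main St",
    "date_of_birth": "1984-03-02",
    "diagnosis": "major depressive disorder",
    "scan_file": "sub-0147_task-rest_bold.nii",
}

DIRECT_IDENTIFIERS = {"name", "address", "date_of_birth"}

def de_identify(record, key_table):
    """Replace direct identifiers with an artificial ID.

    The mapping from artificial ID back to the identifiers is kept in
    key_table, so the host researchers can re-link the data if need be,
    but a third party holding only the dataset cannot.
    """
    pseudo_id = secrets.token_hex(8)
    key_table[pseudo_id] = {k: record[k] for k in DIRECT_IDENTIFIERS}
    cleaned = {k: v for k, v in record.items() if k not in DIRECT_IDENTIFIERS}
    cleaned["participant_id"] = pseudo_id
    return cleaned

def anonymize(record):
    """Strip direct identifiers and keep no key table at all.

    Once the link is deleted, even the researchers cannot trace the data
    back, although rare conditions in the remaining fields can still make
    re-identification possible in principle.
    """
    cleaned = {k: v for k, v in record.items() if k not in DIRECT_IDENTIFIERS}
    cleaned["participant_id"] = secrets.token_hex(8)
    return cleaned

key_table = {}
print(de_identify(record, key_table))  # re-linkable via key_table
print(anonymize(record))               # no path back to the individual
```

This is only a sketch: real de-identification pipelines must also handle indirect identifiers (dates, locations, rare diagnoses), which is exactly why, as noted above, even fully anonymized data is not foolproof.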
The most frightening possibility to me is that some people may actually want to link the data to an individual. It is not far-fetched to worry that insurers, employers, or law enforcement agencies might want your personal data. Indeed, brain scans have already been used in court as evidence. As recently as February, the journal BioTechniques published an article online entitled “Inside the brain of a killer: the ethics of neuroimaging in a criminal conviction.”
A study published in Proceedings of the National Academy of Sciences in March 2017 found that, in a laboratory setting, brain scans were able to distinguish between hard-core criminal intent and simply reckless behavior. But a writer in Science cautioned that the approach was “far from being ready for the courtroom.”
Calhoun and his co-authors side more with the rights of individuals than with the researchers and institutions that collect data from them, but they seek a balance that will both protect participants’ privacy and allow important science to advance. They call for the research community to work with attorneys and ethicists to determine how best to make important advances in medical research while protecting the data of human subjects.
There is no question that data sharing will entail some level of risk. A leak of sensitive information might prove harmful to an individual. But the authors worry that concerns have outpaced reality. “While we do not intend to minimize the importance of data security,” they write, “there is a certain fear that has emerged regarding data sharing where it has become greater than life.” In their view, the risks are less real monsters than imaginary “monsters under the bed.”
The best path forward, the authors say, is for researchers, through discussions with participants and information provided on consent forms, to let participants decide whether they are willing to have their data shared, and under what circumstances. There may be no direct benefit to them, but those willing to share would be doing science and their fellow citizens a valuable service.
—
Phil Boffey is a former deputy editor of the New York Times Editorial Board and editorial page writer, focusing primarily on the impacts of science and health on society. He was also editor of Science Times and a member of two teams that won Pulitzer Prizes.
The views and opinions expressed are those of the author and do not imply endorsement by the Dana Foundation.
This article first appeared in the Spring 2021 issue of our Cerebrum magazine.