An eye-opening investigation by Charles Piller for Science found evidence that prominent neuroscientist and top NIH official Eliezer Masliah engaged in research misconduct, including multiple instances of image manipulation and other data anomalies. The findings have called dozens of research papers into question.
The National Institutes of Health conducted its own investigation, which ultimately found Masliah — former head of the division of neuroscience at the National Institute on Aging and one of the leading researchers in Alzheimer’s disease and neurodegenerative disorders — had conducted “falsification and/or fabrication involving re-use and relabel of figure panels representing different experimental results in two publications,” according to a statement from the agency.
In an interview, Piller explained how the misconduct may go even deeper, and offered advice for journalists looking to more thoroughly vet studies or key researchers like Masliah.
This conversation has been lightly edited for clarity and brevity.
What made you take notice of PubPeer comments and concerns about Masliah?
This came out of a much bigger look at PubPeer comments associated with Alzheimer’s and adjacent neuroscience research, done in preparation for my book. I used PubPeer as one point of departure for a larger look at the field and individual scientists.
Generally, the first and last authors have the biggest influence on a study, with the last usually being the principal investigator and the first author taking the biggest share of responsibility for conducting the experiment and for the writing. That’s a good rule of thumb for thinking about how to assess responsibility for possible misconduct, at least initially.
There wasn’t really a huge amount on PubPeer about Masliah, but there was enough to say, “Okay, he’s an important person in the field – not just by virtue of his post at the National Institute on Aging, but because he had published hundreds of papers in neuroscience associated with Alzheimer’s and Parkinson’s.” [Masliah’s] work was not just voluminous. It was highly influential by a number of measures.
This is Masliah’s life’s work. Why would somebody with his reputation take this kind of risk?
I’m not his psychologist, but several factors seem common in scientific misconduct. It would be a mistake to underestimate the human potential for rationalization, especially if a person believes deeply in the importance of what they’re doing.
Some scientists seem to think that making changes to images that make them “prettier” — that is, changing a scientific illustration in small ways so that it looks more clear or aesthetically pleasing — is harmless. I think most scientists and journalists would agree that such alterations are improper. After all, science is usually imperfect. But it’s sufficiently common that some scientists view it as an accepted practice.
The next step from prettying something up is changing an image so that it more clearly supports the experimental premise, but doesn’t fundamentally change the outcome. It’s only a step or two down that path to changing images in a way that fundamentally alters results to fit the experimental hypothesis.
There’s a phenomenon where people feel that they’re so sure of the importance and certainty of their ideas that they think that “fixing” data or images that don’t quite fit their hypotheses seems acceptable. That, of course, is frank scientific misconduct when it occurs.
Also, there have been decades of complacency by institutional authorities that manage and monitor the research enterprise — funders, journals and academic institutions. They’re not doing a very good job at monitoring potential misconduct. We’ve seen it over and over again.
How common do you think this type of misconduct really is?
I think the vast majority of scientists are honest and deeply committed to operating in a principled and scientifically verifiable way. But misconduct and bad behavior are common in science. No one knows how common, but to me it seems clear that they occur at the same rate as in other walks of life. Science has a lot of problems. Does it have more problems than engineering or plumbing or law or other fields? I don’t think so, but I think science has had some difficulty facing up to the integrity issue.
It sounds like the peer review process itself broke down.
There is some truth to that. Peer reviewers are not asked or trained to evaluate studies for potential misconduct, and they do not generally have the skill, the experience and the knowledge to examine images for doctoring. Peer reviewers have said this to me on a number of occasions.
This is a problem that is being faced up to, to a degree, by the journals. Some of them are starting to implement automated AI programs that examine images. But those programs only take you so far; a human being still has to validate the results they produce. That can be costly — cutting into publishers’ profits.
While AI has the potential to search for manipulated images, it can also create manipulated images. How can reporters, reviewers and others even know what’s real?
The experts with whom I have spoken are terrified about it, because the potential for creating either data sets or images out of whole cloth and using those to support a scientific idea is certainly present. I think there are a lot of mischief-makers — such as those behind “paper mills” — out there who might engage in that. On the other hand, I doubt that more than a tiny proportion of scientists would ever imagine doing anything so strange and damaging to the field.
We have such a problem with science being trusted right now. How do these types of stories further erode the trust of people on the receiving end?
I’ve done many stories along these lines, and I lose sleep every time over that very issue. I think scientific research is so vitally important to human development and to drug discovery and to medicine that it’s very disturbing when confidence in medical professionals and scientists is adversely affected by these kinds of problems and scandals. I’m very concerned when a few pundits and commentators make ridiculous extrapolations — “if some studies were faked, all science is suspect” — which has happened in response to some of my articles.
It’s regrettable that that happens, but as journalists, we don’t have any alternative to examining potential misconduct when it can have a disastrous effect on the course of research, the spending of precious and limited research funds, and on patient safety and the safety of clinical trial participants.
We must examine these issues and expose potential wrongdoing when it’s apparent. Think about the alternative: If such problems are exposed by people who don’t have a deeper understanding of science and the scientific research process, the messages they send out to the public can be far worse and far more damaging to the scientific enterprise.
The institutions that should be taking responsibility by being more vigilant are the journals, the funders, the institutions where these scientists work, and federal agencies. They should be doing a better job of monitoring research and addressing potential misconduct when they see it.
What advice do you have for journalists to thoroughly vet a clinical study?
If you’re challenging the veracity of something, you need to take great care to do it with a fair-minded attitude and to give the benefit of the doubt to the scientist. Because of the proliferation of image manipulation in science, it’s beneficial to add this as part of your standard due diligence. Here are some things I always do:
- Check for potential financial conflicts of interest.
- Understand the relationships that scientists have with each other. Try to find colleagues of the scientist whose work you are examining who might have insight about the person and lab culture, as well as experts who know the field and can assess the importance of apparent misconduct in a more disinterested way.
- Have a sense of the history of a scientist’s work and funding. One of the questions people should ask now is: Does PubPeer show a history of being challenged for possible data doctoring or image manipulation? Have they experienced many retractions or expressions of concern by journals, or corrections that would raise an eyebrow?
I certainly would never make a claim of potential image manipulation without going to unimpeachable, or at least very experienced, sources. That means both forensic image analysts and scientists who have an understanding of the field and can interpret the meaningfulness of a finding.
In thinking about possible image doctoring, keep these things in mind: You need to read scientific studies carefully, because the context of the images is key to understanding whether something might be a simple error versus an egregious act. I don’t personally do forensic image analysis. I leave that to people who are deeply experienced, including with the software tools that can assist in the process, which are almost always necessary to make a firm judgment about a dubious image.
Piller’s book about Masliah and others, Doctored: Fraud, Arrogance, and Tragedy in the Quest to Cure Alzheimer’s, is due out Feb. 4, 2025.