It has become a complaint heard all too often—much academic research cannot be reproduced. Publishers are taking notice. On 25 April, Nature and its family of research journals introduced new measures that go into effect this month. They aim to improve the consistency and quality of life-science articles by more thoroughly examining statistics and encouraging transparency in the reporting of experimental details. Nature will abolish space restrictions on methods sections, and ask authors to complete a checklist to help them and reviewers identify details that are crucial for replication.

Neuroscientists welcomed the changes but wondered if they go far enough. Some thought the checklist simply articulates standards one already expects from top journals. “What Nature is doing here is a good start,” said Guoping Feng of the McGovern Institute for Brain Research at MIT. Others pointed out that irreproducibility has many causes beyond those Nature addresses with the current guidelines.

Concern over reproducibility in research has grown in recent years, fueled in part by declining success rates in clinical trials. That has led some to question the validity of preclinical models. More broadly, a 2011 study by researchers at Bayer HealthCare in Berlin, Germany, helped galvanize interest in the problem. They reported that out of 67 in-house projects, data from only about a fourth agreed with published findings (see Prinz et al., 2011). At the time, industry researchers at other companies said that Bayer’s experience mirrored theirs.

"From talking to people in industry, I sense they are frustrated with their inability to reproduce a lot of data from academia, even data that appear in top journals," Tony Wyss-Coray, Stanford University, California, told Alzforum. "In our field, for example, they take in models that are published and are supposed to have neurodegeneration, but when they try to reproduce that, they are unable to," he added.

Both journal editors and scientists contacted by Alzforum cautioned that the issue is complex. Christian Haass of Ludwig Maximilians University in Munich, Germany, suggested that education needs to change to ensure that young scientists get a firmer grounding in good scientific practice. Others agreed, noting that sources of irreproducibility can be as basic as inadequate controls, for example failing to prove the specificity of an antibody. Researchers insisted that principal investigators must ultimately take responsibility and stay closer to the original data day to day.

There are many reasons why papers are irreproducible, ranging from innocent mistakes to outright fraud. Showcasing an example from the extreme, rare end of the spectrum, the April 26 New York Times ran a stunning article by a reporter at Science magazine. It delved into the psyche of a leading psychology professor in the Netherlands who had fabricated research for years and, according to the blog Retraction Watch, has withdrawn more than 50 papers.

Many scientists believe that lesser problems are widespread. Feng said that in the course of discussing papers in journal club, his research group periodically comes across what appear to be erroneous data or even potential fraud. Examples include inappropriate splicing of gel bands and re-use of images—the types of problems that were chronicled on the flagrantly irreverent website research-fraud.org until legal threats forced it down in January 2013. Feng suggested that, to avoid such problems, principal investigators work more closely with the original data, and that journals request the original data behind each figure and conclusion.

Workshops convened by the National Institutes of Health to address the issues (see Landis et al., 2012) served as a basis for the new Nature guidelines. These guidelines focus on statistics, blinding, and other experimental design parameters that can bias or weaken findings.

Katrina Kelner, managing editor of Science's research journals, is working on the issue as well. "We are deeply concerned about this issue," she told Alzforum. "Journals can play a part, and we are happy to set high standards, but we get the research a little bit late in the game," she added. "Many of the necessary procedures need to be built into the experimental design from the start." Some scientists, however, feel that journals should pay more attention to reviewers’ comments, particularly when they relate to technical details. Some noted that journal editors have overridden rejections that reviewers recommended because of basic scientific errors. Science Translational Medicine will issue new guidelines toward the end of this month or in early June.

Other researchers said that the publish-or-perish environment, which rewards novel findings over attempts to replicate, aggravates the situation. "The pressure is incredibly high to publish in high-impact journals," noted Haass. "That influences grants, careers, and salaries." Haass believes this could be part of the problem. "Addressing scientific questions in a way to get high-impact papers can bias research, and that is highly dangerous for the field," he said. Others noted that the push to publish novel data means researchers are unwilling even to try to reproduce others' findings. "If the way we get funded does not change, and there is no expectation that it will, then we are driven by creating new findings," said Wyss-Coray. This means principal investigators cannot motivate students or early-career scientists to reproduce results. If replication is expensive or time consuming, the likelihood of it being done shrinks even more, he noted.

What to do? "I think it would be really helpful to engage industry," said Wyss-Coray. "They are not the ultimate arbiters of truth, but there are many reasons why data are not reproducible, and industry could be really helpful in figuring that out," he said. He noted that failing to reproduce data does not necessarily mean the data are wrong. Animals may have different microbiota and respond differently to drugs, for example. Another way to approach this in the AD field would be to have different labs or clinical research organizations test the same compounds in preclinical settings, he suggested. For its part, Science Exchange in Palo Alto, California, launched the Reproducibility Initiative in August 2012 to facilitate replication studies. One published effort to raise standards for preclinical mouse studies addresses their main sources of error and is being adopted by some labs (Shineman et al., 2011).

"Ultimately, we are in science to find the truth," said Haass. "We have fantastic research, and we have weaker research, and while reproducibility is a serious problem, in the end we hope the overall process is self-cleansing."—Tom Fagan and Gabrielle Strobel.

Comments

  1. It is great to see this issue brought out of the closet. It is, of course, a great frustration for those involved in clinical trials. But it is also a frustration for laboratory researchers. I am sure many researchers complain that students or postdocs come to them with an idea based on some paper or other that ultimately is found to contain irreproducible data. Much laboratory-based research would not need to be carried out if we could rely more on published studies. It seems to me the old idea that experiments should be reproduced at least three times within the laboratory prior to publication is still of paramount importance. Perhaps then less scientific "self-cleansing" (to use the term in the article) would need to occur post-publication.

  2. The issue of reproducibility is an important one that also plagues funding organizations, which review unpublished and preliminary data in order to make funding decisions. However, a key distinction needs to be made between early-stage exploratory findings and confirmatory studies, which require much more rigor. To encourage innovation, some “risk-taking” on novel ideas is imperative; the key is transparency.

    The efforts of Nature and other scientific journals to include checklists and other measures to improve the rigor of published studies should be applauded. This work also sets a framework for scientists on how future studies should be designed and reported. Building on these efforts, the Alzheimer's Drug Discovery Foundation will be releasing revised application instructions in June 2013. These instructions include recommendations on target selection, the design of experimental plans, a spreadsheet for chemical property information on compounds, a checklist of considerations for animal model studies based on Shineman et al. (2011), and recommendations for clinical studies. Our goal is to improve transparency and the evaluation process for the proposals we review, while encouraging the submission of novel, early-stage ideas and promoting rigor in confirmatory assessment.

    References:

    Shineman et al. Accelerating drug discovery for Alzheimer's disease: best practices for preclinical animal studies. Alzheimers Res Ther. 2011;3(5):28. PubMed.

References

Paper Citations

  1. Prinz et al. Believe it or not: how much can we rely on published data on potential drug targets? Nat Rev Drug Discov. 2011 Sep;10(9):712. PubMed.
  2. Landis et al. A call for transparent reporting to optimize the predictive value of preclinical research. Nature. 2012 Oct 11;490(7419):187-91. PubMed.
  3. Shineman et al. Accelerating drug discovery for Alzheimer's disease: best practices for preclinical animal studies. Alzheimers Res Ther. 2011;3(5):28. PubMed.
