Irreproducible results are an enduring problem that plagues the scientific research community and slows progress, but what can be done about them? The sources and potential fixes are myriad, but in a perspective published June 26 in Science, researchers at the Center for Open Science (COS) in Charlottesville, Virginia, focused on journals. The reason? “Publications are the currency of science,” said Brian Nosek, director of COS and first author of the paper. Nosek, a psychology researcher at the University of Virginia, and more than three dozen co-authors published a list of eight guidelines that journals could implement to boost research transparency and reproducibility. So far, more than 100 journals and 30 organizations have endorsed the guidelines, but it remains to be seen how many will put them into action.
Lack of reproducibility in scientific studies has come under the spotlight in recent years, as several large-scale efforts to replicate preclinical findings have produced dismal results (see Prinz et al., 2011; Begley and Ellis, 2012; Arrowsmith 2011; Vasilevsky et al., 2013). A recent analysis conducted by Leonard Freedman and colleagues at the Global Biological Standards Institute in Washington, D.C., estimated that half of the $56 billion spent on preclinical research every year in the United States funds experiments that cannot be replicated (see Freedman et al., 2015). Freedman and colleagues proposed tackling the problem from the bottom up—by training graduate students and postdocs in proper experimental design, and implementing standards for reagents and protocols more akin to those employed in clinical research.
In their perspective, which is freely available from Science, Nosek and colleagues present a complementary approach aimed at increasing reproducibility from the top down. The guidelines, the product of a November 2014 meeting of the Transparency and Openness Promotion (TOP) committee, encourage journals to adopt a range of standards as part of the publication process. Each of the eight standards has three levels of stringency that journals could enforce (all of them better than level zero, which does nothing to promote transparency). The first five standards concern the proper sharing of citations, data, analytic methods, research materials, and experimental design. At the highest stringency (level 3), researchers would be required to deposit their data and/or methods in public repositories, and some results would have to be independently reproduced prior to publication.
The final three standards focus on mechanisms to promote the sharing of negative results, whether or not they are formally published. Two of them—preregistration of studies and preregistration of analysis plans—call for researchers to register a study’s design, or the plan for analyzing its data, with the journal or a public registry before the research is carried out. Even if a study never gets accepted for publication, its registration remains discoverable online, giving researchers access to negative findings. Preregistration of an analysis plan also allows readers to compare the authors’ original plan with the results ultimately published. The final standard encourages publication of negative results in the form of “registered reports”—replication studies that journals commit to publishing on the strength of their design, regardless of how the results turn out.
A more detailed description of each guideline, as well as a list of journals and organizations that have so far endorsed them, is available on the Center for Open Science website.
If implemented widely across journals, the guidelines could go a long way toward improving transparency and reproducibility in research, Nosek and several commentators agreed. However, the big question is: Will journals move forward on this? Not everyone is convinced. “My suspicion is that everyone will welcome these guidelines, which are generally sensible, but that nothing will change,” commented John Hardy of University College London. Nosek pointed out that until several top journals put the guidelines into action, some researchers may choose to submit their work to journals with less-stringent requirements. As more journals execute the changes, pressure will mount for others to do the same. Some journals have already taken steps to ensure transparency and replicability of the studies that grace their pages; for example, Nature implemented its own guidelines in 2013 (see May 2013 news). Patricia Mabry from the Office of Behavioral and Social Sciences Research at the National Institutes of Health (NIH) felt that journals will adopt the guidelines slowly. “Over time, as the overall culture in science changes (which we believe it already is doing), we expect journals will become comfortable requiring additional measures by authors and reviewers to support reproducibility,” she wrote (see full comment below). Mabry was involved in drafting the guidelines.
In addition to journals, funding agencies are another powerful potential driver of transparency. Nosek said several funding agencies have endorsed the TOP guidelines, which could be applied to the grant application process in much the same way as to journal publication. Last year, the National Institutes of Health, together with Science and Nature Publishing Group, produced a similar set of guidelines for publishers, and the agency now lists the TOP guidelines on its website as well. NIH is also leading its own initiatives to promote reproducibility among researchers (see Jan 2014 news).
Freedman commented that the TOP guidelines are an important step toward boosting reproducibility, but said he favors the bottom-up approach that focuses on researchers. With funding in part from the NIH, Freedman is leading efforts to establish training programs for graduate students and postdocs. These courses cover experimental design and the use of reagents, which should help standardize techniques across labs within a discipline, boosting the quality of data submitted to journals in the first place. He added that journals may lack sufficient incentives to implement changes that cost them time and money, so instilling change at the research level could prove most effective.
John Trojanowski of the University of Pennsylvania agreed that implementing TOP guidelines, especially at the highest level, will be costly and incentives are lacking. However, he added that the research community seems primed for the challenge. “In my view, the Alzheimer’s Disease Neuroimaging Initiative (ADNI) comes the closest among NIH-funded research studies to approaching but not completely meeting level 3 TOP guidelines,” he wrote. “Thus, ADNI shows that the scientific culture, at least in the AD research arena on AD biomarkers, is willing to embrace the concepts outlined in this essay, but the cost is high.”
Finally, academic institutions will also need to change their policies to enhance reproducibility, commented Bruce Lamb of the Cleveland Clinic in Ohio. “Publication in the ‘high-impact’ and ‘high-profile’ journals is so highly valued by most committees on promotion and tenure, and reviewers of grant applications, that this drives researchers to not focus on publishing ‘negative’ findings, even though these are also critical for moving science forward,” he wrote. “Reforms in the journals regarding publication requirements are unlikely to completely solve the problem without considerable and parallel reforms within institutions and granting agencies.” —Jessica Shugart
- Guidelines at Nature Aim to Stem Tide of Irreproducibility
- National Institutes of Health Tackles Irreproducibility Problem
- Prinz F, Schlange T, Asadullah K. Believe it or not: how much can we rely on published data on potential drug targets?. Nat Rev Drug Discov. 2011 Sep;10(9):712. PubMed.
- Begley CG, Ellis LM. Drug development: Raise standards for preclinical cancer research. Nature. 2012 Mar 28;483(7391):531-3. PubMed.
- Arrowsmith J. Trial watch: Phase II failures: 2008-2010. Nat Rev Drug Discov. 2011 May;10(5):328-9. PubMed.
- Vasilevsky NA, Brush MH, Paddock H, Ponting L, Tripathy SJ, Larocca GM, Haendel MA. On the reproducibility of science: unique identification of research resources in the biomedical literature. PeerJ. 2013;1:e148. Epub 2013 Sep 5. PubMed.
- Freedman LP, Cockburn IM, Simcoe TS. The Economics of Reproducibility in Preclinical Research. PLoS Biol. 2015 Jun;13(6):e1002165. Epub 2015 Jun 9. PubMed.
- Karassa FB, Ioannidis JP. Clinical trials: A transparent future for clinical trial reporting. Nat Rev Rheumatol. 2015 Jun;11(6):324-6. Epub 2015 May 5 PubMed.
- Nosek BA, Alter G, Banks GC, Borsboom D, Bowman SD, Breckler SJ, Buck S, Chambers CD, Chin G, Christensen G, Contestabile M, Dafoe A, Eich E, Freese J, Glennerster R, Goroff D, Green DP, Hesse B, Humphreys M, Ishiyama J, Karlan D, Kraut A, Lupia A, Mabry P, Madon TA, Malhotra N, Mayo-Wilson E, McNutt M, Miguel E, Paluck EL, Simonsohn U, Soderberg C, Spellman BA, Turitto J, VandenBos G, Vazire S, Wagenmakers EJ, Wilson R, Yarkoni T. Promoting an open research culture. Science. 2015 Jun 26;348(6242):1422-5. PubMed.