
The Problem of Replication


Yet while few may question the importance of replication, the technology and complexity of scientific experimentation today can make it enormously challenging. “A lot of techniques in my laboratory take a long time to master, and there’s a steep learning curve before we can reproduce even our own results,” says Fang, a microbiologist. “So another lab saying ‘We’re going to repeat the high-energy UV laser footprinting you just did on those nucleoprotein complexes’ is going to find it very daunting—and that’s just one component of the experiment.”

But the growing complexity of research methodologies is hardly the only reason replication is no longer a routine part of scientific discovery. “A big factor is that scientists have strong incentives to introduce new ideas, but weak ones to confirm the validity of old ideas,” says Brian Nosek, a psychologist at the University of Virginia. “Innovative findings produce rewards of publication, employment and tenure. Replicated findings produce a shrug.”

In fiscal year 2012, the NIH’s reported annual research funding of $31 billion was down by about 17% (adjusted for inflation) from its high in 2003. The number of applicants for NIH grants has soared almost threefold, and the NIH is able to fund fewer than one in five grant proposals. New Ph.D.s must compete for both research dollars and tenure, while senior researchers worry about being able to do the work necessary to extend their careers.

Meanwhile, there may be inadequate training for the postdoctoral students who often play key research roles. And while outright fraud may be rare, it appears to be on the increase. As a percentage of all scientific articles published from January 1973 through May 2012, retractions for fraud or suspected fraud increased tenfold, according to a study Fang and his colleagues published in Proceedings of the National Academy of Sciences in October 2012. “Overt dishonesty is the extreme,” Fang says. “The broad problems of reproducibility have more to do with how the work is presented and how rigorously it has been obtained because of time pressures and the importance of getting positive results.”

Complicating debates about the reasons for low rates of replication is uncertainty about the magnitude of the problem. In a 2005 essay in Public Library of Science (PLOS) Medicine, John Ioannidis, an epidemiologist and professor at Stanford School of Medicine, argued that most published research findings are false, and used statistical models to underscore issues with how studies are conceived and designed. In 2009, Ioannidis and colleagues zeroed in on the repeatability of 18 studies of gene expression published in Nature Genetics in 2005 and 2006. Insufficient data made replication impossible for 16 of the papers.

In 2011, German pharmaceutical company Bayer HealthCare reported in the journal Nature Reviews that its scientists had been unable to reproduce nearly three-quarters of 67 published studies in cardiovascular disease, cancer and women’s health. In most cases, the inability to replicate results led to the termination of research efforts, a trend that may help explain why success rates for clinical drug trials have been declining. “Bayer HealthCare has become more cautious when working with published research targets,” says Khusru Asadullah, head of global biomarkers at Bayer’s Berlin headquarters and an author of the Nature Reviews article. “Targets now have to be better validated internally before we start big projects.”

Dossier

1. “Why Most Published Research Findings Are False,” by John Ioannidis, PLOS Medicine, August 2005. In this seminal essay on replication, Ioannidis, a Stanford University epidemiologist, uses statistical models to argue that most published research findings are false.

2. “Believe It or Not: How Much Can We Rely on Published Data on Potential Drug Targets?” by F. Prinz, T. Schlange and K. Asadullah, Nature Reviews Drug Discovery, September 2011. In an industry analysis, three Bayer HealthCare scientists report that in-house experiments over four years failed to replicate two-thirds of 67 research studies in the fields of oncology, women’s health and cardiovascular diseases.

3. “Drug Development: Raise Standards for Preclinical Cancer Research,” by C.G. Begley and L. Ellis, Nature, March 29, 2012. A chronicle of scientists at American drug company Amgen who tried to replicate 53 studies they considered landmarks in the basic science of cancer—and were able to replicate only six.
