
The Problem of Replication

[Image: In progress paint by numbers. Credit: Sam Kaplan]

For anyone hoping to see progress in the fight against Alzheimer’s, the failure of the follow-up research was disappointing. But it also points to the necessity and challenges of independently validating published research findings. While reproducibility is considered a bedrock of scientific discovery, there has been growing concern about the quality of recent studies. “Data reproducibility means that the seminal findings of a paper can be reproduced in any qualified lab that has appropriate resources and expertise,” says Lee Ellis, a surgeon and researcher at the University of Texas MD Anderson Cancer Center in Houston. “If you try to reproduce all of the findings in a paper, you’re likely to find some divergent outcomes, but the point of the paper should remain the same.”

But Ellis and others who have explored these issues have found that medical research, including seemingly groundbreaking work, is reproducible less than half the time. “The unspoken rule is that at least 50% and more like 70% of the studies published even in top-tier academic journals can’t be repeated,” says Bruce Booth, a partner at Atlas Venture, a venture capital firm in Boston. “Everyone recognizes reproducibility as a big problem,” says Elizabeth Iorns, a cancer researcher in Palo Alto, Calif., and chief executive of Science Exchange, an online marketplace for scientific resources and expertise.

Many factors contribute to the low odds of reproducibility. The original experiments may have been poorly designed, or there could be problems with how results were analyzed. The trend may also be a symptom of a scientific community in which the job market and funding are tighter than ever, and researchers must publish or perish, leading to a lack of rigor in their research. “It is a dysfunctional scientific climate,” says Ferric Fang, a professor at the University of Washington School of Medicine and editor-in-chief of the journal Infection and Immunity. And because journals favor original research, scientists have little incentive to pursue replicative work.

As intractable as those issues may seem, there are compelling reasons to address them. Pharmaceutical and biotechnology companies depend on academic research when developing new drugs, and erroneous studies waste time and money. “Not being able to rely on research results has made early-stage investing harder,” says Booth, who is an advisor to the Reproducibility Initiative, a network launched by Iorns and other scientists to help researchers independently validate study findings. The National Institutes of Health, meanwhile, has established pilot programs to address replication problems, and some leading science journals are raising the bar on their standards for publication. “Everyone is asking whether this is something we can fix, but it’s clear there are no simple answers,” Fang says.

Repeated experimentation has always been a foundation of scientific discovery. In the 17th century, Robert Boyle, considered the first modern chemist, argued that if findings were to be credible and reliable, they had to be based on methods that independent researchers could learn, assess and replicate. Three centuries later, Austrian philosopher Karl Popper, writing in The Logic of Scientific Discovery in 1934, asserted that “non-reproducible single occurrences are of no significance to science.”

Dossier

1. “Why Most Published Research Findings Are False,” by John Ioannidis, PLOS Medicine, August 2005. In this seminal study on replication, Ioannidis, a Stanford University epidemiologist, uses statistical modeling to argue that most published research findings are likely to be false.

2. “Believe It or Not: How Much Can We Rely on Published Data on Potential Drug Targets?” by F. Prinz, T. Schlange and K. Asadullah, Nature Reviews Drug Discovery, September 2011. In an industry analysis, three Bayer HealthCare scientists report that in-house experiments over four years failed to replicate two-thirds of 67 research studies in the fields of oncology, women’s health and cardiovascular diseases.

3. “Drug Development: Raise Standards for Preclinical Cancer Research,” by C.G. Begley and L. Ellis, Nature, March 29, 2012. A chronicle of scientists at American drug company Amgen who tried to replicate 53 studies they considered landmarks in the basic science of cancer—and were able to replicate only six.
