
The Problem of Replication

[Image: paint-by-numbers canvas splattered with green paint. Sam Kaplan; illustration and painting: Chris Malec, Lora Morgenstern, Graciela Bernal, Wesley Soriano; prop styling: Wendy Schelah]

Last year, Lee Ellis of MD Anderson and C. Glenn Begley, former head of global cancer research at pharmaceutical company Amgen, chronicled in the journal Nature how Amgen scientists attempted to replicate 53 landmark cancer studies and found that they could confirm only six. The scientists even consulted with the original investigators, who in some cases were unable to repeat their own experiments. But because Amgen investigators were bound by confidentiality agreements, the paper left many unanswered questions. “They didn’t reveal a list of which studies they couldn’t reproduce,” Fang says.

Begley, now chief scientific officer at TetraLogic Pharmaceuticals, has since provided more details, publishing his analysis, “Six Red Flags for Suspect Work,” in Nature in 2013. “If researchers got the results they liked in the first experiment, they usually didn’t repeat it,” Begley says. Much of today’s research isn’t fudged or fraudulent, he says: “It’s lazy and sloppy.”

New research by Ellis and a team at MD Anderson published in PLOS ONE in 2013 provided yet another perspective on the reproducibility problem. They reported that half of more than 400 respondents at the institution said they had been unable to replicate at least one published study. Seventy-eight percent of the scientists had attempted to contact the authors of the original scientific paper to identify the problem, but only one-third received a helpful response. More than 40% reported difficulties finding an outlet to publish findings that contradicted previous results. Such problems increase the likelihood that “suspect findings may lead to the development of entire drug development or biomarker programs that are doomed to fail,” the authors wrote.

One of the biggest problems, according to researchers at Oregon Health & Science University, is a lack of basic instructions for duplicating experiments. Their study, published in the journal PeerJ in 2013, examined the methods sections of several hundred articles from more than 80 journals and found that almost half of the articles fell short in identifying all of the materials used. They also noted that methods sections had no standard guidelines and varied from one journal to the next, and were often affected by space limitations.

Ellis notes another hurdle to replication: the failure to include negative data in papers. Journals don’t like to publish flawed data, but knowing an experiment sometimes failed, and why, could help other researchers when they run into trouble.

Several prominent journals, including Nature, Science and Science Translational Medicine, are now adopting guidelines to ensure the disclosure of all technical and statistical information that is crucial for reproducibility. Nature now provides more space for methods information and requires more precise information from authors. And to publish in Science, senior authors must sign off on a paper’s primary conclusions. The peer review process is also being scrutinized, with the aim of “increasing transparency,” particularly in analyzing researchers’ statistical measures, says Meagan Phelan, a spokesperson for the American Association for the Advancement of Science, which publishes Science.

Meanwhile, the Reproducibility Initiative has received $1.3 million in funding from the Laura and John Arnold Foundation to replicate key findings from 50 landmark cancer biology studies. The foundation is also financing a related effort, the Reproducibility Project, which Brian Nosek helped establish and which is bringing together more than 180 academic psychologists through a network called the Center for Open Science to replicate 100 papers published in three prominent journals.

Dossier

1. “Why Most Published Research Findings Are False,” by John Ioannidis, PLOS Medicine, August 2005. In this seminal study on replication, Ioannidis, a Stanford University epidemiologist, uses statistical modeling to demonstrate why most published research findings are likely to be false.

2. “Believe It or Not: How Much Can We Rely on Published Data on Potential Drug Targets?” by F. Prinz, T. Schlange and K. Asadullah, Nature Reviews Drug Discovery, September 2011. In an industry analysis, three Bayer HealthCare scientists report that in-house experiments over four years failed to replicate two-thirds of 67 research studies in the fields of oncology, women’s health and cardiovascular diseases.

3. “Drug Development: Raise Standards for Preclinical Cancer Research,” by C.G. Begley and L. Ellis, Nature, March 29, 2012. A chronicle of scientists at American drug company Amgen who tried to replicate 53 studies they considered landmarks in the basic science of cancer—and were able to replicate only six.
