
The cancer test

Date: 2015-06-29

  A nonprofit’s effort to replicate 50 top cancer papers is shaking up labs.

  

   By Jocelyn Kaiser

 

  

     The email that arrived in Richard Young's inbox in October 2013 was polite but firm. The writer was part of a group of researchers who “are conducting a study to investigate the reproducibility of recent research findings in cancer biology.” A paper that Young, a biologist at the Massachusetts Institute of Technology in Cambridge, had published in Cell in 2012 on how a protein called c-Myc spurs tumor growth was among 50 high-impact papers chosen for scrutiny by the Reproducibility Project: Cancer Biology. The group might need help with materials and advice on experimental design, the message said. It also promised that the project would “share our procedure” to ensure “a fair replication.”

 

  Young wrote back that a European lab had already published a replication of his study. No matter, the project's representative replied, they still wanted to repeat it. But they needed more information about the protocol. After weeks of emails back and forth and scrambling by graduate students and postdocs to spell out procedures in intricate detail, the group clarified that they did not want to replicate the 30 or so experiments in the Cell paper, but just four described in a single key figure. And those experiments would be performed not by another academic lab working in the same area, but by an unnamed contract research organization.

 

  This past January, the cancer reproducibility project published its protocol for replicating the experiments, and the waiting began for Young to see whether his work would hold up in their hands. He says that if the project does match his results, it will be unsurprising—the paper's findings have already been reproduced. If it doesn't, a lack of expertise in the replicating lab may be responsible. Either way, the project seems a waste of time, Young says. “I am a huge fan of reproducibility. But this mechanism is not the way to test it.”

 

     That is a typical reaction from investigators whose work is being scrutinized by the cancer reproducibility project, an ambitious, open-science effort to test whether key findings in Science, Nature, Cell, and other top journals can be reproduced by independent labs. Almost every scientist targeted by the project who spoke with Science agrees that studies in cancer biology, as in many other fields, too often turn out to be irreproducible, for reasons such as problematic reagents and the fickleness of biological systems. But few feel comfortable with this particular effort, which plans to announce its findings in coming months. Their reactions range from annoyance to anxiety to outrage. “It's an admirable, ambitious effort. I like the concept,” says cancer geneticist Todd Golub of the Broad Institute in Cambridge, who has a paper on the group's list. But he is “concerned about a single group using scientists without deep expertise to reproduce decades of complicated, nuanced experiments.”

 

  Golub and others worry that if the cancer reproducibility project announces that many of the 50 studies failed its test, individual reputations will be damaged and public support for biomedical research undermined. “I really hope that these people are aware of how much responsibility they have,” says cancer biologist Lars Zender of the University of Tübingen in Germany.

 

  Timothy Errington, the reproducibility effort's manager at the nonprofit Center for Open Science in Charlottesville, Virginia, knows the scrutiny has unsettled the community. But, he says, the project is working hard to make sure that the labs have all the details they need to match the original studies. The effort will ultimately benefit the field, he says, by gauging the extent of the reproducibility problem in cancer biology. “Some see this as a threat, a way to disprove something. That's not what this is about.”

 

  CONCERNS THAT MUCH PRECLINICAL research can't be reproduced are not new, but the spotlight turned to cancer biology 3 years ago, when a commentary in Nature reported that scientists from the biotech company Amgen could reproduce only six of 53 high-profile cancer papers. (Another firm, Bayer, had reported a 79% failure rate for a set of mostly cancer studies in 2011.) The Amgen piece argued that irreproducible data contributed to high drug development costs and failed clinical trials. Indeed, a year earlier Amgen had dropped an entire research effort to find drugs targeting a cancer protein called STK33 after it could not confirm key results in a Cell paper.

 

  To the frustration of many, the commentary's co-authors Glenn Begley, who had left Amgen to become a consultant, and Lee Ellis of the University of Texas MD Anderson Cancer Center in Houston said confidentiality agreements with some labs barred them from sharing data from their replication efforts or even the titles of the papers. However, Begley, now at TetraLogic Pharmaceuticals in Malvern, Pennsylvania, wrote a follow-up commentary in Nature describing the six main problems he found, including a lack of proper controls, faulty statistics, and failure to validate reagents.

 

  At about the same time, cancer biologist Elizabeth Iorns launched the Reproducibility Initiative, which offered to replicate life sciences experiments for a fee through a network of 1000 contract labs she had established, called Science Exchange (Science, 31 August 2012, p. 1031). Iorns was inspired by the fact that drug companies, hoping to avoid wasting money pursuing shaky science, often used her network for replications. But Iorns had to seek funding to examine academic research, starting with cancer biology. That led her to the Laura and John Arnold Foundation, which introduced her to the Center for Open Science, founded by University of Virginia (UVA) psychologist Brian Nosek to promote transparency in science.

 

  Their collaboration was a new direction for Nosek's center, which had started out with a project to replicate psychology papers by recruiting volunteers from academia (Science, 30 March 2012, p. 1558). But for the cancer research replications, which involved messy “wet” biology, organizers decided to pay labs belonging to the Science Exchange—contract labs or fee-based support labs at universities known as core facilities.

 

  Some authors of the top 50 papers suggest that it's a conflict of interest for Iorns's own company to be getting the business. Iorns responds that her firm is not profiting, because it is donating its roughly 5% fee to the project. She says that organizing replication efforts through Science Exchange is faster and cheaper than through academic collaborations, and the results are less likely to be biased, because the scientists doing the work needn't worry about offending their peers with a negative result.

 

  Errington was hired to run the cancer replication project just after completing a Ph.D. in microbiology at UVA. Iorns and a colleague had compiled a list of the 50 most widely cited cancer biology studies from 2010 to 2012 (see table, p. 1412). The topics reflect the field's hottest areas, from new protein drug targets in tumors to the role of gut microbes in cancer. With $1.3 million from the Arnold foundation—which works out to $26,000 per paper, sufficient to replicate key experiments from each paper, Iorns says—and donations of reagents from companies, they sent off their first emails to corresponding authors and posted their progress online.

 

  Early on, Begley, who had raised some of the initial objections about irreproducible papers, became disenchanted. He says some of the papers chosen have such serious flaws, such as a lack of appropriate controls, that attempting to replicate them is “a complete waste of time.” He stepped down from the project's advisory board last year.

 

  Amassing all the information needed to replicate an experiment and even figure out how many animals to use proved “more complex and time-consuming than we ever imagined,” Iorns says. Principal investigators had to dig up notebooks and raw data files and track down long-gone postdocs and graduate students, and the project became mired in working out material transfer agreements with universities to share plasmids, cell lines, and mice.

 

  To add rigor to the replications, the group decided to publish a peer-reviewed protocol for each experiment before the work began, through a partnership with the open-access journal eLife. This has enabled the original authors and outside scientists to provide critical input, Errington says. Charles Sawyers, a researcher at the Memorial Sloan Kettering Cancer Center in New York City and an eLife senior editor, says the journal's editors felt that participating would “ensure that the reproducibility experiments are well designed and that the results are as interpretable as possible.” So far, the project has published 11 protocols. It hopes to release the first experimental results in eLife this fall and all 50 by the end of 2017.

 

  ALTHOUGH ERRINGTON SAYS many labs have been “excited” and happy to participate, that is not what Science learned in interviews with about one-fourth of the principal investigators on the 50 papers. Many say the project has been a significant intrusion on their lab's time—typically 20, 30, or more emails over many months and the equivalent of up to 2 weeks of full-time work by a graduate student to fill in protocol details and get information from collaborators. Errington concedes that a few groups have balked and stopped communicating, at least temporarily.

 

  For many scientists, the biggest concern is the nature of the labs that will conduct the replications. It's unrealistic to think contract labs or university core facilities can get the same results as a highly specialized team of academic researchers, they say. Often a graduate student has spent years perfecting a technique using novel protocols, Young says. “We brought together some of the most talented young scientists in the area of gene control and oncology to do these genomics studies. If I thought it was as simple as sending a protocol to a contract laboratory, I would certainly be conducting my research that way,” he says.

 

 

   Jeff Settleman, who left academia for industry 5 years ago and is now at Calico Life Sciences in South San Francisco, California, agrees. “You can't give me and Julia Child the same recipe and expect an equally good meal,” he says. Settleman has two papers being replicated.

 

  Academic labs approach replication differently. Levi Garraway of the Harvard University–affiliated Dana-Farber Cancer Institute in Boston, who also has two papers on the project's list, says that if a study doesn't initially hold up in another lab, they might send someone to the original lab to work side by side with the authors. But the cancer reproducibility project has no plans to visit the original lab, and any troubleshooting will be limited to making sure the same protocol is followed, Errington says. Erkki Ruoslahti of the Sanford-Burnham Medical Research Institute in San Diego, California, has a related worry: The lab replicating one of his mouse experiments will run that experiment just once; his lab ran it two or three times.

 

  The scientists behind the cancer reproducibility project dismiss these criticisms. Iorns says the contract labs and core facilities “are highly trained” and often “have much more expertise” than the original investigators in the technique at hand. If a recipe has enough detail, two different cooks should be able to produce the exact same meal, she says.

 

  She adds that the project will generate a vast data set that will allow those interested in reproducibility to examine “all kinds of variables” that determine whether an experiment can be repeated. And she argues that the time and effort it requires of the targeted researchers shows that their papers are short on key information. Researchers should be reporting every detail of an experiment when they publish, down to catalog and lot numbers for reagents and underlying data sets—if not in the paper, through links to other sites, she says: “The biggest lesson so far is that we should change the way that we publish our results.”

 

  But many cancer biologists say the solution is not another Amgen-like paper labeling many cancer studies as irreproducible—this time with the titles of the papers and their lead investigators. Instead, journals and reviewers should require more rigorously designed experiments and demand that key conclusions be adequately supported, Settleman says. Many journals are already beefing up review criteria, and the National Institutes of Health is taking steps to bolster reproducibility, for example, by asking study sections to scrutinize a proposal's experimental design. (On page 1422, Nosek and others, including Science's editor-in-chief, suggest journal standards to increase reproducibility.)

 

  Iorns agrees that such reforms are needed, but so is scrutiny of these high-profile papers, which are shaping the search for new cancer treatments. Instead of worrying about damaged reputations and threats to federal funding, the research community “should be worried about the consequences right now,” she says—that pharmaceutical companies can't reproduce key cancer papers. “All we're saying is, there may be issues with being able to repeat this experiment in another lab. Hiding that is really the biggest mistake.”
