Following an approach that's already widespread in such fields as astronomy and aviation, cancer scientists recently showcased results from a pair of open-source computational research challenges that drew input from investigators worldwide. The results were presented at the seventh annual DREAM (Dialogue for Reverse Engineering Assessments and Methods) conference in San Francisco, CA, in November.

Open-source, or crowdsourced, challenges aim to solve specific research problems by tapping the collective wisdom and resources of the scientific community.

The first challenge, sponsored by the National Cancer Institute (NCI), tasked researchers with developing a computational model for ranking the response of breast cancer and lymphoma cell lines to drug treatment. A total of 51 research teams participated, with the winners hailing from Aalto University in Helsinki, Finland, and the University of Texas Southwestern Medical Center in Dallas.

In a second challenge, still ongoing and sponsored by Sage Bionetworks, in Seattle, WA, 354 research teams are developing models for predicting breast cancer survival based on clinical and genomic data.

Dan Gallahan, PhD, deputy director of the NCI's Division of Cancer Biology, says crowdsourcing augments traditional research, which tends to be more open ended and constrained by publication priorities. “What we get from these challenges are solutions to specific scientific problems,” he says. “Ideally, as scientists refine these models, we'll be able to use algorithms for prescribing specific drugs or drug combinations based on a patient's molecular profile.” NCI incentivized scientists with a guarantee that the winning model would be published in Nature Biotechnology.

The winners of the Sage challenge have been promised a publication in Science Translational Medicine about their computational model. Sage Bionetworks has been ranking models with an accuracy score that appears on a public leaderboard, allowing teams to see how their efforts stack up. Additionally, because the scores link back to a model's publicly available underlying code, scientists can combine analytical approaches and build on each other's work, explains Thea Norman, PhD, Sage's director of strategic development.

“We had one person with a strong clinical background borrow code from someone with a background in machine learning, and the model from that collaboration scored highest on the leaderboard,” Norman says.

For more news on cancer research, visit Cancer Discovery online.