Since 2018, the FDA has required that U.S. clinical trial results be reported to ClinicalTrials.gov within a year of trial completion, but this mandate is often ignored. A recent study found that less than half of U.S. trials submitted results to the site by the deadline. Industry-led trials were the most likely to be reported on time.

The FDA requires that registered U.S. clinical trials report results to ClinicalTrials.gov within a year of completion, but recent findings suggest that this often does not happen. In a study in The Lancet, researchers found that less than half of studies met the deadline.

Efforts to improve reporting began in the 1980s, culminating in the launch of ClinicalTrials.gov in 2000. In 2007, legislation mandated that results be posted to the site within a year, with the first trials due to report results in January 2018 after a final rule providing additional details about the law went into effect in 2017.

However, the FDA has not enforced the rule, prompting lead author Nicholas DeVito, MPH, of the University of Oxford in the UK, and his colleagues to launch Trials Tracker. The idea, he explains, is “to show which trials are in compliance with this law and which aren't over time.”

Using Trials Tracker, the researchers analyzed 4,209 trials due to report results beginning in January 2018. Overall, 40.9% of trials reported results by the 1-year deadline. Additionally, 50.3% of industry sponsors reported results on time, compared with 33.8% of academic sponsors and 31.4% of government sponsors. The researchers did not specifically investigate oncology trials, but based on prior studies, “there is little reason to expect they report at substantially higher or lower rates than all trials,” DeVito says.

The results highlight the need for better compliance; indeed, the FDA has the authority to collect fines of more than $10,000 per day. “Clinical trials inform our evidence about clinical decision making, guidelines, health policy—it's really crucial that this evidence is out there,” DeVito says. Although some trials are published or presented at conferences, others—particularly negative trials—are not. Further, although researchers commonly worry otherwise, posting data on the site does not preclude subsequent publication in most journals.

Evan Mayo-Wilson, PhD, MPA, of the Indiana University School of Public Health in Bloomington, who was not involved in the study, agrees. He notes that journal publications may focus only on certain aspects of trials, or overstate benefits and/or understate harms. “It's important to have everything in ClinicalTrials.gov even if there is a publication—what we get in the journal article is really the tip of the iceberg,” he says.

Leonard Saltz, MD, of Memorial Sloan Kettering Cancer Center in New York, NY, who relies on journal articles to guide his practice, admits that he was unfamiliar with this particular reporting requirement—and expects others are, too. However, the findings “certainly highlight a known and serious problem that too many clinical trials never get reported,” he says.

Janice Mehnert, MD, of the Rutgers Cancer Institute of New Jersey in New Brunswick, says that her institution has dedicated staff to help submit data, something that could be implemented at other institutions. Yet, “while I completely agree with the principle” of submitting trial data to ClinicalTrials.gov, Mehnert adds, she thinks clinicians may be more likely to use the site to search for the existence of trials rather than trial results, instead turning to PubMed or Google for clinical data.

Mayo-Wilson suspects the site is mainly used by people writing systematic reviews and guidelines who need comprehensive data. For DeVito, however, the site's importance goes beyond individual users. “You have to look at it from the perspective of the value that ClinicalTrials.gov adds—it's the largest registry in the world, it's very visible, and it's a single place for a lot of trial information,” he says. “But there is still a lot of work to be done.” –Catherine Caruso