Effort is being expended to investigate efficiency measures (i.e., doing trials right), such as the achievement of accrual and endpoint goals for clinical trials. It is time to assess the impact of such trials on meeting the critical needs of cancer patients by establishing effectiveness measures (i.e., doing the right trials). Clin Cancer Res; 18(1); 3–5. ©2011 AACR.

Commentary on Schroen et al., p. 256

In this issue of Clinical Cancer Research, Schroen and colleagues (1) point out that approximately one third of the National Cancer Institute (NCI) cooperative group phase III trials in their data set closed due to inadequate accrual, and about one fourth of the trial results were never published.

The discussion about using achievement of trial accrual goals as a performance metric has been a vibrant one. Different researchers using different data sets, different trial types, and different definitions of “success” naturally arrive at different magnitudes of the problem, with estimates of oncology trials closing due to insufficient accrual ranging from 22% (2) to 38% (3). Although the metric of publication of results has not been subjected to the same debate, estimates of nonpublication range from 9.7% (4) to 41% (5), again based on different data sets. There are various interpretations of these findings, but it is generally agreed that the performance and productivity of cancer clinical trials must improve.

If such metrics are to be useful in evaluating the performance of oncology research, then clearly it is time for a concerted discussion about which standards should be applied and which elements of the problem they should address. One approach would be to follow the path taken by the Operational Efficiency Working Group (OEWG) of the NCI with respect to the time required to open oncology trials (6). This effort brought together more than 60 individuals from the NCI, cooperative groups, cancer centers, and the NCI Cancer Therapy Evaluation Program, including statisticians and patient advocates, to focus on compressing the timeline to activation of clinical trials in cancer. The OEWG created standards and performance metrics that are currently being used to evaluate cooperative-group trials. Additionally, it established a specific consequence for trials that do not meet the development timeline performance metric. Imagine what the impact would be if such standards could be generated for the issues of trial accrual performance or publication rates.

Let us consider one example: What should be the standard definition of accrual success? Schroen and colleagues (1) define accrual sufficiency based on evidence of addressing the primary endpoint through resulting publications. Korn and colleagues (2) use a threshold of ≥90% of the target accrual. Cheng and colleagues (3) apply yet another metric: 100% of the minimum expected target accrual defined at the time of study inception, assuming that statisticians use this expected sample size to support the power of the study. Clearly there is a need for better performance, but we must agree on a uniform definition before we can establish a benchmark to assess the efficiency of oncology clinical trials.
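As a purely illustrative sketch (written in Python, with hypothetical trial numbers that do not come from the cited studies), the three published definitions can classify the very same trial differently:

```python
# Hedged illustration: three published notions of "accrual success," applied to
# one hypothetical trial. The numbers and function names are ours, not the authors'.

def korn_success(actual: int, target: int) -> bool:
    """Korn et al. (2): success if accrual reaches >=90% of the target."""
    return actual >= 0.9 * target

def cheng_success(actual: int, minimum_expected: int) -> bool:
    """Cheng et al. (3): success if accrual reaches 100% of the minimum
    expected target defined at study inception."""
    return actual >= minimum_expected

def schroen_success(primary_endpoint_published: bool) -> bool:
    """Schroen et al. (1): sufficiency judged by evidence that the primary
    endpoint was addressed in resulting publications."""
    return primary_endpoint_published

# Hypothetical trial: target 500 patients, minimum expected 480, actual 460,
# and the primary endpoint was eventually published.
print(korn_success(460, 500))    # True  (460 >= 450)
print(cheng_success(460, 480))   # False (460 < 480)
print(schroen_success(True))     # True
```

Under one definition this hypothetical trial counts as a success and under another as a failure, which is precisely why a uniform benchmark is needed.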

On a related note, with the flattening of the NIH budget since 2004, efficiency has become increasingly important to the entire cancer clinical trial system. As highlighted in a report by the Institute of Medicine (7), we have a wealth of opportunities to improve the clinical trial process with respect to such issues as streamlining trial opening, using innovative trial designs, and improving the completion of trials. These recommendations focus on what management researchers consider to be “doing things right,” i.e., the efficient use of resources.

However, there is one recommendation in that report that has been understudied, namely, prioritization and selection of clinical trials. Again, using a management term, this is considered effectiveness or “doing the right things.” As the oncology community braces for major changes resulting from the consolidation of cooperative-group programs, the tsunami of potentially available data from biorepositories and biomarker/genetic libraries, the use of adaptive trial designs, and the implications of the upcoming national health care reform, it will be increasingly critical to measure the efficiency and effectiveness of clinical research to sustain the current progress of cancer research.

Like the complex discussions about efficiency metrics, a discussion about effectiveness metrics will be lively. How should the portfolio of trials be balanced relative to cancer incidence, mortality rates, cancer severity, and relative quality of life? How should trials for rare cancers be apportioned in an era of personalized medicine, when every cancer might be considered “rare” because of biomarker identification? Should early-phase trials be prioritized within cancer types with few treatment options, and larger, late-phase trials be carried out for cancers with larger patient populations and multiple treatment options? And, linking efficiency metrics with effectiveness metrics, should we match geographic cancer demographics to target specific types of trials in order to improve the likelihood of accrual success? Fundamentally, how do we know whether we are doing the appropriate clinical research to accelerate the pace of change in the right direction?

Although discussions about these issues will be difficult and complicated, it is time to assess and align the entire portfolio of clinical trials funded by governmental sources. Defining areas for both incremental process improvements and radical advances in cancer care should encompass the collective factors of patient needs, scientific discoveries, the strategic direction of clinical research, and the desired portfolio of clinical trials.

When developing a portfolio of clinical trials, it is important to note that by its very nature, a portfolio is a mix of different trial types, characteristics, and potential impacts. In one standard product-development matrix approach (8), a portfolio can be divided into breakthrough projects, platform projects, enhancement/derivative projects, and sustaining projects (Fig. 1). In the realm of oncology clinical trials, for example, a breakthrough trial (project) would focus on a totally novel drug, involving a novel pathway, that could dramatically change how people with a particular type of cancer are treated. A platform trial (project) could be one in which a drug that has been successfully used for one type of cancer is evaluated for use in a different type of cancer. An enhancement/derivative trial (project) could be a phase II trial that includes additional targeted biomarker screening to tailor cancer therapies and investigate outcomes. Finally, a sustaining trial would be one that focuses on fine-tuning dosages or treatment cycles.
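As a minimal sketch of this idea (our own illustration in Python; the four categories follow the matrix of ref. 8, but the example trials and field names are hypothetical), a portfolio could be tagged and summarized as follows:

```python
# Illustrative only: tagging oncology trials with the four portfolio-matrix
# categories described above and summarizing the resulting mix.

from dataclasses import dataclass
from collections import Counter
from enum import Enum

class Category(Enum):
    BREAKTHROUGH = "breakthrough"           # novel drug, novel pathway
    PLATFORM = "platform"                   # proven drug, new cancer type
    ENHANCEMENT = "enhancement/derivative"  # e.g., added biomarker screening
    SUSTAINING = "sustaining"               # fine-tuning dose or schedule

@dataclass
class Trial:
    name: str
    category: Category

def portfolio_mix(trials: list[Trial]) -> Counter:
    """Summarize how a portfolio is distributed across the four categories."""
    return Counter(t.category for t in trials)

portfolio = [
    Trial("first-in-class agent on a novel pathway", Category.BREAKTHROUGH),
    Trial("approved agent tested in a second tumor type", Category.PLATFORM),
    Trial("phase II with added biomarker screening", Category.ENHANCEMENT),
    Trial("dose and treatment-cycle optimization", Category.SUSTAINING),
]
print(portfolio_mix(portfolio))
```

A summary of this kind would make the balance (or imbalance) of a trial portfolio explicit, which is the prerequisite for the effectiveness metrics discussed below.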

Figure 1. Clinical trial portfolio matrix.

Although the exact mix of trials that should be undertaken is intricate and fluid, it is important to attempt such an appraisal. We could then use the portfolio metrics of effectiveness to evaluate the entire research portfolio, which would allow us to understand the relative progress and productivity of the clinical research in question. These metrics will be useful when we consider how the national investment in clinical research aligns with the cancer burden across different cancer types, geographic disparities, and trends of longevity and quality of life.

It is important to achieve both efficiency and effectiveness simultaneously. Efficient completion of clinical trials that result only in minor, nonsustainable incremental advancements might do little to further the overall progress in the search for improved cancer therapies. Conversely, clinical trials that are potentially paradigm-shifting can be fruitless and frustrating if they are continually obstructed by operational barriers (9). What we need are agreed-upon efficiency and effectiveness measures that will allow us to focus our limited resources on achieving the greatest return for the collective efforts of the oncology clinical research community.

No potential conflicts of interest were disclosed.

1. Schroen AT, Petroni GR, Wang H, Thielen MJ, Gray R, Benedetti J, et al. Achieving sufficient accrual to address the primary endpoint in phase III clinical trials from U.S. cooperative oncology groups. Clin Cancer Res 2012;18:256–62.

2. Korn EL, Freidlin B, Mooney M, Abrams JS. Accrual experience of National Cancer Institute cooperative group phase III trials activated from 2000 to 2007. J Clin Oncol 2010;28:5197–201.

3. Cheng SK, Dietrich MS, Dilts DM. A sense of urgency: evaluating the link between clinical trial development time and the accrual performance of cancer therapy evaluation program (NCI-CTEP) sponsored studies. Clin Cancer Res 2010;16:5557–63.

4. Tam VC, Tannock IF, Massey C, Rauw J, Krzyzanowska MK. Compendium of unpublished phase III trials in oncology: characteristics and impact on clinical practice. J Clin Oncol 2011;29:3133–9.

5. Ramsey S, Scoggins J. Commentary: practicing on the tip of an information iceberg? Evidence of underpublication of registered clinical trials in oncology. Oncologist 2008;13:925–9.

6. National Cancer Institute. Report of the Operational Efficiency Working Group of the Clinical Trials and Translational Research Advisory Committee, Compressing the Timeline for Cancer Clinical Trial Activation. March 2010. Available from: http://ccct.cancer.gov/files/OEWG-Report.pdf

7. Institute of Medicine. A national cancer clinical trials system for the 21st century: reinvigorating the NCI Cooperative Group Program. Washington, DC: Institute of Medicine of the National Academies; 2010.

8. Wheelwright SC, Clark KB. Revolutionizing product development: quantum leaps in speed, efficiency, and quality. New York: Free Press; 1992.

9. Dilts DM, Sandler AB. Invisible barriers to clinical trials: the impact of structural, infrastructural, and procedural barriers to opening oncology clinical trials. J Clin Oncol 2006;24:4545–52.
