Purpose: The Institute of Medicine report on cooperative groups and the National Cancer Institute (NCI) report from the Operational Efficiency Working Group both recommend changes to the processes for opening a clinical trial. This article provides evidence for the need for such changes by completing the first comprehensive review of all the time and steps required to open a phase III oncology clinical trial and discusses the effect of time to protocol activation on subject accrual.

Methods: The Dilts and Sandler method was used at four cancer centers, two cooperative groups, and the NCI Cancer Therapy Evaluation Program. Accrual data were also collected.

Results: Opening a phase III cooperative group therapeutic trial requires 769 steps, 36 approvals, and a median of approximately 2.5 years from formal concept review to study opening. Time to activation at one group ranged from 435 to 1,604 days, and time to open at one cancer center ranged from 21 to 836 days. At centers, group trials are significantly more likely to have zero accruals (38.8%) than nongroup trials (20.6%; P < 0.0001). Of the closed NCI Cancer Therapy Evaluation Program–approved phase III clinical trials from 2000 to 2007, 39.1% resulted in <21 accruals.

Conclusions: The length, variability, and low accrual results demonstrate the need for the NCI clinical trials system to be reengineered. Improvements will be of only limited effectiveness if done in isolation; there is a need to return to the collaborative spirit with all parties creating an efficient and effective system. Recommendations put forth by the Institute of Medicine and Operational Efficiency Working Group reports, if implemented, will aid this renewal. Clin Cancer Res; 16(22); 5381–9. ©2010 AACR.

Commentary on Cheng et al., p. 5557

The Institute of Medicine (IOM) report entitled “A National Cancer Clinical Trials System for the 21st Century” addresses four overarching goals (1). Two of those goals directly relate to clinical trial operational issues: improving the speed and efficiency of the design, activation, opening, and conduct of clinical trials (goal I) and improving the means of prioritization, selection, support, and completion of cancer clinical trials (goal III). Additionally, the report accepted the recommendations of the National Cancer Institute (NCI) Operational Efficiency Working Group (OEWG) report, which includes specific timelines and milestones for performance in the opening of oncology clinical trials. This article presents a comprehensive review of the evidence demonstrating the need for change in the existing NCI clinical trials system.

The focus of this research is primarily on phase III clinical trials. It can be argued that phase III clinical trials have the highest likelihood of being practice-changing as they can be used as the basis for the implementation of new treatment programs (2). Because of their complex nature and accrual requirements, almost all non–industry-sponsored phase III clinical trials in the United States are designed and activated through the NCI cooperative group mechanism before they are opened at a cancer center. For clarity, we use the term “activate” to refer to the release of an approved protocol by a cooperative group to the oncology community and “open” to signify when the protocol has received local site Institutional Review Board (IRB) approval and has completed all other tasks required for the trial to be made available for patient accrual at that site.

The design and activation of phase III clinical trials requires an extensive development process, with multiple levels of internal oversight within a cooperative group and external oversight by organizations such as the NCI Cancer Therapy Evaluation Program (CTEP), the Food and Drug Administration, and the pharmaceutical company that has provided the investigational agent for the study. After activation by the cooperative group, the trial must then be evaluated locally at a site, adding additional layers of oversight and review. Whereas the NCI cooperative group program has made major advances in the treatment of cancer, this program has been heavily scrutinized almost since its inception in 1955, with the first concern over its effectiveness voiced in 1959 (3).

While such scrutiny is not novel, what is generally missing from such discussions is a detailed, evidence-based evaluation of the steps required for an idea to transit through the system and of the operational or “invisible” barriers faced by all parties in the design of phase III clinical trials (4). Pieces of this puzzle have been presented separately (4–7), but this article presents the first complete picture of the steps and time required to develop and open a phase III clinical trial, examining all participants in the process. It also highlights the barriers pervasive across the entire system.

Additionally, a nonclinical cancer researcher might hypothesize that trials that have received such extensive review by multiple parties should be well accepted by the oncology research community and, therefore, have a high likelihood of achieving their accrual goals. We present data showing that the opposite is the case: At individual cancer centers, a cooperative group trial is more likely to have zero accruals than a non-cooperative group trial. Further, we present data showing that for NCI-CTEP–approved trials closed in an 8-year period (2000–2007), approximately 6% of all such trials resulted in zero accruals. Finally, we offer a discussion of the implications of this long development process as it relates to accrual performance.

To understand the oncology clinical trial development process fully, research was conducted at a total of eight institutions involved in the development or implementation of oncology clinical trials in the United States. The sites investigated were two NCI-supported cooperative groups [Cancer and Leukemia Group B (CALGB) and Eastern Cooperative Oncology Group (ECOG)], four NCI-designated comprehensive cancer centers (Fox Chase Cancer Center, The Ohio State University Comprehensive Cancer Center, University of North Carolina-Lineberger Comprehensive Cancer Center, and Vanderbilt-Ingram Cancer Center), the NCI CTEP, and the Central Institutional Review Board (CIRB) supported by the NCI. It is important to note that the cancer centers were not randomly selected; they were chosen because of their excellent scores on the clinical research management portion of their respective Cancer Center Support Grant applications.

For each organization, the Dilts and Sandler (4) case study and process map method was followed. An interdisciplinary team of experts from schools of medicine, engineering, and management collected data from multiple sources through (a) extensive staff interviews, (b) analysis of existing process documentation and records, (c) archival analysis of clinical trial initiation data, and (d) electronic mail and database records. Personal interviews involved questions targeting the research objectives of identifying processes and barriers to the opening of clinical trials. Archival data collection from written and electronic sources was chosen because (a) such documentation was broad and permitted multiple time frames, settings, and events, and (b) archival records are more precise and subject to less reporting and selectivity bias than an individual's recollection of steps. This process allows for triangulation among the collection of descriptive data (what was said was being done as uncovered in the interviews), normative data (what the policies and procedures documentation indicated should be done), and archival data (what the clinical trial record review showed was actually done). This three-part method resulted in capturing a complete understanding of the development process structure. The end result of this effort was validated process map and timing data. The method was replicated at multiple institutions to ensure data consistency across sites. Additional, independent data were collected from the CTEP for closed studies during the 8-year period from 2000 to 2007. This study was approved by the Vanderbilt University IRB (IRB #060602) and supported by the NCI and Vanderbilt University.

The research has three primary outcome metrics: (a) process mapping, (b) timing analysis, and (c) accrual performance.

Process mapping

This metric identified and mapped existing process steps required to activate or open an oncology clinical trial. The total development time is defined as beginning with the first formal submission of a concept or a letter of intent (LOI) to a cooperative group and ending with the opening of the study for patient accrual at a cancer center. These data were collected through more than 30 onsite sessions encompassing more than 100 personnel interviews, additional follow-up e-mail correspondence, and a series of clarification teleconferences (at least two each) with members of the cooperative groups, CTEP, and the comprehensive cancer centers. Informal clarification interview sessions were conducted at each site to ensure that the data and identified processes were accurate prior to finalization of results.

Steps and timing were cross-validated multiple times at different levels of institutional hierarchy. The interviews were conducted in both individual and group settings, with the input remaining anonymous. The interview process included both open- and closed-ended questions. On completion, the interviewees were asked to clarify specific acronyms, decision point names, position titles, and responsibilities. Objective data stored in databases or e-mails were cross-checked whenever possible with other independent records. Final validation of the process was conducted at each site with all involved participants to ensure that the process was captured correctly and that the interfaces among the participants were accurate.

Timing analysis

Once the process map was complete and verified for accuracy, the calendar days needed for each of the major steps in the process map were collected. For example, a timing analysis of a single trial at one site required the review of archival data compiled from more than 400 historical e-mail correspondences, 15 file reviews, and validation of timing data through queries to the clinical research management database. Evaluating both the process and the timing of each process allows for an understanding of the maximum yield, or output, that the process can achieve. Identifying this maximum, along with the observed process variation, allows the selection of more effective strategies for process and productivity improvements (8, 9).
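As an illustration of this kind of timing analysis, the sketch below (not the authors' actual tooling; the milestone names and dates are hypothetical) shows how calendar days per step can be derived from dated milestones and summarized as a median and range.

```python
# Minimal sketch of a timing analysis: given dated milestones reconstructed from
# e-mails and database queries, compute the calendar days spent in each major
# step and summarize the median and range per step. Milestones and dates are
# hypothetical.
from datetime import date
from statistics import median

# One record per trial: ordered (milestone, date) pairs.
trials = {
    "trial_A": [("LOI submitted", date(2003, 1, 6)),
                ("concept approved", date(2003, 5, 20)),
                ("protocol activated", date(2005, 2, 14))],
    "trial_B": [("LOI submitted", date(2002, 9, 2)),
                ("concept approved", date(2003, 2, 11)),
                ("protocol activated", date(2004, 11, 30))],
}

step_durations = {}  # step name -> list of calendar-day durations
for milestones in trials.values():
    for (start_name, start), (end_name, end) in zip(milestones, milestones[1:]):
        step = f"{start_name} -> {end_name}"
        step_durations.setdefault(step, []).append((end - start).days)

for step, days in step_durations.items():
    print(f"{step}: median {median(days)} days (range {min(days)}-{max(days)})")
```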

Accrual performance

A retrospective analysis of therapeutic trials completely closed to enrollment was conducted at each of the institutions. Data were acquired from the various databases implemented at each of the institutions. A random sample of clinical trials within the cohort was checked for consistency against physical records.

Results are divided into four segments: (a) number of steps, loops, and groups involved in activating and opening a phase III NCI-approved therapeutic clinical trial; (b) time to activate and open such studies; (c) accruals to all phases of cancer center and CTEP-approved therapeutic studies; and (d) the relationship between time to activate or open and likelihood of eventual accrual success.

Chutes and ladders

Based on a representative cooperative group and comprehensive cancer center, activating a phase III cooperative group trial requires a minimum of 458 processing steps within the cooperative group and 216 at the CTEP/CIRB, with an additional 95 needed at a comprehensive cancer center to open the trial after activation, for a total of at least 769 processing steps (Table 1). Processing steps are of two types: work steps (i.e., those that require activities to be performed) and decision points (i.e., steps that require different routing of the study). There are two important caveats for these data. First, there is some “double-counting” of steps; for example, when one organization's process step requires that a proposed trial be sent to a different organization, the receipt of this trial will be counted as a processing step in the receiving organization. This results in a small degree of overstatement of steps.

Table 1.

Process steps, potential loops, and number of stakeholders involved in activating and opening a phase III cooperative group trial

| | Cooperative groups* | CTEP and CIRB | Cancer centers* | Total |
| --- | --- | --- | --- | --- |
| Process steps | ≥458 | ≥216 | ≥95 | ≥769 |
|     Working steps | ≥399 | ≥179 | ≥73 | ≥651 |
|     Decision points | 59 | 37 | 22 | 118 |
| Potential loops | 26 | 15 | 8 | 49 |
| No. of stakeholders involved | 11 | 14 | 11 | 36 |

*A representative cooperative group and comprehensive cancer center were used in calculating the number of process steps, potential loops, and stakeholders.

Process steps denote the minimum number of steps required to complete the development of a clinical trial. The actual number of process steps for a given trial may be greater, depending on the outcomes of the decision points and the number of loops.

Second, it is possible in some parts of the process to bypass or repeat steps. Using the children's game “chutes and ladders” as a comparison, such a bypass represents a ladder. For example, if a site accepted the review of the CIRB, this would represent a ladder where the local IRB steps could be skipped. Another example would be a study that had expedited review, thus bypassing the full-board review steps. However, the 769 steps identified should be regarded as a true minimum, because virtually all trials are returned for reprocessing at some point during their design. There are a total of 49 points, termed “loops” or chutes, where a trial may be returned to an earlier part of the process flow. For example, if an IRB review requires a change in the science of a study, the proposed protocol must be returned to repeat some of the steps of scientific review.

Neither chutes nor ladders are inherently good or bad. From a positive perspective, a chute that requires a reevaluation of a protocol because of a major scientific or safety flaw is a good thing. However, if this rework is repeated multiple times, it is a symptom of a deeper underlying flaw with either the protocol or the protocol design system itself. The OEWG report determined that for cooperative group phase III trials activated between 2006 and 2008, nearly all (97%) required two or more revision loops and 34% required four or more revisions (p. 19).
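To make the effect of these loops concrete, the toy simulation below is illustrative only: the step names, loop targets, and rework probabilities are invented and do not come from the mapped processes. It shows how a handful of decision points that occasionally send a protocol back to an earlier step inflate the number of steps actually executed well beyond the nominal minimum.

```python
# Toy "chutes and ladders" process walk: a protocol passes through work steps
# and decision points, and each decision point can send it back for rework.
# All steps and probabilities are hypothetical.
import random

random.seed(0)

# (step name, is_decision, rework probability, loop-back target index)
PROCESS = [
    ("draft concept",          False, 0.0, None),
    ("committee review",       True,  0.4, 0),   # may return to drafting
    ("write full protocol",    False, 0.0, None),
    ("CTEP protocol review",   True,  0.5, 2),   # may return to protocol writing
    ("IRB review",             True,  0.3, 2),
    ("activate/open study",    False, 0.0, None),
]

def run_once():
    """Walk the process once; return the number of steps actually executed."""
    i, executed = 0, 0
    while i < len(PROCESS):
        _name, is_decision, p_rework, target = PROCESS[i]
        executed += 1
        if is_decision and random.random() < p_rework:
            i = target          # chute: loop back to an earlier step
        else:
            i += 1              # proceed to the next step
    return executed

runs = [run_once() for _ in range(10_000)]
print("minimum possible steps:", len(PROCESS))
print("mean steps actually executed:", sum(runs) / len(runs))
```

In this contrived setup the nominal path is only six steps, but the average walk executes noticeably more, which is the qualitative behavior the chutes-and-ladders analogy describes.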

Figure 1 shows the back-and-forth nature of the process for one representative phase III trial. This trial was selected because it had the most complete set of timing data. The horizontal rows on the figure are swim lanes and capture those steps undertaken by the study chair, cooperative group, CTEP, and CIRB, respectively. Beginning on the top left, it is straightforward to follow the steps of the design of the concept and protocol as it flows through the activities of the various participants.

Fig. 1. Process flow and calendar days among stakeholders for one phase III cooperative group trial.

A listing of the various participants is shown in Table 2. A total of 36 separate groups or individuals may be required to approve a study before it is open to patient accrual. More importantly, there is significant overlap across the scope of reviews for scientific, data management, safety/ethics, regulatory, contracts/grants, and study start-up issues, each of which has the potential to experience a chute back to an earlier step to obtain complete consensus across all reviewing entities.

Table 2.

Reviews required to activate and open a phase III cooperative group trial

| | Cooperative group | CTEP | Cancer center | CCOP/affiliates | Others |
| --- | --- | --- | --- | --- | --- |
| Scientific review | Disease site committee | Steering/CRM | Protocol review | Feasibility review | Industry sponsor |
| | Executive committee | PRC | | Site surveys | |
| | Protocol reviews (2-4) | CTEP final | | | |
| Data management | CRF reviews (2-4) | CDE review | | | |
| | Database review | | | | |
| Safety/ethics | Informed consent | | Local IRB | Informed consent | CIRB |
| Regulatory | Regulatory review | PMB review | | | FDA |
| | | RAB review | | | |
| Contracts/grants | Budget | | | | Industry sponsor |
| | Language | | | | |
| Study start-up | Start-up review | | Start-up review | Start-up review | |

Abbreviations: CCOP, Community Clinical Oncology Program; CDE, Common Data Element; CIRB, Central Institutional Review Board; CRF, Case Report Form; CRM, Concept Review Meeting; CTEP, Cancer Therapy Evaluation Program; FDA, Food and Drug Administration; PMB, Pharmaceutical Management Branch; PRC, Protocol Review Committee; RAB, Regulatory Affairs Branch.

Development time and variation in the development process

The time from submission of the initial concept/LOI to study activation and opening is difficult to measure because, although the opening date of the study is easily known, capturing the exact time when a concept is formalized can be problematic. The earliest formal measurable date was selected, usually the initial concept/LOI proposal sent by a principal investigator to a cooperative group study committee. A total of 41 trials were evaluated (13 for CALGB and 28 for ECOG). Two comprehensive cancer centers supplied complete historic data with respect to time (n = 58 and n = 178) and two supplied small samples of studies (n = 3 and n = 4; Table 3). The median time for CTEP and cooperative group processing is approximately 800 calendar days (CALGB, 784 days; ECOG, 808 days), with ranges of 537 to 1,130 and 435 to 1,604 days, respectively. For cancer centers, the median calendar time is approximately 120 days, with a range of 21 to 836 days; cancer center B is considered an outlier because of its small sample size. These times are cumulative because a cancer center cannot begin processing a phase III trial until it has been activated by a cooperative group. The median calendar time from initial formal concept/LOI to opening for the first patient is therefore approximately 920 days, or nearly 2.5 years. However, the range is 456 to 2,440 calendar days, or 1.25 to 6.7 years. To put these times in perspective, this median is longer than it took to design, conduct, and publish the first CALGB trial (10).

Table 3.

Activation and opening times at cooperative groups and cancer centers

| | n | Calendar days, median (min–max) |
| --- | --- | --- |
| Cooperative group* | | |
|     CALGB | 13 | 784 (537–1,130) |
|     ECOG | 28 | 808 (435–1,604) |
| Cancer center† | | |
|     Cancer center A | 58 | 120 (27–657) |
|     Cancer center B‡ | | 252 (139–315) |
|     Cancer center C‡ | | 122 (81–179) |
|     Cancer center D | 178 | 116 (21–836) |

*Cooperative group development time is the number of calendar days from the time cooperative group receives a LOI/concept from the study chair until the study is centrally activated.

†Cancer center development time is the number of calendar days from the day a local principal investigator submits the trial for consideration until the study is open to patient enrollment at the local site.

‡Cancer centers B and C only provided samples of their portfolio of studies.
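The roughly 2.5-year median quoted in the text comes from adding group and center times, because the center clock cannot start until group activation. A back-of-the-envelope sketch of that arithmetic, using the ECOG and cancer center D figures from Table 3 (combining medians and extremes this way is an approximation, not a convolution of the two distributions), is shown below.

```python
# Approximate cumulative concept-to-open time from the Table 3 figures.
# Group and center times are simply added because a cancer center cannot begin
# processing a trial until the cooperative group activates it.
group_median, group_min, group_max = 808, 435, 1604    # ECOG, calendar days
center_median, center_min, center_max = 116, 21, 836   # cancer center D

total_median = group_median + center_median            # ~920 days
total_min = group_min + center_min                      # 456 days
total_max = group_max + center_max                      # 2,440 days

print(f"approximate median: {total_median} days (~{total_median / 365.25:.1f} years)")
print(f"approximate range: {total_min}-{total_max} days "
      f"(~{total_min / 365.25:.2f} to {total_max / 365.25:.1f} years)")
```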

One major issue related to the development of clinical trials is that they require the coordination and cooperation of multiple participants. For example, designing and activating a phase III cooperative group therapeutic nonpediatric trial requires evaluation and coordination between a cooperative group and the CTEP. Multiple reviews by both parties can create a natural tension in which one organization is waiting for the other to respond to a change. In its worst form, such tension can lead to blaming most of the start-up delay on the other organization's inefficiency. To investigate this issue, 28 studies activated between 2000 and 2005 were evaluated in depth with respect to time (Table 4). Interestingly, as can be seen from the last column in the table, no specific review or party consistently required a longer time. Thus, the issue is not how to repair a single element of the system; rather, it is how to reengineer the entire process to reduce rework while simultaneously increasing the safety, quality, and performance of the trial.

Table 4.

Calendar days of review by different stakeholders requiring response for phase III cooperative group trials activated from 2000 to 2005

| | Reviewer | n* | CTEP/CIRB review time, median (range) | Cooperative group response time, median (range) | Time difference (coop minus CTEP) |
| --- | --- | --- | --- | --- | --- |
| Concept review | | | | | |
|     Concept review meeting | CTEP | 14 | 60 (15–104) | 72 (1–368) | 12 |
|     Concept evaluation panel† | CTEP | | 48 (19–66) | 36 (22–84) | −12 |
|     Concept rereview | CTEP | | 6 (1–6) | 17 (1–56) | 11 |
| Protocol review | | | | | |
|     Protocol review committee | CTEP | 33 | 32 (5–69) | 32 (1–188) | 0 |
|     Protocol rereview | CTEP | 22 | 8 (1–85) | 9 (1–266) | 1 |
| CIRB review | | | | | |
|     CIRB review | CIRB | 43 | 29 (5–55) | 21 (2–83) | −8 |
|     Protocol rereview after CIRB approval‡ | CTEP | 19 | 12 (1–32) | 17 (1–140) | 5 |
| Amendments prior to activation | | | | | |
|     Protocol rereview | CTEP | | 9 (1–17) | 5 (5–6) | −4 |
|     CIRB review | CIRB | 10 | 12 (2–34) | 30 (3–67) | 18 |
|     Protocol rereview after CIRB approval‡ | CTEP | | 1 (1–1) | 22 (22–22) | 21 |

*Sample size consists of the number of reviews that were required for the 28 clinical trials followed in the sample.

†The concept evaluation panel was designed to provide a mechanism for quick feedback on new concepts. This process was rolled out during the sampling period; concepts for trials were reviewed either at the concept review meeting or by the concept evaluation panel.

‡Trials that required extensive modifications after CIRB review required an additional consensus review by the CTEP.
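The last column of Table 4 is simply the difference between the two median durations for each review. A small sketch of that summary, using hypothetical per-review durations rather than the study's actual records, is shown below.

```python
# Hedged sketch: summarize each review's median (range) for CTEP/CIRB review
# time and cooperative group response time, and report the difference in
# medians, as in the last column of Table 4. The duration pairs are invented.
from statistics import median

# review name -> list of (CTEP/CIRB review days, cooperative group response days)
reviews = {
    "Concept review meeting": [(60, 72), (45, 30), (90, 110), (15, 368)],
    "CIRB review":            [(29, 21), (5, 83), (55, 2)],
}

for name, pairs in reviews.items():
    ctep = sorted(c for c, _ in pairs)
    coop = sorted(g for _, g in pairs)
    diff = median(coop) - median(ctep)
    print(f"{name}: CTEP/CIRB {median(ctep)} ({ctep[0]}-{ctep[-1]}), "
          f"group {median(coop)} ({coop[0]}-{coop[-1]}), difference {diff:+}")
```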

Low accrual performance at local participating institutions

Completing the arduous development process does not ensure accrual success. Data from the four comprehensive cancer centers highlight the poor accrual performance of cooperative group trials at academic medical centers (Table 5; ref. 11). Accruals at the comprehensive cancer centers occurred over a multiyear period, but the time periods evaluated did not completely overlap. Overall, 898 trials were in the sample: 394 cooperative group trials and 504 non-cooperative group trials. Of particular interest are those trials that were opened and closed to accrual with zero enrollments. Overall, for cooperative group trials opened at cancer centers, 38.8% of trials resulted in zero accruals, whereas 20.6% of non-cooperative group trials had zero accruals; thus, a cooperative group trial was nearly twice as likely to have zero accrual at a cancer center as a non-cooperative group study (P ≤ 0.001). Expanding the range, 77.4% of cooperative group trials opened at these cancer centers had fewer than 5 patients accrued.

Table 5.

Cancer center accrual to cooperative group trials and non-cooperative group trials

| Accruals by sponsor | Cancer center A | Cancer center B | Cancer center C | Cancer center D | Overall |
| --- | --- | --- | --- | --- | --- |
| Time period | 1/2001–7/2005 | 1/2000–9/2006 | 1/2000–12/2005 | 1/2000–4/2007 | |
| Cooperative group trials (%) | n = 37 | n = 166 | n = 61 | n = 130 | n = 394 |
| 0 | 27.0 | 38.6 | 29.5 | 46.9 | 38.8 |
| 1–4 | 37.8 | 38.6 | 39.3 | 38.5 | 38.6 |
| 5–10 | 13.5 | 13.9 | 19.7 | 7.7 | 12.7 |
| 11–15 | 8.1 | 4.2 | 6.6 | 0.0 | 3.6 |
| 16–20 | 2.7 | 0.0 | 1.6 | 4.6 | 2.0 |
| >20 | 10.8 | 4.8 | 3.3 | 2.3 | 4.3 |
| Non-cooperative group trials (%) | n = 111 | n = 157 | n = 43 | n = 193 | n = 504 |
| 0 | 18.9 | 14.6 | 23.3 | 25.9 | 20.6 |
| 1–4 | 30.6 | 22.9 | 9.3 | 26.4 | 24.8 |
| 5–10 | 23.4 | 18.5 | 32.6 | 24.9 | 23.2 |
| 11–15 | 12.6 | 13.4 | 14.0 | 7.3 | 10.9 |
| 16–20 | 2.7 | 7.6 | 7.0 | 5.7 | 5.8 |
| >20 | 11.7 | 22.9 | 14.0 | 9.8 | 14.7 |
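For readers who want to reproduce the zero-accrual comparison, the sketch below (assuming SciPy is available; counts are reconstructed from the percentages in Table 5 and are therefore approximate) applies a standard chi-square test to the 2 x 2 table of zero versus nonzero accrual by trial type.

```python
# Approximate reanalysis of the zero-accrual comparison in Table 5.
# Counts are back-calculated from the reported percentages (38.8% of 394
# cooperative group trials, 20.6% of 504 non-cooperative group trials),
# so they may differ slightly from the underlying data.
from scipy.stats import chi2_contingency

zero_coop, n_coop = round(0.388 * 394), 394      # ~153 trials with zero accrual
zero_other, n_other = round(0.206 * 504), 504    # ~104 trials with zero accrual

table = [[zero_coop, n_coop - zero_coop],
         [zero_other, n_other - zero_other]]
chi2, p, dof, expected = chi2_contingency(table)
print(f"chi-square = {chi2:.1f}, P = {p:.1e}")   # P falls far below 0.001
```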

One argument that could be made to justify these results is that, because cooperative group trials are opened at multiple sites, low accrual at any individual site is to be expected, given the large number of sites contributing to a single accrual goal. Because it is difficult to understand why any cancer center would open so many adult cancer trials while expecting zero accruals, we investigated this question from a more global perspective, that is, by examining the aggregate, national accruals to such trials. Investigating accruals for all CTEP-approved clinical trials completely closed to accrual from 2000 to 2007 showed that of the 911 trials of all phases in the sample period, 6.4% resulted in zero accruals (Table 6). With respect to phase III trials, 39.1% closed to accrual with 20 or fewer enrollments.

Table 6.

Accrual to NCI-CTEP–approved clinical trials closed in 2000–2007

| Accruals | Phase I (%) | Phase I/II (%) | Phase II (%) | Phase III (%) | Other (%) | Pilot studies (%) | Total (%) |
| --- | --- | --- | --- | --- | --- | --- | --- |
| | n = 186 | n = 73 | n = 549 | n = 41 | n = 34 | n = 28 | n = 911 |
| 0 | 5.9 | 8.2 | 4.3 | 9.8 | 29.5 | 10.7 | 6.4 |
| 1–4 | 9.1 | 12.3 | 4.3 | 7.3 | 8.9 | 0.0 | 6.0 |
| 5–10 | 12.4 | 4.1 | 7.3 | 9.8 | 2.9 | 10.7 | 8.1 |
| 11–15 | 11.8 | 11.0 | 7.1 | 2.4 | 2.9 | 7.1 | 8.0 |
| 16–20 | 8.6 | 5.5 | 9.5 | 9.8 | 2.9 | 17.9 | 9.0 |
| >20 | 52.2 | 58.9 | 67.5 | 60.9 | 52.9 | 53.6 | 62.5 |
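The 39.1% figure for phase III trials quoted above is simply the sum of the accrual bins at or below 20 in the phase III column of Table 6, as the short check below illustrates.

```python
# Simple arithmetic check against the phase III column of Table 6: the share of
# trials closing with 20 or fewer enrollments is the sum of all bins except >20.
phase_iii_bins = {"0": 9.8, "1-4": 7.3, "5-10": 9.8, "11-15": 2.4,
                  "16-20": 9.8, ">20": 60.9}

at_or_below_20 = sum(pct for bin_, pct in phase_iii_bins.items() if bin_ != ">20")
print(f"phase III trials with <=20 accruals: {at_or_below_20:.1f}%")  # 39.1%
```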

The impact of long development times

The relationship between accruals and development time was also investigated for phase III cooperative group trials closed to patient enrollment at one cooperative group between January 2000 and July 2006 (n = 15). Figure 2 highlights the relationship between development time and accrual. Two of the trials were stopped for scientific/safety reasons. Of the trials that required less than the median development time (n = 7; shown in black), five achieved more than their expected minimum accrual by study closure, and one achieved 94% of its goal. On the other hand, of those trials with development time greater than the median (n = 6), only one achieved greater than 90% of the expected accrual by study closure. The remaining trials closed with less than 15% of expected accrual. This relationship is not limited to cooperative group trials: additional analysis of a more complete data set from the CTEP, published in this issue of CCR (ref. 12; see the article by Cheng et al., p. 5557), shows a strong negative statistical relationship between development time and achievement of accrual goals.

Fig. 2. Accrual performance of phase III trials closed to accrual stratified by development time (n = 15). Phase III cooperative group trials activated and closed to accrual between January 2000 and July 2006 at one cooperative group. Clinical trial study numbers are ordered by final accrual rate at time of study closure. The colors of the bars denote development time (with respect to the overall median development time of 808 calendar days). Black bars, development time less than or equal to the median development time; red bars, development time greater than the median development time; gray bars, trials that were closed due to reasons other than poor accrual. Total accrual was the minimum expected accrual to each study.
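The stratification in Figure 2 can be reproduced in a few lines: split the trials at the median development time and compare accrual achievement (accrued divided by the expected minimum accrual) between the two strata. The sketch below uses invented trial records as stand-ins for the 15-trial cohort.

```python
# Illustrative stratification in the spirit of Figure 2. Trial records are
# hypothetical: (development days, patients accrued, expected minimum accrual).
from statistics import median

trials = [
    (600, 480, 400), (700, 510, 500), (750, 376, 400), (790, 95, 100),
    (850, 60, 400), (1100, 45, 350), (1300, 30, 300), (1600, 20, 250),
]

split = 808  # median development time reported for the cooperative group
fast = [accrued / goal for days, accrued, goal in trials if days <= split]
slow = [accrued / goal for days, accrued, goal in trials if days > split]

print(f"median accrual achievement, development <= {split} days: {median(fast):.0%}")
print(f"median accrual achievement, development >  {split} days: {median(slow):.0%}")
```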

The most recent call for reinvigorating the cooperative group system was issued in April 2010 with the publication of the IOM report “A National Cancer Clinical Trials System for the 21st Century” (1). Although the results and the recommendations from this report may be controversial, the problems it outlines as well as its recommendations are not unique. Thirteen years ago, in 1997, an NCI report concluded, “The clinical trials methodologies used by the 11 cooperative groups and 51 cancer centers have created a system described as a ‘Tower of Babel’” (1). The problem persists today, as evidenced by the summary results presented here and by recent comments referring to the process of developing clinical trials as the “Ordinary Miracle of Cancer Clinical Trials” (13). Additional evidence of the problem can be seen in three other studies. Go and colleagues (14), investigating a total of 495 ECOG phase II and III trials from 1977 to 2006, found that 27% of trials failed to complete accrual. Schroen and colleagues (15), in a study of 248 phase III trials open from 1993 to 2002 from five cooperative groups, discovered that 35% were closed because of inadequate accrual. Finally, Wang-Gillam and colleagues (16), in a study comparing centers in two countries, found that the median time from submittal to opening for a phase II industry-sponsored thoracic oncology clinical trial was 239.5 days at a U.S. institution and 112.5 days at an Italian institution.

Improvements to both the quality of the processes and the time required to develop clinical trials should not be made in isolation. There is a need to rekindle the collaborative spirit of the cooperative group program. As is well stated in the IOM report, such collaboration would include uniformity of back-office support functions while not impairing a group's ability to be creative in developing clinical trials around interesting and important scientific questions. Additionally, a general philosophy of facilitation rather than primarily oversight should be fostered throughout the NCI, cooperative groups, and cancer centers.

Clearly, it is time to go beyond mere recommendations and begin the actual execution of changes in the existing system. Some changes are beginning with efforts such as the timelines developed and implemented by the NCI OEWG (17). However, this represents only one of many possible steps—and involves only some of the many participants that need to change—if the system of clinical trial development as a whole is to be reinvigorated.

We would be remiss if we did not address the other elephant in the room: financial resources. Financial incentives, or what is better characterized as removal of financial disincentives, for investigators and centers are a critical aspect of improving the cooperative group program. In 2004, the estimated median per-patient site cost for a phase III cooperative group trial was $5,000 (18) to $6,000 (19), well above the per-patient reimbursement rate of $2,000, which has remained constant since 1999 (20). In 2008, the NCI increased the rate of reimbursement for a small minority of complex trials by an additional $1,000 (21). However, even this increased rate is not adequate to completely cover all labor costs, per-subject enrollment costs, and additional research-related paperwork and reporting requirements (22). The response by the cooperative group member sites has been dramatic, with 42% planning or considering limiting accruals to cooperative group trials (23) because of per-case reimbursement or staffing issues.

However, this does not imply that a call for increased cooperative group budgets will in itself successfully transform the existing system of cancer research. Recall that the recommendations of the 1997 report were made while funding for cooperative groups was increasing. Simply adding resources to a system in need of repair does not ensure better performance; rather, it is time to revamp the entire system with a focus on effectiveness, efficiency, and financial stability.

The findings presented in this review demonstrate the “chutes and ladders” of the existing system: Some aspects of a concept or protocol are able to bypass some process steps (a ladder), but as shown in the OEWG report, virtually all studies are reprocessed and revised (a chute), sometimes up to six times before proceeding. Although many of these chutes and ladders can be within the same organization, most are the result of interactions between organizations; thus, it is the system that needs to change. It is critical that more ladders be built and more chutes be eliminated in the entire system to achieve the overarching goal of discovering better ways to prevent, control, and cure cancer.

No potential conflicts of interest were disclosed.

References

1. IOM (Institute of Medicine). A National Cancer Clinical Trials System for the 21st Century: reinvigorating the NCI Cooperative Group Program. Washington (DC): The National Academies Press; 2010.
2. Djulbegovic B, Kumar A, Soares HP, et al. Treatment success in cancer: new cancer treatment successes identified in phase 3 randomized controlled trials conducted by the National Cancer Institute-sponsored cooperative oncology groups, 1955 to 2006. Arch Intern Med 2008;168:632–42.
3. Gellhorn A. Invited remarks on the current state of research in clinical cancer. Cancer Chemother Rep 1959;5:1–12.
4. Dilts DM, Sandler AB. Invisible barriers to clinical trials: the impact of structural, infrastructural, and procedural barriers to opening oncology clinical trials. J Clin Oncol 2006;24:4545–52.
5. Dilts DM, Sandler AB, Baker M, et al. Processes to activate phase III clinical trials in a cooperative oncology group: the case of the Cancer and Leukemia Group B (CALGB). J Clin Oncol 2006;24:4553–7.
6. Dilts DM, Sandler AB, Cheng S, et al. Steps and time to process phase III clinical trials at the Cancer Therapy Evaluation Program. J Clin Oncol 2009;27:1761–6.
7. Dilts DM, Sandler AB, Cheng S, et al. Development of clinical trials in a cooperative group setting: the Eastern Cooperative Oncology Group. Clin Cancer Res 2008;14:3427–33.
8. Bohn R. Noise and learning in semiconductor manufacturing. Manage Sci 1995;41:31–42.
9. Field J, Sinha K. Applying process knowledge for yield variation reduction: a longitudinal field study. Decision Sciences 2005;36:159–86.
10. Schilsky RL. Personalizing cancer care: American Society of Clinical Oncology presidential address 2009. J Clin Oncol 2009;27:3725–30.
11. Dilts DM, Sandler AB, Cheng S, et al. Accrual to clinical trials at selected comprehensive cancer centers. J Clin Oncol 2008;26(suppl):abstract 6543.
12. Cheng S, Dietrich M, Finnigan S, Dilts DM. A sense of urgency: evaluating the link between clinical trial development time and the accrual performance of CTEP-sponsored studies. Clin Cancer Res 2010;16:5557–63.
13. Steensma DP. Processes to activate phase III clinical trials in a cooperative oncology group: the elephant is monstrous. J Clin Oncol 2007;25:1148; author reply 1149.
14. Go RS, Meyer M, Mathiason MA, et al. Nature and outcome of clinical trials conducted by the Eastern Cooperative Oncology Group (ECOG) from 1977 to 2006. J Clin Oncol 2010;28(suppl):abstract 6069.
15. Schroen AT, Petroni GR, Wang H, et al. Challenges to accrual predictions to phase III cancer clinical trials: a survey of study chairs and lead statisticians of 248 NCI-sponsored trials. J Clin Oncol 2009;27(suppl):abstract 6652.
16. Wang-Gillam A, Williams K, Novello S, et al. Time to activate lung cancer clinical trials and patient enrollment: a representative comparison study between two academic centers across the Atlantic. J Clin Oncol 2010;28:3803–7.
17. National Cancer Institute (NCI). Report of the Operational Efficiency Working Group: compressing the timeline for cancer clinical trial activation. Bethesda (MD): National Cancer Institute; 2010.
18. Emanuel E, Schnipper L, Kamin D, Levinson J, Lichter A. The costs of conducting clinical research. J Clin Oncol 2003;21:4145–50.
19. C-Change. A guidance document for implementing effective cancer clinical trials: executive summary, version 1.2. Washington (DC): C-Change; 2005.
20. IOM (Institute of Medicine). Multi-center phase III clinical trials and NCI cooperative groups. Washington (DC): IOM; 2009.
21. Mooney M. Cooperative group clinical trials complexity funding: model development and trial selection process. Clinical Trials and Translational Research Advisory Committee Meeting, Bethesda (MD); 2008.
22. American Cancer Society Cancer Action Network. Barriers to provider participation in clinical cancer trials: potential policy solutions. Washington (DC): American Cancer Society; 2009.
23. Baer AR, Kelly CA, Bruinooge SS, Runowicz CD, Blayney DW. Challenges to National Cancer Institute-supported cooperative group clinical trial participation: an ASCO survey of cooperative group sites. J Oncol Pract 2010;6:1–4.