To address the need for clinical investigators in oncology, the American Association for Cancer Research (AACR) and the American Society of Clinical Oncology (ASCO) established the Methods in Clinical Cancer Research Workshop (MCCRW). The workshop's objectives were to: (i) provide training in the methods, design, and conduct of clinical trials; (ii) ensure that clinical trials met federal and international ethical guidelines; (iii) evaluate the effectiveness of the workshop; and (iv) create networking opportunities between young investigators and senior faculty mentors. Educational methods included: (i) didactic lectures, (ii) Small Group Discussion Sessions, (iii) Protocol Development Groups, and (iv) one-on-one mentoring. Learning focused on the development of an Institutional Review Board (IRB)-ready protocol, which was submitted on the last day of the workshop. Evaluation methods included: (i) pre- and postworkshop tests, (ii) students' workshop evaluations, (iii) faculty ratings of protocol development, (iv) students' productivity in clinical research after the workshop, and (v) an independent assessment of the workshop. From 1996 to 2014, 1,932 students from diverse backgrounds attended the workshop. Students' knowledge improved significantly from the pre- to the postworkshop exams (P < 0.001). Across the classes, student evaluations were very favorable. At the end of the workshop, faculty rated 92% to 100% of the students' protocols as ready for IRB submission. Intermediate and long-term follow-ups indicated that more than 92% of students were actively involved in patient-related research, and 66% had implemented five or more protocols. This NCI-sponsored MCCRW has had a major impact on training clinicians to design and implement clinical trials in cancer research.

Although there have been recent advances against the many diseases collectively called cancer, there continues to be an urgent need for new methods of early detection, prevention, and treatment. This need is all the more urgent because a large segment of the population (the “baby boomers”) is now in its cancer-prone years.

Unfortunately, considerable concern has been expressed in the past and in more recent literature that there is a serious shortage of clinical/translational investigators (an “endangered species”) who can successfully design and conduct the clinical trials needed to determine if a particular new therapeutic, early detection, or prevention approach will help patients (1–3).

Since the time of those early reports, there continues to be a serious shortage of translational/clinical investigators who can design and conduct state-of-the-art clinical trials that match innovations in basic science with early drug development (4, 5). There continues to be a need for clinical/translational scientists with the ability to “build bridges across research's Valley of Death,” that is, the gap between basic discovery and clinical application (refs. 6–10). Simultaneously, relatively few biostatisticians focus specifically on clinical trials, at a time when the methods for such trials have evolved significantly to meet the challenges of new therapies and diagnostic methods.

For investigators involved in the design and conduct of clinical trials, advances in technology require continued training to take advantage of the latest investigative tools and methods, including molecular and imaging techniques (11–17) both in treatment and in prevention (18). In addition, it is a major challenge for clinical investigators to maintain an understanding of regulatory agencies, such as the FDA and European Medicines Agency (EMA; refs. 19–23), in order to effectively utilize the various mechanisms available for expedited review and approval of novel therapies such as Breakthrough Designation (24–27). Finally, it is necessary for clinicians to understand effective clinical trial design and biostatistical principles, as well as the ethical principles and the need for transparency in disclosure of potential conflicts, in order to effectively communicate with patients regarding clinical trials and patients' willingness to participate in them (28–37).

Although there has been a tremendous increase in the number of new potentially therapeutic agents introduced into clinical trials, a relatively small percentage of these agents produce positive clinical trial results. The reasons for this are multiple but certainly include (i) issues in the preclinical science which supports bringing a potential new therapeutic into clinical trials, and (ii) some design issues in early clinical trials (38, 39). Well-designed and well-implemented clinical trials are needed more than ever, given the limited number of patients eligible for trials, patients' valuable time, the expense, and the time needed to conduct well-designed clinical trials (40, 41).

Unfortunately, the success rate for phase III clinical trials remains unacceptably low (42–47). Given the costs and resources allocated to clinical trials, this situation is untenable from the perspective of cancer patients, clinical researchers, research supporters, and industry. Often, phase I and II trials do not provide enough understanding to predict phase III success (28, 40, 45). Potential remedies include: (i) better science in selecting new therapeutics to take forward into the clinic, (ii) a better understanding of the reasons for the responses of patients in phase I and II trials, (iii) design of new and better phase II trials that are more predictive of phase III success, and (iv) better patient selection so that only those patients with the appropriate tumor molecular genotype/phenotype are entered into the trials.

The high demand for clinical investigators is further compounded by the challenges of providing early-career oncologists with the most efficient and effective training in clinical/translational research and of affording them the time to devote to that training. Certainly, mentorship plays a major role (48, 49). There are excellent fellowship and training programs in many academic medical centers. However, so much needs to be accomplished to master the increasingly complex care of cancer patients that little time remains for a concentrated effort in training young oncologists in clinical trial methods (9).

Continued education in the methodology, design, and implementation of clinical trials is needed to address the shortage of well-trained clinical investigators (5, 50, 51). Even well-trained clinical investigators beginning a career in the cancer field are deterred from clinical research by many obstacles including the lack of: funding, institutional support (e.g., training, certification, research time allocation), coordinated clinical research infrastructure, and mentorship (7, 10, 49, 52).

Taking the above background into consideration, the leadership of the American Association for Cancer Research (AACR), later joined by the American Society of Clinical Oncology (ASCO), felt that they had unique resources (expert investigators, teachers, and mentors) whom they could bring together to ensure that the next generation of clinical investigators received the best training possible. In an attempt to provide an intense period of training, and to set the stage for continued mentoring in the design and conduct of clinical trials, members of AACR and ASCO set out to develop and implement a 1-week training workshop in clinical trial methods (Supplementary Materials S1). To ensure the rigor and quality of peer review needed for such a course, NCI funding was sought to help support the workshop after a pilot period. The goal was to have the “Methods in Clinical Cancer Research Workshop” (MCCRW) provide knowledge, tools, mentorship, and peer-to-peer networking to overcome many of the daunting obstacles faced by young clinical investigators.

The initial training grant application was submitted to the NCI in 1996 but was not funded. The application was resubmitted after carefully addressing the critiques, and the first workshop grant was given a high priority for funding by the NCI that same year. Because the venue needed to be relatively isolated, yet accessible by major transportation and at a reasonable cost, the first event was held in Park City, Utah, and the workshop was held thereafter each summer in Vail, Colorado.

Special Attributes of the MCCRW

As a result of several discussions among the organizers, it was decided that the workshop would incorporate the following attributes:

  • (i) Teach the best possible methods for clinical trials so that patients would only be asked to participate in well-designed clinical trials that would answer important questions and move the field forward.

  • (ii) Require the student to produce a “product” at the end of the workshop: a concept sheet, a full protocol, and a patient informed consent form ready for Institutional Review Board (IRB) submission at the student's institution.

  • (iii) Conduct the workshop in an environment far from distractions in order to command the student's full attention for as long as a physician could comfortably be away from patient care and other professional responsibilities.

  • (iv) Attract the best possible students through a competitive application process and by offering excellence in teaching. Each applicant would need the support of a specific mentor/leader at his or her institution who would commit to the continued mentoring of the applicant.

  • (v) Recruit a multidisciplinary faculty comprising biostatisticians, surgical oncologists, medical oncologists, radiation oncologists, pediatric oncologists, gynecologic oncologists, radiologists, pathologists, pharmacologists, bioethicists, patient advocates, and regulatory affairs experts with extensive experience in the design and conduct of clinical trials.

  • (vi) Maintain a low student-to-faculty ratio (preferably 2:1).

  • (vii) Use sound methods to evaluate the effectiveness of the workshop based on the following outcomes: transfer of knowledge in clinical trial design and methodology, implementation and completion of protocols designed at the workshop, retention of workshop students in patient-oriented research, and productivity of students in clinical trials after the workshop.

  • (viii) Demonstrate the necessary interactions between clinical investigators and their biostatistical colleagues working on trial methodology, thereby providing a model of collaboration for beginning clinical investigators.

Specific Aims of the MCCRW

The specific aims of the workshop were to:

  • (i) Provide training to a group of early-career clinical investigators in the methods, design, and conduct of clinical trials, including specialized designs for targeted and immunologic therapies, and methods to determine as rapidly as possible whether or not a particular design approach was effective.

  • (ii) Teach methods that will accelerate the development of better-designed clinical trials that are early predictors of success. These designs would ensure that patients are not asked to participate in clinical trials that will not yield important insights.

  • (iii) Facilitate and foster the development of networks and mentoring relationships, both between the faculty and students and among the students themselves.

  • (iv) Evaluate the workshop by the following criteria:

    • (a) Evaluation ratings of the workshop program, activities, and faculty by students and other participants.

    • (b) Knowledge test scores before and after the workshop.

    • (c) Students' progress and advances during the workshop and development and completion of their protocol.

    • (d) Percentage of protocols produced at the workshop that were activated after the workshop.

    • (e) Percentage of students who reported remaining in clinical trial research and having implemented five or more clinical trials at 1 to 4 years (intermediate follow-up) and 5 or more years (long-term follow-up) after the workshop.

    • (f) Comparisons of publication rates, citation impact, levels of clinical trial participation, collaboration, and collaborative networks of applicants selected versus those not selected for the MCCRW. It was felt that these objective analyses, performed by Thomson Reuters, would complement the follow-up self-report results.

Selection of the Workshop Directors, Faculty, Educational Evaluator, and Candidates

The criteria for selecting workshop directors were outstanding accomplishments in clinical research and substantial teaching experience. In the first year, AACR and ASCO each selected an outstanding clinical investigator as its representative workshop (course) director. In year 2, a third workshop director, a biostatistician, was added to address the critically important role of biostatisticians in the design and conduct of clinical trials. The biostatistician workshop director was nominated and agreed upon by both organizations. Through a request for proposals, an expert in educational evaluation was also selected and agreed upon by both organizations.

The Executive Committee for the workshop (the workshop directors and administrative representatives from AACR and ASCO) met each year to set the curriculum and select the faculty members. In acknowledgment of their importance, patient advocates were added to the faculty after the workshop had been in operation for about 5 years. Given the intensity of the workshop, as described in Supplementary Materials S1 and S2, the student-to-faculty ratio was set at approximately 2:1.

The workshop was announced to oncology training program directors and cancer center directors, and announcements were published by AACR and ASCO in their journals and on their websites. The application required a curriculum vitae, evidence of involvement in clinical research to date, a proposed project (later an initial concept sheet), and a written commitment from the student's sponsor at the home institution for continued mentorship. A Selection Committee, comprising faculty members of the workshop for that year, selected the candidates, with the goal that 75% would be in their second or third year of subspecialty fellowship (or residency, in the case of radiation oncologists) and 25% would be junior faculty members (no more than 5 years on faculty).

In addition to the candidates, a limited number of corporate attendees were also invited to participate as observers during the workshop. Beginning in 2011, to increase the pool of qualified biostatisticians for subsequent workshops, a small number of junior biostatisticians was invited as nonfaculty trainees to be mentored by faculty biostatisticians.

Workshop Outcomes

Short-term outcomes were established from different sources. The first was the participants' ratings of the quality of the MCCRW program, activities, and faculty. The second was related to their performance during the workshop, namely (i) the gains from the pre- to the postworkshop test scores, (ii) the new concepts and processes that they reported acquiring during the workshop, and (iii) faculty ratings of the students' progress in learning and in completing their requisite research protocols in the Protocol Development Groups.

Intermediate and long-term outcomes were also measured. One measure consisted of determining from the students, through follow-up questionnaires one or several years after the workshop, (i) whether their participation in the workshop had been valuable to them in conducting clinical trials and in advancing their careers, (ii) whether they had kept in contact and networked with the faculty, and (iii) whether they had recommended the workshop to other clinical cancer researchers.

A second measure of intermediate and long-term outcomes focused on the students' professional status, activities, and research performance. The latter was derived from two sources. One source was the follow-up questionnaires described above, in which the students were asked to report whether the protocol that they developed at the workshop had been submitted to the IRB, approved, and funded. In addition to these milestones, the students were asked to indicate the following:

  • (i) whether they had completed their training,

  • (ii) their present professional position/title,

  • (iii) the focus and level of their research activity, and

  • (iv) the number of additional protocols they had developed and implemented.

The other source was the independent and objective evaluation commissioned by AACR and ASCO and conducted by Thomson Reuters. The outcomes assessed in that study included research productivity, citation impact, clinical trial involvement, and the number of collaborations of applicants who were admitted to the workshop compared with those who were not accepted as students.

Demographic data over time

Between 1996 and 2014, the number of applicants per year ranged from 142 to 279 (median, 195), with the number of students set at 100 per year (Fig. 1). During this period, a total of 1,932 students attended the workshop, a number sufficient for the purposes of definitive follow-up. It is noteworthy that demand for the workshop has remained considerable and quite steady over its entire history.

Figure 1. Number of applicants and number of students accepted each year (1996–2014).

The representation of women and minorities is needed for building clinical investigator capacity and is therefore an important aspect of this training opportunity. Accepted students were diverse, covering a variety of specialties in cancer medicine (Table 1) as well as gender and racial/ethnic categories (Tables 2A and 2B). In addition, the workshop has been attended by students from a broad spectrum of over 280 cancer centers and academic institutions across the country (Supplementary Materials S3). Throughout the years, there have been changes in faculty to reflect additional areas of expertise needed to teach the latest advances in clinical trial design (Supplementary Materials S4), e.g., immune oncology agents, special imaging techniques, and other rapidly evolving disciplines in clinical cancer research.

Table 1.

Specialties of the students (2007–2014).

Specialty            2007         2008         2009         2010         2011         2012         2013         2014         Total
Medical oncology     57 (56.4%)   51 (51.0%)   56 (56.6%)   53 (53.0%)   58 (58.6%)   46 (46.0%)   50 (51.5%)   49 (49.5%)   420 (52.8%)
Radiation oncology   12 (11.9%)   12 (12.0%)   19 (19.2%)   13 (13.0%)   14 (14.1%)   18 (18.0%)   7 (7.2%)     12 (12.1%)   107 (13.5%)
Hematology           9 (8.9%)     11 (11.0%)   9 (9.1%)     14 (14.0%)   3 (3.0%)     13 (13.0%)   12 (12.4%)   11 (11.1%)   82 (10.3%)
Surgical oncology    7 (6.9%)     13 (13.0%)   6 (6.1%)     9 (9.0%)     5 (5.1%)     3 (3.0%)     4 (4.1%)     5 (5.1%)     52 (6.5%)
Pediatric oncology   8 (7.9%)     6 (6.0%)     4 (4.0%)     7 (7.0%)     5 (5.1%)     3 (3.0%)     5 (5.2%)     3 (3.0%)     41 (5.2%)
Gynecology           5 (5.0%)     6 (6.0%)     5 (5.1%)     2 (2.0%)     4 (4.0%)     6 (6.0%)     3 (3.1%)     2 (2.0%)     33 (4.2%)
Neuro-oncology       3 (3.0%)     1 (1.0%)     0 (0.0%)     1 (1.0%)     6 (6.1%)     4 (4.0%)     4 (4.1%)     3 (3.0%)     22 (2.8%)
Phase I studies      0 (0.0%)     0 (0.0%)     0 (0.0%)     0 (0.0%)     0 (0.0%)     6 (6.0%)     4 (4.1%)     6 (6.1%)     16 (2.0%)
Immunology           0 (0.0%)     0 (0.0%)     0 (0.0%)     0 (0.0%)     0 (0.0%)     0 (0.0%)     4 (4.1%)     3 (3.0%)     7 (0.9%)
Cutaneous            0 (0.0%)     0 (0.0%)     0 (0.0%)     1 (1.0%)     1 (1.0%)     0 (0.0%)     1 (1.0%)     1 (1.0%)     4 (0.5%)
Other                0 (0.0%)     0 (0.0%)     0 (0.0%)     0 (0.0%)     3 (3.0%)     1 (1.0%)     3 (3.1%)     4 (4.0%)     11 (1.4%)
Total                101          100          99           100          99           100          97           99           795

Note: Number (percentage) of students in each specialty, by workshop year (2007–2014). Students' specialties spanned a wide range over the years of the workshop; categories were self-selected.

Table 2A.

Diversity of students – career status and gender.

Year      Students (N)   Fellows, n (%)    Junior faculty, n (%)   Males, n (%)     Females, n (%)
1996      105            a                 a                       66 (64.10%)      39 (37.90%)
1997      117            76 (65.00%)       41 (35.00%)             71 (60.70%)      46 (39.30%)
1998      104            75 (72.10%)       29 (27.90%)             84 (80.80%)      20 (19.20%)
1999      110            78b (70.90%)      30b (27.30%)            74 (67.30%)      36 (32.70%)
2000      100            79 (79.00%)       21 (21.00%)             49 (49.00%)      51 (51.00%)
2001      97             75 (70.60%)       22 (21.10%)             58 (59.79%)      39 (40.21%)
2002      100            75 (75.00%)       25 (25.00%)             61 (61.00%)      39 (39.00%)
2003      102            76 (74.50%)       26 (25.50%)             51 (50.00%)      51 (50.00%)
2004      100            76 (76.00%)       24 (24.00%)             53 (53.00%)      47 (47.00%)
2005      101            75 (74.30%)       26 (25.70%)             56 (55.40%)      45 (44.60%)
2006      101            76 (75.20%)       25 (24.80%)             60 (59.40%)      41 (40.60%)
2007      101            75 (74.30%)       26 (25.70%)             56 (55.45%)      45 (44.55%)
2008      100            75 (75.00%)       25 (25.00%)             56 (56.00%)      44 (44.00%)
2009      99             74 (74.75%)       25 (25.25%)             56 (56.57%)      43 (43.43%)
2010      100            75 (75.00%)       25 (25.00%)             53 (53.00%)      47 (47.00%)
2011      99             73 (73.74%)       26 (26.26%)             48 (48.48%)      51 (51.52%)
2012      100            77 (77.00%)       23 (23.00%)             50 (50.00%)      50 (50.00%)
2013      97             74 (76.29%)       23 (23.71%)             45 (46.40%)      52 (53.60%)
2014      99             74 (74.75%)       25 (25.25%)             46 (46.46%)      53 (53.54%)
Average   102.3          75.5 (74.08%)     26 (25.36%)             57.53 (56.47%)   44.8 (43.64%)

Note: Data on gender were only informally collected between 1996 and 2002. However, in 2003, we began to formally ask the students to check the appropriate demographic category on their application forms.

aData not available;

bOnly partial data available.

Table 2B.

Diversity of students – racial/ethnic categories.

Year | American Indians/Alaskan Natives | Asian | Native Hawaiians or Pacific Islanders | Black or African Americans | White | Hispanic or Latino | Unknown/not reported | Total Jr. Faculty or Fellow | Total
2003 38 57 102 102 
2004a 56 38 100 100 
2005 33 52 101 101 
2006 23 62 10 101 101 
2007 25 60 12 101 101 
2008 30 55 100 100 
2009 31 55 99 99 
2010 32 48 10 100 100 
2011 26 58 99 99 
2012 37 49 100 100 
2013 38 45 97 97 
2014 39 57 99 99 

Note: Data on racial/ethnic categories were only informally collected between 1996 and 2002. However, in 2003, we began to formally ask the students to check the appropriate demographic category on their application forms.

a2004 application did not include an option for “Asian.”

Evaluation of Workshop Program, Learning Activities, and Faculty

Participants' onsite evaluations of the workshop program had high rates of return (75% to 99%, mean = 92%). A large percentage of the respondents indicated that the objectives of the workshop were met (95.7% to 99.9%; Supplementary Table S1).

With a rating of 4 being “Strongly Agree” and 1 being “Strongly Disagree”, mean ratings ranged from 3.5 to 3.7 for the didactic lectures, 3.7 to 3.8 for the Protocol Development Groups, and 3.4 to 3.5 for the Small Group Discussion Sessions (Supplementary Table S2). Participants' ratings of faculty in the Protocol Development Groups ranged from 3.7 to 3.9 with a mean rating of 3.8.

Evaluation of Students' Performance at the Workshop

Paired t test comparisons of mean scores on the 50- to 60-question multiple-choice examinations revealed significant increases (P < 0.001) from the pre- to the postworkshop test scores for all of the classes (Table 3).
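As an illustration of this comparison, the minimal Python sketch below runs a paired t test on pre- and postworkshop scores; the score arrays are hypothetical and stand in for the workshop's actual examination data.

```python
# Minimal sketch of the paired t-test comparison described above.
# The pre/post score arrays are hypothetical, not the workshop's actual data.
import numpy as np
from scipy import stats

# Hypothetical percent-correct scores for the same ten students,
# measured before and after the workshop (paired by student).
pre_scores = np.array([48, 55, 51, 60, 47, 53, 58, 50, 56, 52], dtype=float)
post_scores = np.array([60, 66, 63, 71, 59, 65, 70, 62, 68, 64], dtype=float)

# Paired t test: does the mean within-student change differ from zero?
t_stat, p_value = stats.ttest_rel(post_scores, pre_scores)

print(f"pre:  mean {pre_scores.mean():.1f}, SD {pre_scores.std(ddof=1):.1f}")
print(f"post: mean {post_scores.mean():.1f}, SD {post_scores.std(ddof=1):.1f}")
print(f"paired t = {t_stat:.2f}, P = {p_value:.2g}")
```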

Table 3.

Pre- and postworkshop mean test score comparisons (2001–2014).

Year   Examinees, N   Preworkshop test, mean score (SD)   Postworkshop test, mean score (SD)
2001   101a           49 (13)                             62 (10)
2002   101a           52 (10)                             62 (8)
2003   102            47 (11)                             62 (9)
2004   103a           58 (9)                              66 (9)
2005   101            55 (9)                              62 (10)
2006   101            57 (10)                             66 (9)
2007   100            54 (8)                              66 (9)
2008   101a           52 (8)                              63 (8)
2009   98             56 (9)                              66 (9)
2010   101a           51 (9)                              62 (9)
2011   99             55 (8)                              70 (9)
2012   99             58 (8)                              71 (8)
2013   95             55 (9)                              69 (9)
2014   99             58 (8)                              72 (8)

Note: There was a statistically significant increase each year from the pre- to the postworkshop test score (paired t test, P < 0.001).

aTotal includes Jr Biostatistical and/or Corporate Attendees whose career status was neither “Fellows” nor “Jr. Faculty.”

Faculty ratings of the students' protocol development indicated that 92% to 100% (mean = 97%) of the final protocols, which included a protocol concept sheet, a full protocol, and the informed consent form, were judged acceptable and ready for IRB submission (Table 4). Regarding their individual performance during the Protocol Development Groups, 98% to 100% (mean = 99%) of students were rated as having made progress (Table 4).

Table 4.

Faculty evaluation of students' final clinical trial protocols and progress in Clinical Trial Protocol Development Group (2002–2013).

Percentages of students at workshops held from 2002 to 2013 who were evaluated by the faculty as having an acceptable clinical trial protocol and as having made progress.

Year    Final protocol acceptable for submission to the IRB (%)   Student made progress on the protocol (%)
2002*   92                                                        100
2003    98                                                        99
2004    98                                                        100
2005    97                                                        98
2006    96                                                        99
2007    97                                                        98
2008    100                                                       100
2009    97                                                        98
2010    100                                                       100
2011    95                                                        100
2012    98                                                        99
2013    93                                                        99
Mean    97                                                        99

Note: The evaluation items introduced in 2002 are indicated by an asterisk. In 2014, the questions were changed to provide additional narrative on each student and their progression through the workshop.

Evaluation of Students' Performance after the Workshop

Intermediate (1–4 years) and long-term (≥5 years) follow-ups of workshop students had return rates of 32% to 58%. With that limitation, Supplementary Table S3 indicates the following: (i) 79% to 88% of the respondents remained in academia; (ii) 92% or more spent some to substantial time in patient-related research; (iii) 99% to 100% indicated that their participation in the workshop was valuable to conducting clinical trials, and 89% to 93% that it was valuable to advancing their careers; and (iv) 48% to 63% maintained contact with workshop faculty and/or fellow workshop students. Forty-three percent to 52% of the respondents had their workshop protocols approved by the IRB and funded 2 to 5 years after the workshop. In addition, 6% of respondents at 2 years and 50% at 5 years after the workshop had implemented five or more additional protocols.

Results of the Thomson Reuters Study

The complete and detailed Thomson Reuters report, titled “Evaluation of the American Association for Cancer Research (AACR) Workshop,” is provided in the Supplementary Materials (Supplementary Materials S5). The following is a summary of the key findings.

The report was based on the MCCRW applicants of the years 2002, 2004, 2006, 2008, and 2010. The sample used for the analyses included 480 students (i.e., selected applicants) and 520 unsuccessful applicants (i.e., applicants not selected for the workshop). As Thomson Reuters reported, “Searching of the Web of Science found a total of 31,261 publications between 1999 and 2013 which mapped to students and applicants…” (Supplementary Materials S5).

Overall, the yearly number of publications was higher for the students than for the unsuccessful applicants during the 3-year period after the workshop as compared with the 3-year period preceding it (Fig. 2). “The average citation impact of the workshop participants increased between the before and after periods from around one and a half times the world average (1.52) to nearly twice the world average (1.95). The citation impact of unsuccessful applicants remained stable between the two periods (1.29)” (Supplementary Materials S5).
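For readers unfamiliar with the metric quoted above, the sketch below shows one common way a normalized citation impact can be computed: each paper's citations are divided by a world-average baseline for comparable papers, and the ratios are averaged. The numbers and the simple baseline are illustrative assumptions, not the Thomson Reuters methodology itself.

```python
# Illustrative calculation of a normalized citation-impact ratio.
# All values are hypothetical; this is not the Thomson Reuters methodology.

def normalized_citation_impact(citations_per_paper, world_baselines):
    """Mean ratio of each paper's citations to the world-average citations
    expected for a paper of the same field and publication year."""
    ratios = [c / b for c, b in zip(citations_per_paper, world_baselines)]
    return sum(ratios) / len(ratios)

# Hypothetical citation counts for five papers and the corresponding
# world-average (field- and year-matched) expected citation counts.
citations = [12, 30, 4, 18, 9]
baselines = [8.0, 15.0, 5.0, 10.0, 7.5]

impact = normalized_citation_impact(citations, baselines)
print(f"normalized citation impact = {impact:.2f}")
# A value of about 1.5 means the papers are cited roughly 1.5x the world average.
```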

Figure 2. Total yearly number of publications, before and after the workshop, of applicants who attended versus those who did not attend the MCCRW. Plot courtesy of Dr. Yu Shyr.

Regarding clinical trials, 917 trials were identified that were linked to 346 workshop students and unsuccessful applicants; of these 346 individuals, 205 were workshop students and 141 were unsuccessful applicants (Supplementary Materials S5). It was found that (i) the students' number of clinical trials increased significantly between the periods before and after the workshop, and (ii) the students collaborated with more coauthors and had larger collaborative networks than the unsuccessful applicants (Supplementary Materials S5).

This report presents the results of an NCI-sponsored training grant, titled “Methods in Clinical Cancer Research.” It has been well documented that there is a need for training more clinicians in clinical/translational research.

The workshop provides students with an intense week-long educational program that covers clinical trial methodology and the design and implementation of clinical trials through a mix of didactic lectures, Small Group Discussion Sessions, one-on-one mentoring, and Protocol Development Groups.

The students selected for the workshop are diverse in terms of gender, racial/ethnic backgrounds, oncology specialties, and training programs at various institutions. Even though this workshop has now been offered for over 20 years, there remains a continued “thirst” for it as evidenced by the fact that there are still many more well-qualified applicants than there are slots available to the student participants (Fig. 1).

By the educational value parameters measured, (i) the didactic sessions have met their goal of improving the students' level of knowledge based on pre- and postworkshop test scores; (ii) the unique Protocol Development Groups, with their deliverable of an IRB-ready clinical trial protocol and patient informed consent form as an end product before the student leaves the venue, have also been assessed as a successful endeavor; and (iii) the workshop outcomes as assessed by the students and faculty have all demonstrated the significant value of the workshop.

Students who completed the workshop and who responded to the follow-up surveys have largely remained in academic positions (79% to 88%), spent time ranging from “some” to “substantial” in clinical research (92% to 97% of respondents), and submitted their workshop protocols, as well as multiple other protocols, to their IRBs and implemented them.

An independent and objective evaluation by an outside firm compared the applicants who were selected to attend the workshop as students with those who were not selected (without inferring a causal relationship). The data indicate that the selected students had higher publication rates, greater citation impact, higher levels of involvement in clinical trials, and more collaborations and larger collaborative networks.

The Protocol Development Groups are a unique feature of the workshop that cannot be provided at the students' home institutions. With the current size of the workshop (100 students and about 45 faculty), there are 12 Protocol Development Groups to which individuals are assigned for the week. Faculty members guide and mentor students through the protocol development process each day, building on the topics of the lecture sessions. Students have daily assignments that build upon the previous day's work until the final assignment, a complete, IRB-ready protocol, is submitted on the last day.

Throughout the Protocol Development Group sessions, a second tier of education and networking takes place among the students. In recent years, the Protocol Development Groups have been arranged, when possible, by disease site to encourage students to engage one another more openly and to build their own peer-to-peer networks that last well beyond the end of the workshop. Recent student evaluations have praised the Protocol Development Groups that were focused around similar disease sites instead of around phase I trials, phase II trials, biomarker trials, etc.

The faculty observed several important organizational points, including the following:

  • It is vitally important to have a biostatistician as one of the course directors.

  • The faculty roster should be frequently updated to reflect new focus areas such as patient advocacy, survivorship, biomarkers, novel imaging techniques, immunology, and many other timely topics. In particular, having patient advocates on the faculty has brought a very special and needed dimension to the workshop, especially the Protocol Development Groups.

  • Access to leaders in the field of clinical trial design and implementation enables students to leave the workshop with not only an expanded knowledge base, but also a core group of mentors and potential collaborators to enhance their careers (Table 5). Students build relationships with the faculty and other students that prove to be invaluable for the success of their trials and, more broadly, for their professional careers in clinical cancer research.

Table 5.

Average number of collaborators for students and applicants by cohort year.

Cohort year   Students/applicants   Average collaborators before workshop   Average collaborators after workshop   Change
2002          Students              24.0                                    37.0                                   +13.0
              Applicants            14.7                                    20.2                                   +5.5
2004          Students              28.7                                    80.9                                   +52.2
              Applicants            30.2                                    34.0                                   +3.8
2006          Students              49.0                                    80.3                                   +31.3
              Applicants            18.0                                    25.3                                   +7.3
2008          Students              34.9                                    69.8                                   +34.9
              Applicants            18.7                                    32.4                                   +13.7
2010          Students              31.5                                    97.1                                   +65.6
              Applicants            17.4                                    35.6                                   +18.2

Note: Comparison of the average number of collaborators of students who attended the workshop vs. applicants who did not attend the workshop. These data were gathered by Thomson Reuters as part of a study commissioned by AACR and ASCO to provide independent evaluation of the workshop.

Additionally, the following qualitative observations have been expressed by various course directors:

  • The protocols that are written and ultimately implemented reflect significant changes from the original concepts that the students bring to the workshop. This is the result of substantive input from the faculty and a direct measure of the learning that takes place during the workshop, all of which has led to improved clinical trial protocols.

  • Many important new concepts have been described and utilized in the course by the faculty, such as adaptive study designs, randomized discontinuation designs, N-of-1 (patient as own control) designs, and multiple other designs.

  • Faculty enthusiasm is high year after year, with virtually no one declining an invitation to teach in the workshop. This is very encouraging given the intensity of the workshop, the amount of work involved in the teaching, the follow-up mentoring required, and the time that faculty must spend away from their regular positions and families. The devotion of the faculty to this workshop, and the breadth of their expertise, have been exemplary (Supplementary Materials S4).

  • Although some have suggested putting the workshop online or publishing the syllabus, the faculty is of the strong opinion that these approaches will not capture the most important aspects of the course, e.g., the Protocol Development Groups and the one-on-one mentoring sessions with continued mentorship for years beyond the course. The faculty, students, and outside observers all agree that attending the workshop in person, with the opportunity for one-on-one interactions with the faculty, is essential to the training experience.

The MCCRW has been held since 1996, and over the course of nearly two decades it has inspired similar programs in Europe (Flims, Switzerland; now Zeist, The Netherlands), Australia [the Australia and Asia Pacific Clinical Oncology Research Development (ACORD) Workshop], and India [the Collaboration for Research Methods Development in Oncology (CReDO) Workshop], which have been repeated on an annual or biennial basis, as well as individual programs in South America, Asia, the Middle East, and Africa. The workshop in Vail has become highly respected in the field, and these related workshops are often referred to and advertised as “the Vail Course in Europe” or as “based on the Vail Workshop.” Others have written about these workshops in Europe (53, 54). Comments on the Vail Workshop have also been published (55). Riechelmann and colleagues (48) published a paper on MCCRW students from 1996 to 2004 and confirmed that mentorship was invaluable to oncologists in enhancing their research experience and expertise.

This workshop is one of the methods by which the NCI supports education and training. The NCI Career Development (K) award program, which supports investigators to develop their cancer research programs and achieve independence, has also been reviewed (56). That evaluation showed that the program had a positive impact “not only on participants' biomedical research careers, but also on achieving outcomes significant to the scientific enterprise.”

In addition to the K award, the Clinical and Translational Science Award program recognizes the importance of mentorship; that program has also been reviewed (57). We feel that the MCCRW adds significantly to that mentorship approach. Of note, through 2014, 18 former MCCRW students have returned to join the faculty, including two who have become co-directors.

To keep the workshop timely and on pace with rapidly advancing cancer science, a number of potential modifications and additions are under consideration for future workshops, including practical molecular tumor boards, transcriptomics, FDA Breakthrough Therapy designation, and real-world evidence and artificial intelligence (AI) approaches.

In summary, this 1-week, focused, and intensive training workshop has been shown to increase students' knowledge base, competence in clinical cancer research methods, and productivity in the clinical cancer research enterprise. Most of the students remain in patient-oriented clinical/translational research. Students have a stronger publication record, wider networks of collaborators, and greater clinical trial participation than their counterparts who were not selected to participate in the workshop. Participants report that the MCCRW has been a focus for multiple collaborations and a great source of lifelong, invaluable mentorship.

There continue to be concerns about having an insufficient number of clinical cancer investigators. Based on the data presented, it is believed that this workshop will continue to be an invaluable asset to help in the training of clinical cancer investigators.

The ASCO/AACR Methods in Clinical Cancer Research Workshop is developed following the standards for independence articulated by the accrediting standards for continuing medical education. These standards include mandatory disclosure of financial relationships with commercial interests, identification of relevant conflicts, and implementation of mitigation strategies. Financial relationships reported by the directors and faculty are provided to attendees. During all phases of planning for the program, potential conflicts are mitigated through a peer-review process and/or through individual recusal when appropriate. Participants are made aware of commercial support consistent with accreditation standards.

D.D. Von Hoff reports other support from ASCO outside the submitted work. M.L. Disis reports grants from Pfizer, EMD Serono, Precigen, and Janssen outside the submitted work; in addition, M.L. Disis has a patent for University of Washington pending, issued, licensed, and with royalties paid. L.M. Ellis reports personal fees from Fibrogen and New Beta Innovation outside the submitted work. M. Hidalgo reports personal fees and other support from BMS, Agenus, Khar, InxMed, and OMTX; personal fees from Gemchen; and other support from Nelum and Champions outside the submitted work. P.M. LoRusso reports personal fees from AbbVie, Agios, Five Prime, GenMab, Halozyme, Genentech, CytomX, Takeda, SOTIO, Cybrexa, Agenus, Tyme, IQVIA, TRIGR, Pfizer, ImmunoMet, Black Diamond, GlaxoSmithKline, QED Therapeutics, AstraZeneca, EMD Serono, Shattuck, Astellas, Salarius, Silverback, MacroGenics, Kyowa Kirin, Kineta, Zentalis, Molecular Templates, ABL Bio, SK Life Sciences, STCube, Bayer, and I-Mab, as well as other support from Roche-Genentech outside the submitted work. N.J. Meropol reports personal fees from Flatiron Health, an independent subsidiary of the Roche group, and other support from Roche outside the submitted work. J.D. Patel reports personal fees from AstraZeneca and Takeda outside the submitted work. D.A. Post reports grants from National Cancer Institute of the National Institutes of Health during the conduct of the study. M.M. Regan reports other support from Ipsen, grants from Bayer, personal fees from Tolmar Pharmaceuticals and WebMD, and grants, personal fees, and non-financial support from Bristol Myers Squibb outside the submitted work; as Director of IBCSG Statistical and Data Management Center, Dr. Regan oversees clinical trials funded by (and/or provision of drug supply) and/or translational research funded by Novartis, Pfizer, Ipsen, TerSera, Merck, Pierre Fabre, Roche, AstraZeneca, and Bristol Myers Squibb. J. Von Roenn reports other support from Conquer Cancer during the conduct of the study and other support from ASCO outside the submitted work. L.M. Weiner reports personal fees from Celldex Therapeutics, Cytomx Therapeutics, Jounce Therapeutics, Immunome, Tessa Therapeutics, and Samyang Biopharm USA; other support from Targeted Diagnostics & Therapeutics; and grants from Bioexcel Therapeutics outside the submitted work. No disclosures were reported by the other authors.

The authors would like to acknowledge and thank the course directors, faculty, and students of the workshop; the AACR; and the ASCO. They also want to thank Nina Cantafio of Triligent, for her expert help on this manuscript. This manuscript is dedicated to the memory of Merrill J. Egorin, MD, a great leader, faculty member, and fellow cancer warrior whom we lost to the disease, and to the memory of Charles A. Coltman, MD, a cofounder of the workshop and an outstanding dedicated mentor and leader in clinical trial design and patient care. This workshop has been partially supported by the NCI of the NIH under award number 5R25CA068647. The content is solely the responsibility of the authors and does not necessarily represent the official views of the NIH.

The costs of publication of this article were defrayed in part by the payment of page charges. This article must therefore be hereby marked advertisement in accordance with 18 U.S.C. Section 1734 solely to indicate this fact.

1. Wyngaarden JB. The clinical investigator as an endangered species. N Engl J Med 1979;301:1254–9.
2. Nathan DG. Clinical research: perceptions, reality, and proposed solutions. National Institutes of Health Director's Panel on Clinical Research. JAMA 1998;280:1427–31.
3. Sung NS, Crowley WF Jr, Genel M, Salber P, Sandy L, Sherwood LM, et al. Central challenges facing the national clinical research enterprise. JAMA 2003;289:1278–87.
4. Nathan DG, Wilson JD. Clinical research and the NIH–a report card. N Engl J Med 2003;349:1860–5.
5. Goldhamer ME, Cohen AP, Bates DW, Cook EF, Davis RB, Singer DE, et al. Protecting an endangered species: training physicians to conduct clinical research. Acad Med 2009;84:439–45.
6. Butler D. Translational research: crossing the valley of death. Nature 2008;453:840–2.
7. Roberts SF, Fischhoff MA, Sakowski SA, Feldman EL. Perspective: transforming science into medicine: how clinician-scientists can build bridges across research's "valley of death." Acad Med 2012;87:266–70.
8. Garrison HH, Descamps AM. NIH research funding and early career physician scientists: continuing challenges in the 21st century. FASEB J 2014;28:1049–58.
9. Ersek JL, Graff SL, Arena FP, Denduluri N, Kim ES. Critical aspects of a sustainable clinical research program in the community-based oncology practice. Am Soc Clin Oncol Educ Book 2019;39:176–84.
10. Flores G, Mendoza FS, DeBaun MR, Fuentes-Afflick E, Jones VF, Mendoza JA, et al. Keys to academic success for under-represented minority young investigators: recommendations from the Research in Academic Pediatrics Initiative on Diversity (RAPID) National Advisory Committee. Int J Equity Health 2019;18:93.
11. Pusztai L. Chips to bedside: incorporation of microarray data into clinical practice. Clin Cancer Res 2006;12:7209–14.
12. Wheeler HE, Gamazon ER, Wing C, Nhiaju UO, Njoku C, Baldwin RM, et al. Integration of cell line and clinical trial genome-wide analyses supports a polygenic architecture of paclitaxel-induced sensory peripheral neuropathy. Clin Cancer Res 2013;19:491–9.
13. Juweid ME, Cheson BD. Positron-emission tomography and assessment of cancer therapy. N Engl J Med 2006;354:496–507.
14. Lindner AU, Concannon CG, Boukes GJ, Cannon MD, Llambi F, Ryan D, et al. Systems analysis of BCL2 protein family interactions establishes a model to predict responses to chemotherapy. Cancer Res 2013;73:519–28.
15. Choi H. Critical issues in response evaluation on computed tomography: lessons from the gastrointestinal stromal tumor model. Curr Oncol Rep 2005;7:307–11.
16. Tuma RS. Sometimes size doesn't matter: reevaluating RECIST and tumor response rate endpoints. J Natl Cancer Inst 2006;98:1272–4.
17. Kelloff GJ, Hoffman JM, Johnson B, Scher HI, Siegel BA, Cheng EY, et al. Progress and promise of FDG-PET imaging for cancer patient management and oncologic drug development. Clin Cancer Res 2005;11:2785–808.
18. Kelloff GJ, Lippman SM, Dannenberg AJ, Sigman CC, Pearce HL, Reid BJ, et al.; AACR Task Force on Cancer Prevention. Progress in chemoprevention drug development: the promise of molecular biomarkers for prevention of intraepithelial neoplasia and cancer–a plan to move forward. Clin Cancer Res 2006;12:3661–97.
19. European Commission. European Medicines Agency Conference on the Operations of Clinical Trials Directive (Directive 2001/20/EC) and Perspectives for the Future. London, UK; 2006. p. 1–76.
20. Johnson JR, Ning YM, Farrell A, Justice R, Keegan P, Pazdur R. Accelerated approval of oncology products: the Food and Drug Administration experience. J Natl Cancer Inst 2011;103:636–44.
21. Johnson JR, Williams G, Pazdur R. End points and United States Food and Drug Administration approval of oncology drugs. J Clin Oncol 2003;21:1404–11.
22. George GC, Barata PC, Campbell A, Chen A, Cortes JE, Hyman DM, et al. Improving attribution of adverse events in oncology clinical trials. Cancer Treat Rev 2019;76:33–40.
23. Feehan AK, Garcia-Diaz J. Investigator responsibilities in clinical research. Ochsner J 2020;20:44–9.
24. Sherman RE, Li J, Shapley S, Robb M, Woodcock J. Expediting drug development: the FDA's new breakthrough therapy designation. N Engl J Med 2013;369:1877–80.
25. Woodcock J. Drug development in serious diseases: the new "Breakthrough Therapy" designation. Clin Pharmacol Ther 2014;95:483–5.
26. Yao JC, Meric-Bernstam F, Lee JJ, Eckhardt SG. Accelerated approval and breakthrough therapy designation: oncology drug development on speed? Clin Cancer Res 2013;19:4305–8.
27. Shader RI. Reflections on 2013, the beginning of 2014, and the Food and Drug Administration's new expedited approval program known as breakthrough product designation. Clin Ther 2013;35:1865–6.
28. Chabner B. Phase II cancer trials: out of control? Clin Cancer Res 2007;13:2307–8.
29. Michaelis LC, Ratain MJ. Phase II trials published in 2002: a cross-specialty comparison showing significant design differences between oncology trials and other medical specialties. Clin Cancer Res 2007;13:2400–5.
30. Simon R, Maitournam A. Evaluating the efficiency of targeted designs for randomized clinical trials. Clin Cancer Res 2004;10:6759–63.
31. Ratain MJ. Phase II oncology trials: let's be positive. Clin Cancer Res 2005;11:5661–2.
32. Redman M, Crowley J. Small randomized trials. J Thorac Oncol 2007;2:1–2.
33. Slater EE. Today's FDA. N Engl J Med 2005;352:293–7.
34. Stadler WM. The randomized discontinuation trial: a phase II design to assess growth-inhibitory agents. Mol Cancer Ther 2007;6:1180–5.
35. Bedard PL, Krzyzanowska MK, Pintilie M, Tannock IF. Statistical power of negative randomized controlled trials presented at American Society for Clinical Oncology annual meetings. J Clin Oncol 2007;25:3482–7.
36. Fischer F, Helmer S, Rogge A, Arraras JI, Buchholz A, Hannawa A, et al. Outcomes and outcome measures used in evaluation of communication training in oncology: a systematic literature review, an expert workshop, and recommendations for future research. BMC Cancer 2019;19:808.
37. Cartmell KB, Bonilha HS, Simpson KN, Ford ME, Bryant DC, Alberg AJ. Patient barriers to cancer clinical trial participation and navigator activities to assist. Adv Cancer Res 2020;146:139–66.
38. Berry DA, Herbst RS, Rubin EH. Reports from the 2010 clinical and translational cancer research think tank meeting: design strategies for personalized therapy trials. Clin Cancer Res 2012;18:638.
39. Rahib L, Fleshman JM, Matrisian LM, Berlin JD. Evaluation of pancreatic cancer clinical trials and benchmarks for clinically meaningful future trials: a systematic review. JAMA Oncol 2016;2:1209–16.
40. IOM (Institute of Medicine). A National Cancer Clinical Trials System for the 21st Century: Reinvigorating the NCI Cooperative Group Program. Washington, DC: The National Academies Press; 2010.
41. Sargent D. What constitutes reasonable evidence of efficacy and effectiveness to guide oncology treatment decisions? Oncologist 2010;15:19–23.
42. Booth B, Glassman R, Ma P. Oncology's trial. Nat Rev Drug Discov 2003;2:609–10.
43. Woodcock J. Assessing the clinical utility of diagnostics used in drug therapy. Clin Pharmacol Ther 2010;88:765–73.
44. Neidle S. Cancer drug design and discovery. In: Neidle S, editor. Chapter 18. Academic Press; 2008. p. 424.
45. Berry DA. Adaptive clinical trials in oncology. Nat Rev Clin Oncol 2011;9:199–207.
46. Harrison RK. Phase II and phase III failures: 2013–2015. Nat Rev Drug Discov 2016;15:817–8.
47. Walker I, Newell H. Do molecularly targeted agents in oncology have reduced attrition rates? Nat Rev Drug Discov 2009;8:15–6.
48. Riechelmann RP, Townsley CA, Pond GR, Siu LL. The influence of mentorship on research activity in oncology. Am J Clin Oncol 2007;30:549–55.
49. Chopra V, Edelson DP, Saint S. A piece of my mind. Mentorship malpractice. JAMA 2016;315:1453–4.
50. Dickler HB, Fang D, Heinig SJ, Johnson E, Korn D. New physician-investigators receiving National Institutes of Health research project grants: a historical perspective on the "endangered species." JAMA 2007;297:2496–501.
51. Hortobagyi GN; American Society of Clinical Oncology. A shortage of oncologists? The American Society of Clinical Oncology workforce study. J Clin Oncol 2007;25:1468–9.
52. National Research Council (US) and Institute of Medicine (US) Committee. Opportunities to Address Clinical Research Workforce Diversity Needs for 2010. Hahm J, Ommaya A, editors. Washington (DC): National Academies Press (US); 2006. Available from: http://www.ncbi.nlm.nih.gov/books/NBK20276/
53. Bell S. Flims: building the next generation of clinical researchers. Cancer World 2004:42–5.
54. Palmieri C, Wanaski S, Panse J, Medeiro B. The future of clinical cancer research: who's teaching the next generation? The Flims–Vail model. Eur J Cancer 2004;40:173–5.
55. Sledge GW. Musings of a cancer doctor: Chronic. Oncol Times 2012;34:10–11.
56. Mason JL, Lei M, Faupel-Badger JM, Ginsburg EP, Seger YR, DiJoseph L, et al. Outcome evaluation of the National Cancer Institute career development awards program. J Cancer Educ 2013;28:9–17.
57. Fleming M, Burnham EL, Haskins WC. Mentoring translational investigators. JAMA 2012;308:1981–5.