In 2001, in response to identified barriers to the efficient development of cancer therapeutics, one of us called for a comprehensive reappraisal of the national program, including the policies of the Food and Drug Administration (FDA; ref. 1). Concern was expressed that the systems for cancer therapy development had remained basically unchanged for three decades. Despite the growing scientific opportunities and the significant investment being made by all sectors, the output of new cancer therapies reaching the public in the form of FDA-approved products was insufficient to meet the needs of patients. The message was largely met with deafening silence. This was not unexpected, because one of the issues raised in the review was the basic state of complacency within the scientific community. In a follow-up article, we evaluated the productivity of the National Cancer Program using as metrics the numbers and indications of new cancer therapeutics reaching the public (2). We once again concluded that there was significant lost opportunity; for the multibillion-dollar public-private investment in the National Cancer Program to fulfill its mandate to reduce morbidity and mortality, a bold commitment to revisit the entire menu of policies that affect the timely delivery of scientific advances would be required. There are now indications that the message has registered with the FDA and that changes, such as the Critical Path initiative, are being implemented to provide a more responsive regulatory environment (3, 4); other sectors, however, have remained largely silent. In this brief review, we restate the problem and offer several recommendations for improving the model.
By any objective assessment, there can be little doubt that cancer is one of the most important issues facing our society. In 2005, the incidence in the United States was >1.3 million new cases; ∼50% of American men and one third of American women are afflicted at some time in their lives. There were an estimated 600,000 deaths due to cancer, accounting for approximately one fourth of all deaths in the United States and making cancer the leading cause of death for all individuals under the age of 75 years (5). The costs of cancer measured in purely economic terms (i.e., direct medical expenses and lost productivity) are increasing at an exponential rate and are currently estimated to be in excess of US$190 billion (5).
Have the cancer research community and the responsible governmental agencies provided an adequate and comprehensive response commensurate with the threat? In 1971, the National Cancer Act established the National Cancer Program. This focused the effort within programs of the National Cancer Institute, with a direct reporting relationship to the President of the United States, combined with the expected contributions from the private sector. Over the succeeding 35 years, we have witnessed a vastly improved understanding of the biology of the disease. However, after the initial and dramatic successes in the management of specific diseases, such as acute lymphocytic leukemia in children, Hodgkin's disease, non-Hodgkin's lymphoma, Wilms' tumor, and testicular cancer, there has been only a modest, incremental improvement in the overall percentage of diagnosed patients who survive their disease as a result of effective treatment. The incidence of all cancers continues to grow, whereas the budget proposed by Congress for the support of the National Cancer Institute will be effectively decreased in 2006 relative to the previous year (6). Have our government and society become inured to the importance of cancer as a national problem and allowed it to be submerged by other problems competing for budget and prioritization? Whatever the explanation, there remains an expectation that the efforts of the National Cancer Program and the vast resources made available from public and private sources will move the growing list of scientific advances through the narrow funnel of FDA review and approval, and that a large number of highly effective and safer medicines will be delivered in the near term. That goal may not be realistic with the present systems.
We have long since learned that “cancer” is a term that encompasses >100, perhaps hundreds of, different diseases that can arise from virtually any tissue or organ in the body and that, while sharing the common properties of local invasion and distant spread, may differ in causative factors, natural history, methods of diagnosis, and, important to this discussion, the ways in which they are treated. The advances in our understanding of the intricate mechanisms by which a normal cell is transformed into one we recognize as “cancer,” and of the complementary systems of control, present a much more complex set of challenges for the diagnostic and therapeutic disciplines than originally appreciated. The direction of modern developmental therapeutics requires that a new treatment address not only a disease defined by the histology and anatomic site from which it arose (breast, lung, lymph node, etc.) but also the specific molecular, genetic, or immunologic subtype. In parallel development paths, clinically predictive molecular markers must be developed and validated so that they can be employed for the prospective selection of patients whose tumors meet the biological profile required for a highly specific treatment. Indeed, some have come to believe that it may not be possible in the future to develop a new drug unless one can precisely and prospectively select eligible patients using marker technology. As a concept, this is not entirely new. In the case of breast cancer, we have recognized the importance of such markers and, for several decades, employed the estrogen receptor assay for this purpose; more recently, we have included testing for HER2 expression for assessing prognosis and selecting appropriate treatment. The expansion of our knowledge of cancer biology has created options for many other tumors, and there is now the possibility that we can achieve the long sought-after goal of selective and specific treatment.
However, there are consequences that come with tumor subtyping and highly targeted treatment. Older classes of drugs, such as alkylating agents and certain antimetabolites, are less discriminating in their attack and often act against a relatively broad spectrum of tumor types that might be effectively treated. It is anticipated that targeted therapies, in contrast, will have their application increasingly restricted to subsets of patients whose tumors display the required expression of a biological or genetic abnormality. It is logical to assume that a much larger number of therapeutic options will be required to account for the heterogeneity of molecular expression now being defined by a new lexicon of scientific disciplines.
Optimism for targeted therapy, which now runs deep in the field of developmental therapeutics, is largely driven by the success of imatinib (Gleevec), a therapy that truly revolutionized the management of chronic myelogenous leukemia (7). Chronic myelogenous leukemia remained recalcitrant to our best efforts to improve treatment for several decades, and the very high and reasonably durable responses represent a dramatic change. But is Gleevec a uniquely successful application of the principles of rational drug design, or something that can be applied with equivalent success more broadly throughout the field of oncology? We suspect the former. Chronic myelogenous leukemia is a disease in which a dominant molecular abnormality operates in the majority of cases, one that creates an unregulated ABL tyrosine kinase that can be inhibited. The total effort to understand the disease and create an effective treatment spans a period of ∼40 years, beginning with the discovery of the Philadelphia chromosome in 1960 (8). For most other tumors, particularly solid tumors, we are dealing with a panoply of mutations. It is likely in this setting that a multipronged approach, involving several concomitantly administered agents, will continue to be required to effect clinically important and lasting control.
Thus, whereas rational design of new cancer therapeutics, as opposed to the “enlightened empiricism” of the past, is becoming a reality, there is an obvious paradox to this “success”: more, not fewer, drugs will be required. What are the prospects that the field can respond to this significant and growing challenge?
Drug development is a long-term, high-risk, and very expensive enterprise, with a well-recognized high rate of attrition. It is estimated that for every 5,000 drug candidates entering development, only five will enter clinical testing (9). As cited in the FDA's Innovation or Stagnation document published in 2004, a new agent entering phase 1 testing, often after a decade or more of preclinical evaluation, is estimated to have only an 8% chance of reaching the public in the form of a marketed product (3). The cost of bringing a new medicine to market is estimated to range from US $800 million to as much as US $1.7 billion (9, 10). This is clearly not a vocation for the weak, timid, or poorly capitalized.
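To make the cumulative attrition concrete, the back-of-the-envelope calculation below chains the two figures quoted above (5 of 5,000 candidates reaching clinical testing; an 8% chance of approval once in the clinic). It is a sketch under those assumptions only; stage definitions vary considerably across sources.

```python
# Back-of-the-envelope attrition arithmetic using the figures cited above.
# Stage definitions differ between sources, so this is illustrative only.

p_preclinical_to_clinic = 5 / 5000   # ~5 of every 5,000 development candidates enter clinical testing
p_clinic_to_market = 0.08            # ~8% of agents entering phase 1 reach the market (ref. 3)

p_candidate_to_market = p_preclinical_to_clinic * p_clinic_to_market
print(f"Chance that an early candidate ever reaches the market: {p_candidate_to_market:.5f} "
      f"(about 1 in {1 / p_candidate_to_market:,.0f})")
# Under these assumptions, roughly 1 in 12,500 early candidates becomes a marketed product.
```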
The escalating R&D expenses eventually find their way to the public in the form of very expensive medicines. As one example, the cost of a 2-month course of drug combinations for a patient with advanced colorectal cancer has been estimated to have increased, in just a few years, from roughly US$60 to approximately US$20,000 to US$30,000 (11). The serious incremental cost of adding bevacizumab (Avastin) to standard regimens of chemotherapy, for months (not years) of improved survival, recently reached the editorial pages of the New York Times (12). This dramatic and seemingly unsustainable trend may create barriers to the adoption of advances in biomedical research by formularies, including Medicare. Moreover, unacceptable disparities may develop in the availability of such treatments to specific sectors of our society, with attendant ethical concerns. This reinforces the need to review the effectiveness of our current systems and policies for developing anticancer agents and to make revisions where necessary, with the aim of reducing R&D costs and, with them, the justification for the high prices of modern therapeutics.
What is the rate of success of our national efforts to create new cancer treatments? In any given year, there are ∼400 new anticancer agents in development. Of these, only a handful make it through the labyrinth of development and reach the public in the form of an approved product each year. In 2003, the Oncology Division of the FDA published a review of approved cancer therapies for the preceding 13-year period (13). We examined these data to determine the rate at which new chemical entities, as distinguished from supplemental indications for agents already approved, achieved their first marketing authorization (2). The FDA authors listed 38 new chemical entities that received marketing authorization during this time frame, of which four represented older drugs presented in new delivery forms, such as liposomes. Of the 38, five were not true anticancer agents but therapies that reduce the symptoms of cancer or the toxicities associated with other therapies, in themselves worthwhile goals. The bottom line is that an average of approximately three new chemical entities was approved per year, of which only ∼2.5 were agents that actually treat cancer (2). Second-line indications predominated in the initial approval of a new chemical entity. Many of the marketing authorizations were for diseases that are truly orphan indications, such as Kaposi's sarcoma, hairy cell leukemia, and cutaneous T-cell lymphoma. For important forms of cancer, such as adenocarcinomas of the stomach and pancreas or cervical and head and neck cancers, only one new therapy, or none at all, was listed. Although data were not provided as to how many new drugs were abandoned, the generally accepted figure is that only 1 in 20 cancer investigational new drug applications survives to become an approved new drug application.
Efficient therapeutic development is affected by a long list of potential barriers. Past preclinical models for predicting the potential efficacy of a new therapeutic have proved inadequate. Rational design of new drugs based on the identification of appropriate molecular targets shows considerable promise, but each new technical advance must be validated in clinical testing, a suitable definition and goal of translational research. The percentage of eligible adult cancer patients in the United States who participate in clinical trials is estimated to be only 2% to 3%, and increasing competition for this limited resource has required drug sponsors to seek patients from other regions of the world to meet accrual requirements. Bad development, whether it comes in the form of poor study design or faulty execution of a protocol, is unfortunately all too common. However, in our judgment, much of the limited productivity can be attributed to two major factors: the continued use of old and essentially unmodified systems of development, and FDA policy.
In November of 2002, a new FDA Commissioner, Dr. Mark McClellan, was appointed. As a priority, he established a line of communication and collaboration with the newly appointed Director of the National Cancer Institute, Dr. Andrew von Eschenbach. A formal Interagency Agreement, announced on May 30, 2003, represented a true milestone: the FDA aligned itself with the goals of the National Cancer Program (14). Under Dr. McClellan's leadership, the Agency acknowledged the issue of a low rate of productivity, as measured by drug approvals, and in response formulated a set of new policies that are articulated in the document “Innovation or Stagnation” (3). Although his tenure was relatively short, as he moved to head the Centers for Medicare and Medicaid Services in 2004, his leadership may have translated into a more active approval process. In 2003, three new chemical entities for anticancer therapy were approved: gefitinib (Iressa), bortezomib (Velcade), and tositumomab (Bexxar). In 2004, six new chemical entities were approved: pemetrexed (Alimta), cetuximab (Erbitux), bevacizumab (Avastin), azacitidine (Vidaza), erlotinib (Tarceva), and clofarabine (Clolar). In 2005, the total output of new chemical entities was four: a new suspension formulation of paclitaxel (Abraxane), nelarabine (Arranon), lenalidomide (Revlimid), and sorafenib (Nexavar).
Does this represent progress? When viewed in the context of need (hundreds of different diseases, multiple subtypes, and increasingly targeted therapy), the reality is that the current rate of delivery of new anticancer agents to the public remains disappointingly low. This may have unforeseen consequences. A seemingly limited degree of productivity, measured in the form of therapies that alter national statistics, may produce a less than enthusiastic response to requests for a significantly increased public investment in cancer research. In essence, the interests of the cancer patient and scientific research communities are aligned; all share a common goal in seeing that the National Cancer Program (to the extent that it is dependent upon the delivery of new therapeutics) is successful.
What recommendations can be offered to assist in forming a response to the challenge? In past reviews of the subject, we have suggested a comprehensive review of all systems and policies to determine whether they are still valid, followed by the discarding of those that can no longer be supported (1, 2). For the purposes of this discussion, and for brevity, we will focus on a few suggestions that might, if implemented, begin to improve the efficiency of the current systems of development and regulatory approval.
Move More Promptly to Human Testing
Despite the elegance of the science, the only data that count for the cancer patient are those derived from the relevant species (humans), and we need to focus our attention on how we can obtain these data efficiently and, of course, ethically. The first requirement is to reduce our reliance on animal data both for the selection of drugs for therapeutic development and for predicting the range of safety concerns that might be encountered in clinical testing. Forty years of experience in screening for new anticancer agents has been humbling, whether with in vitro human tumor stem cell assays, in which cells representing specific forms of cancer are used to assess the growth-inhibitory or cell-killing properties of a new therapeutic, or with in vivo xenografts. Although such data provide crude estimates of cytotoxicity, none have been retrospectively or prospectively validated for their ability to predict efficacy in humans, much less to identify activity against specific forms of cancer. Selection based on molecular mechanisms of tumor inhibition and rational drug design represents the current paradigm, and there is growing evidence that this approach has promise (15–18).
In the case of safety testing, the work of Freireich et al. showed that a range of animal species provide useful data for estimating a safe starting dose for phase I or early translational studies (19). However, for the estimation of organ-specific toxicity, analyses have shown that animal testing is of very limited value (20, 21). There is gross overprediction of some forms of toxicity, a factor that might prevent potentially useful therapeutics from continuing through development. And there are false negatives for some of the most common adverse reactions associated with anticancer therapies; for example, monkeys are resistant to the emetogenic and thrombocytopenic properties of conventional anticancer agents, and some of the most important and usual toxicities are missed, as was the case with anthracycline cardiac toxicity in the original dog and monkey toxicology. The FDA, in its Innovation or Stagnation document, seems to agree in principle that “most tools for toxicology and human safety testing are decades old,” and traditional animal toxicology is described as laborious, time-consuming, requiring large amounts of product, and failing to predict human toxicity (3). Although toxicology requirements for anticancer agents were somewhat truncated while one of us served as Chairman of the FDA's Oncologic Drug Advisory Committee, pharmaceutical companies continue to generate large volumes of animal toxicology information, a process requiring substantial budgets and time. The reality is that (a) no one can fully interpret this information; (b) it is not viewed as sufficiently important to be included in FDA-approved prescribing information; and (c) it is almost never referred to by physicians. The bottom line is that the intact human subject remains the only fully validated model of safety and efficacy. Can we move more quickly from the laboratory to “proof-of-concept” studies? There are lessons to be learned from the Cancer Research Campaign program, now part of Cancer Research UK. In 1980, the Cancer Research Campaign prepared toxicology guidelines that relied on studies conducted in two rodent species for phase I testing of a new agent, essentially to estimate a safe starting dose. Over a 10-year period, 50 new anticancer therapies were safely introduced into clinical testing using this system, which led to revisions in the guidelines for preclinical toxicology (22).
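As an illustration of the kind of quantitative use to which such animal data are put, the sketch below applies one widely cited convention for setting a phase I starting dose: one tenth of the murine LD10 after conversion from mg/kg to mg/m² using a standard body-surface-area factor. The dose value is hypothetical, and the convention shown is a common rule of thumb rather than a method prescribed in the text.

```python
# Illustrative phase I starting-dose estimate: 1/10 of the mouse LD10,
# expressed per body surface area. The LD10 value below is hypothetical;
# the km factor is the commonly used mouse mg/kg -> mg/m^2 conversion factor.

mouse_ld10_mg_per_kg = 50.0       # hypothetical mouse LD10 for a new agent
KM_MOUSE = 3.0                    # mouse conversion factor (mg/kg x km = mg/m^2)

mouse_ld10_mg_per_m2 = mouse_ld10_mg_per_kg * KM_MOUSE
starting_dose_mg_per_m2 = mouse_ld10_mg_per_m2 / 10.0    # one tenth of the murine LD10

print(f"Mouse LD10: {mouse_ld10_mg_per_m2:.0f} mg/m^2")
print(f"Suggested phase I starting dose: {starting_dose_mg_per_m2:.0f} mg/m^2")
```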
To begin to address this issue of making more accurate decisions, the FDA and the National Cancer Institute have recently introduced the Exploratory Investigational New Drug Application, a mechanism that is available for obtaining human data for the purpose of selecting among drug candidates for full development (23). The rationale is based on the recognition that <10% of investigational new drug applications for new molecular entities progress beyond the investigational stage, and that there is a need to identify products that are unlikely to succeed early in the process and to redirect resources. Importantly, the FDA believes that existing regulations allow a great deal of flexibility in the amount of preclinical data needed to undertake this type of “proof-of-concept” clinical investigation. An exploratory investigational new drug application study can use subtoxic doses for a limited period (<7 days), with the goal of achieving a pharmacologic effect rather than the traditional maximum tolerated dose. Examples might include pharmacokinetic/pharmacodynamic studies that correlate concentration × time data with tumor marker inhibition, or functional imaging studies, such as positron emission tomography scans, in which “proof-of-principle” data can be derived from changes in 2-fluoro-2-deoxy-d-glucose uptake after a limited exposure to a new therapy. Will drug sponsors implement this available mechanism? Old traditions die slowly, but the potential cost savings derived from more rational “go/no go” decision-making, coupled with the selection of patients with susceptible tumors based on molecular markers, may lead to its adoption.
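To make the pharmacokinetic/pharmacodynamic example concrete, the sketch below computes exposure (concentration × time, i.e., the area under the concentration-time curve by the trapezoidal rule) for a few hypothetical subjects and correlates it with a hypothetical tumor-marker inhibition readout. Every number is invented for illustration; only the general analysis pattern is intended.

```python
# Sketch of an exploratory PK/PD analysis: correlate drug exposure (AUC,
# i.e., concentration x time) with a pharmacodynamic readout such as
# percent inhibition of a tumor marker. All values are hypothetical.
import numpy as np
from scipy.stats import spearmanr

time_h = np.array([0, 1, 2, 4, 8, 24])          # sampling times (hours)

# Plasma concentration-time profiles (ng/mL) for four hypothetical subjects.
conc = np.array([
    [0, 120,  90, 60, 30,  5],
    [0, 200, 150, 95, 40, 10],
    [0,  80,  60, 40, 18,  3],
    [0, 160, 120, 80, 35,  8],
])

# Observed tumor-marker inhibition (%) for the same subjects (hypothetical).
marker_inhibition = np.array([35, 62, 22, 55])

# Exposure for each subject: trapezoidal-rule AUC over the sampling window.
auc = np.trapz(conc, time_h, axis=1)

rho, p_value = spearmanr(auc, marker_inhibition)
print("AUC (ng*h/mL):", np.round(auc, 1))
print(f"Spearman rho between exposure and marker inhibition: {rho:.2f} (P = {p_value:.2f})")
```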
More Liberal Application of the Accelerated Approval Mechanism
If our goal is to increase the yield of new anticancer agents for each disease and subtype, there must be commensurate regulatory policies that serve to facilitate this expectation. Accelerated Approval represents an important mechanism to achieve this objective, and we recommend that its current application be expanded. As currently defined, drugs approved under the Accelerated Approval mechanism must provide a benefit over available therapies. “Available therapy” was defined by the FDA in June 2003 as drugs that had received full, unrestricted (regular) approval for the specific indication (24). To achieve approval in this setting would require randomized controlled trials, a route to the market that is not appreciably more rapid than regular approval. Sponsors have frequently reacted by opting to evaluate their therapies in refractory tumors where, by definition, no available therapy exists. In this setting, single-arm trials using surrogate end points, including tumor response rates, can serve as the basis for approval (24). The drug is typically tested as second-, third-, or, in the current situation with colorectal cancer, potentially fourth- or fifth-line therapy. Predictably, most drugs will show, at best, modest activity in this setting. Moreover, the resulting data may not predict how the same agents will perform in more favorable clinical circumstances. Oxaliplatin (Eloxatin), trastuzumab (Herceptin), and bevacizumab (Avastin) are recent examples in which single-agent data were at the threshold of fulfilling any criteria for useful therapeutic activity. The same agents, however, have contributed to significantly improved survival outcomes when tested as components of first-line or adjuvant therapy. This presents an important challenge both to drug sponsors, who must decide whether to continue to commit development resources (time and money), and to regulatory agencies, which sit in judgment as to whether to release a new agent to the public based on refractory-disease data. Overall, there is a risk that potentially useful therapies will be abandoned prematurely.
We believe that the benefits derived from the broad application of the accelerated approval mechanism for cancer therapies with promising activity, based on early efficacy data, outweigh the potential risks. This is especially the case in settings where therapeutic options are limited and/or long-term disease-free survival is not a realistic goal with available treatments. In this situation, we propose that the requirement for showing superiority to “available therapies” be waived. The reality is that in the management of most advanced solid tumors, drugs that qualify for regular approval for an indication rarely fulfill the ultimate goal of increasing cure rates. We must, therefore, set reasonable expectations. “Blockbusters” are welcome, but most advances are incremental, and their use in the subsequent development of effective combinations is the rule rather than the exception. History has shown that we need a large number of agents with which to build effective combinations and adjuvant therapies. This is a process that takes many years, perhaps a decade of additional study after an initial approval, and it cannot be effectively initiated unless the building blocks are available in the form of marketed products. Additionally, safeguards are in place: under the accelerated mechanism, the FDA has the authority to remove a drug from the market if clinical benefit is not shown in post-marketing trials. Lastly, the market is efficient: oncologists, as a group, are very well networked and receive new information rapidly from multiple reliable sources. Less effective or poorly tolerated drugs are discarded when better alternatives become available.
Approval Criteria Should Be Defined by a Favorable Benefit-to-Risk Assessment
The FDA issued a draft of an important guidance document on Clinical Trial Endpoints for the Approval of Cancer Drugs and Biologics in April of 2005 (25). It reflects the current thinking of the agency, complemented by discussions at public meetings of experts and the Oncologic Drug Advisory Committee. The document identifies and discusses the suitability of end points used to show clinical benefit (e.g., survival and improvement in tumor-related symptoms) and of others that could serve as surrogates, such as tumor response and progression-free or disease-free survival. Typically, these surrogates have qualified as a basis for approval only under the Accelerated Approval mechanism.
Although we respect the serious effort and intent of those who participated in the creation of the Endpoints document, we remain frustrated by the rate of progress and, by extension, believe that there is a responsibility to continuously assess the evolving dogma to see if we can do better. In essence, are the newly proposed guidelines “evidence based,” to use the current jargon in medical research, or an idealized system with unrealistic expectations relative to the life-threatening nature of the disease and the willingness of dying patients to accept risks?
We raise this issue in the context of the criteria for approval used from the 1950s through the early 1980s, when tumor response and its duration were sufficient to make judgments. Some of our most important anticancer agents gained marketing authorization during that period, such as cyclophosphamide, methotrexate, 5-fluorouracil, doxorubicin, cisplatin, tamoxifen, and cytosine arabinoside, to name only a few. These older agents remain the mainstay of systemic management in 2006, and in some circumstances they have been shown to possess remarkable degrees of efficacy and even to produce cures. These achievements, of course, took years (decades in some instances) of additional work following the initial approval, and in some instances we are still learning how to use them most effectively and safely. New indications, doses, schedules, and modes of delivery evolved and were reported in journal articles and presentations, whereas formal FDA product labeling could not keep up with the dynamic changes in clinical use. The important factor was that the drugs were available; without these building blocks, there would have been no prospect for progress.
In the mid-1980s, the FDA changed its policies to require demonstration of survival benefit, typically in two randomized controlled trials, based on statistical criteria that sometimes prevailed over clinical judgment, all aided and abetted by successive, conservative-minded Oncologic Drug Advisory Committees. A difference is considered to be “statistically significant,” and hence the therapy “effective,” only if the overall P (α level or type I error) is ≤0.05. If there are multiple end points or interim analyses, the α level for each individual comparison must be adjusted, resulting in the need for even smaller P values to define a “statistically significant” difference. This has resulted in the need for very large trials with long periods of follow-up. Moreover, it should be noted that the sample sizes needed to assure sufficient power to detect differences as statistically significant are only as accurate as the historical data from which they are estimated. Accrual for all but the most prevalent tumor types is often difficult to achieve, and with more targeted therapies, the patient pool may be further limited. An overall type I error of 5% is an arbitrary cutoff that goes back to the work of Sir Ronald Fisher in the 1920s (26). Perhaps it is time to reexamine the entrenched criteria that have dominated the process of cancer drug review: cancer patients who are dying of their disease and have few available therapeutic options are likely to accept a >5% chance that a well-considered regulatory judgment might be wrong.
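To illustrate how adjusting the α level drives up trial size, the sketch below uses the standard normal-approximation formula for comparing two response proportions at 80% power, first at the conventional two-sided α of 0.05 and then at a Bonferroni-adjusted α of 0.0125 (as might apply with four comparisons). The assumed 30% versus 40% response rates are hypothetical and chosen only to show the effect.

```python
# Per-arm sample size for comparing two response proportions
# (normal-approximation formula), showing how a Bonferroni-adjusted alpha
# inflates trial size. The 30% vs. 40% response rates are hypothetical.
from math import ceil
from scipy.stats import norm

def n_per_arm(p1, p2, alpha, power=0.80):
    """Per-arm sample size for a two-sided test of two proportions."""
    z_alpha = norm.ppf(1 - alpha / 2)
    z_beta = norm.ppf(power)
    variance = p1 * (1 - p1) + p2 * (1 - p2)
    return ceil((z_alpha + z_beta) ** 2 * variance / (p1 - p2) ** 2)

p_control, p_experimental = 0.30, 0.40

for alpha in (0.05, 0.05 / 4):   # unadjusted vs. Bonferroni-adjusted for four comparisons
    print(f"alpha = {alpha:.4f}: {n_per_arm(p_control, p_experimental, alpha)} patients per arm")
# Under these assumptions: ~354 patients per arm at alpha = 0.05,
# rising to ~502 per arm at the adjusted alpha of 0.0125.
```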
Did the tightening of regulatory policies in the 1980s result in better drugs making their way to the public? This is a question worthy of analysis and debate, but we submit that it would be difficult to show a difference commensurate with the increase in data requirements. What is indisputable is that during the 1980s, 1990s, and early 2000s, (a) relatively few new chemical entities made it through the process, and (b) the process can be characterized as long, risk-laden, and expensive. With a mindset of “protecting the American public” by imposing potentially overly rigorous and perhaps unrealistic criteria of review, there is a risk that the converse may be achieved in the form of delayed or denied access to useful treatments. In essence, there is the potential for abrogation of a “patient right”: timely access to a therapeutic advance for a lethal disease.
In summary, we have entered into a time of increased promise for developmental therapeutics, and there are expectations that the new science will translate into a large number of more effective and safer new therapeutic candidates for development. We may also be in a position to more closely align new treatments with the molecular profile of individual tumors in the hope of increasing specificity. However, there is an urgent need to adjust our policies and practices to better exploit the opportunities arising out of the advances in cancer biology. A scientific discovery, in this context, is not meaningful if it does not make it to the public in a timely manner in the form of an approved product that contributes to the reduction in suffering and death. Currently, the field is presented with a unique set of circumstances at the FDA. The Director of the National Cancer Institute, Dr. von Eschenbach, serves as the Acting Commissioner, and a new Office of Oncology Products has been formed under the leadership of Dr. Richard Pazdur. Let us hope that they, in cooperation with the National Cancer Institute, the pharmaceutical/biotechnology industry, and the academic community, can make the critical decisions that will result in an effective response to the growing scientific opportunities and the urgent needs of cancer patients.