The use of ionizing radiation to treat cancer dates back to the late 19th century, remarkably soon after Roentgen described X-rays in 1895; brachytherapy followed shortly after Marie and Pierre Curie discovered radium in 1898. These initial efforts stimulated a revolution of conceptual and technological innovations throughout the 20th century, forming the basis of the safe and effective therapies used today. Perhaps the most important of these developments have been the paradigm of fractionated dose delivery, technologic advances in X-ray production and delivery, improvements in imaging and computer-based treatment planning, and evolving models that predict how cancers behave and how they should be approached therapeutically. The radiobiological discoveries over the past century have likewise been revolutionary. There is now a tremendous body of knowledge about cancer biology and how radiation affects human tissue at the cellular level. Bringing these laboratory discoveries and techniques to the clinic is the key challenge.
Radiotherapy has become a standard treatment option for a wide range of malignancies. U.S. Surveillance, Epidemiology, and End Results data show that radiation is commonly included in primary oncologic interventions. Between 1991 and 1996, for example, radiotherapy was used in the initial management of 32.9% of prostate cancers and 44.1% of lung cancers in the United States (1). When subsequent palliative interventions are also considered, more than half of cancer patients require radiotherapy at some point in their care.
The history of radiotherapy is rich with colorful and important characters, and it would be difficult to recognize all of those individuals. Juan del Regato (2) has written an outstanding series of profiles on radiological oncologists, which we highly recommend to the interested reader. In this article, we will review some of the pioneers and critical advances that led to our current understanding of radiation effects on human tissue and to the present state of radiotherapy. We hope to convince readers that much of this history remains relevant to the challenges we are likely to face in the future.
Radiotherapy at the End of the Nineteenth Century
On the evening of November 8, 1895, Wilhelm Conrad Roentgen was working alone in his laboratory in Wurzburg, Germany. While studying cathode rays that emanated from evacuated glass tubes, he observed a new kind of ray that could penetrate through black cardboard but not through lead or platinum. He secretly labored over his experiments for the next 7 weeks, even requiring that his bed be moved into the lab. One night, he used these rays to record the shadow of his wife's hand and rings on a photographic plate, creating the now famous Roentgen photograph (Fig. 1). His initial article summarizing this discovery of X-rays was submitted to the Wurzburg Physico-Medical Society on December 28, 1895, and the fields of radiology and radiation oncology were born (3).
The first therapeutic uses of X-rays in cancer quickly followed this initial discovery (3, 4). The very earliest X-ray treatments were for benign conditions such as eczema and lupus. Only 7 months after Roentgen's discovery, however, an 1896 issue of the Medical Record described a patient with gastric carcinoma who had benefited from radiotherapy delivered by Victor Despeignes in France. Émil Grubbé, a medical student in Chicago at the time, would later claim to have been the first to treat cancer patients with X-rays in 1896 (5). Thor Stenbeck and Tage Sjögren of Sweden reported successes in treating skin cancers by 1899. Palliation of painful tumors was reported as early as 1900 (6). The first deep-lying tumor to be eradicated by X-rays was probably a malignant sarcoma of the abdomen, treated over 1 1/2 years in New Haven by Clarence Skinner. Aside from famous cases such as these, however, most tumors around this time could not be cured without extensive normal tissue damage, given the low energies (and hence limited depths of penetration) of these early X-rays.
The field grew rapidly through the last years of the 19th century and into the first years of the 20th (3). Antoine-Henri Becquerel, a physics professor in Paris, was the first to recognize natural radioactivity while working with uranium salts. Shortly thereafter, Marie and Pierre Curie discovered radium and polonium; their stories have been nicely chronicled (7–9). The notion of using radioactive elements to treat cancer probably dates back to 1901, when Becquerel experienced a severe skin burn while accidentally carrying a tube of radium in his vest pocket for 14 continuous days. By 1902, radium had been used successfully to treat a pharyngeal carcinoma in Vienna. By 1904, patients in New York were undergoing implantation of radium tubes directly into tumors, representing some of the first interstitial brachytherapy treatments. The Standard Chemical Company began commercially marketing radium from Colorado mines by 1913, and this “wonder drug” subsequently found its way into many products and applications.
The hazards of radioactivity were considerably slower to gain widespread recognition and acceptance, however. The literature is filled with unfortunate examples of inadvertent radiation toxicities, including the famous martyrdom of Marie Curie and her daughter Irene, both of whom died from secondary hematopoietic disorders. It seems that the rapidly growing enthusiasm for radium ultimately led to its overpromise and overuse. This story serves to remind us of the large potential for disillusionment that accompanies overinflated expectations (10). This may be a particularly important historical chapter to reflect on as we celebrate some of the newer approaches, including the targeted therapies being developed today.
Advances in Perspective
Pioneering of fractionated radiotherapy. Within the first few decades of the 20th century, radiation was used to treat a variety of human malignancies including cancers of the breast and uterine cervix. These earliest treatments were generally delivered as single large exposures, by placing low-energy cathode ray tubes or radium-filled glass tubes in close proximity to tumors. Not surprisingly, tumors could only rarely be cured with these early approaches without incurring extensive normal tissue damage. These initial disappointments stimulated a revolution of conceptual and technological advances (Fig. 2).
A time-line of clinical, technologic, and biological advances in radiation oncology.
In 1911 Claudius Regaud, an intern from Lyon, showed that a ram's testes could be sterilized without causing major burns to the scrotal skin, if three irradiations were delivered 15 days apart (11, 12). This series of landmark experiments established the basis of what would become the principle of fractionation for external beam radiotherapy (XRT). This result also inspired Regaud's work using slow, continuous low-dose rate (LDR) radium treatments, during a time when the tendency was for shorter, intense treatments. Throughout the first decades of the 20th century, he and his contemporaries at the Radium Institute of Paris helped develop various radium-based treatment strategies that served as alternatives to surgical resections. Some of their innovations, including intracavitary devices designed to treat uterine cervix tumors, bear remarkable similarity to modern brachytherapy applicators still in use today (2).
Henri Coutard joined Regaud at the Radium Institute of Paris, where he operated a basement X-ray unit. In the 1920s, Coutard applied the concept of fractionated XRT with treatment courses protracted over several weeks. Using this strategy, he was able to cure patients with a variety of head and neck malignancies and to popularize this concept of fractionation in the international community (2, 13–15). His methodical nature and keen observational skills led him to customize treatment intensities based on the levels of radiotherapy-induced skin desquamation and oral mucositis. This occurred during a time when dosimetry was extremely crude and unreliable. Coutard was among the first to recognize that different cancer histologies and locations carried distinct probabilities for radiocurability. His technologic advances included many concepts taken for granted today, including custom immobilization of patients, beam hardening with metallic filters to achieve higher photon energies, and collimation/shaping of beams. Physicians from around the world trained at the Radium Institute during this period, and by the 1930s, interest began shifting away from radium therapy and toward XRT (2).
Although fractionation by this time was believed to be important, it remained poorly understood and far from optimized. For this reason, treatment schedules varied widely. Between 1934 and 1941, building on the work of Quimby (16), Magnus Strandqvist studied these time-dose relationships at the Radiumhemmet in Stockholm. He based his analyses on the degree of radiation dermatitis observed in patients undergoing XRT for skin cancers. He was able to show straight iso-effect lines when these clinical observations were plotted on double logarithmic graphs, and he determined that these effects were governed primarily by total dose and overall treatment time (17). Building from Strandqvist's work, Ellis later suggested the Nominal Standard Dose formula to compare different treatment schedules based on total dose, number of fractions, and overall treatment time (18). More modern efforts have aimed to model the relationship of time-dose factors for individual tumor types and normal tissues. An iso-effect formula of this nature was developed by Withers and coworkers, based on the linear (α) and quadratic (β) components of radiation-induced cell kill for different cell types (19).
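As a concrete, simplified illustration of how such linear-quadratic iso-effect comparisons work (expressed here as the commonly used biologically effective dose rather than the exact formulation of ref. 19), a schedule of n fractions of dose d per fraction can be summarized for a tissue with a given α/β ratio as

\[ \mathrm{BED} = n\,d\left(1 + \frac{d}{\alpha/\beta}\right). \]

Two schedules are predicted to produce approximately equal effects in that tissue when their BED values match. For a late-responding normal tissue with α/β ≈ 3 Gy, for example, 30 fractions of 2 Gy give a BED of 100 Gy, whereas 10 fractions of 5 Gy give roughly 133 Gy, a substantially more damaging schedule for late effects despite its lower total dose.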
In the modern era, typical XRT schedules consist of daily fractions delivered over weeks or months, with each fraction consisting of a relatively small dose (generally 1.2–3 Gy). These fractionated schedules seem to amplify a small survival advantage that normal tissue has over tumor tissue when irradiated with small exposures. George Pack explained the rationale for fractionation as “differential recuperation of normal and neoplastic (cancer) tissues” (20). The generally accepted modern model explaining this effect consists of four independent processes that are thought to occur between fractions and favor the survival of normal tissues over cancers: (a) repair of sublethal cellular damage, (b) redistribution of tumor cells from radio-resistant (late S phase) into radio-sensitive (G2-M) portions of the cell cycle, (c) reoxygenation of the hypoxic (and hence radio-resistant) portions of tumors, and (d) migration of normal cells into irradiated areas to repopulate these normal tissues with healthy cells.
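The repair component of this rationale can be made concrete with a minimal numerical sketch. The short Python fragment below uses the linear-quadratic model with purely illustrative, hypothetical α and β values (tumor α/β ≈ 10 Gy; late-responding normal tissue α/β ≈ 3 Gy) to compare 60 Gy delivered as a single exposure with 60 Gy delivered in 30 fractions, assuming complete sublethal-damage repair between fractions:

```python
import math

def surviving_fraction(total_dose, n_fractions, alpha, beta):
    """Linear-quadratic surviving fraction for a fractionated course,
    assuming complete sublethal-damage repair between fractions."""
    d = total_dose / n_fractions                      # dose per fraction (Gy)
    per_fraction = math.exp(-(alpha * d + beta * d * d))
    return per_fraction ** n_fractions

# Illustrative, hypothetical radiosensitivity parameters (not measured values).
tumor = dict(alpha=0.30, beta=0.03)    # alpha/beta = 10 Gy (acutely responding)
normal = dict(alpha=0.15, beta=0.05)   # alpha/beta = 3 Gy (late responding)

for n in (1, 30):
    print(f"{n:>2} fraction(s) of {60 / n:.1f} Gy: "
          f"tumor SF = {surviving_fraction(60, n, **tumor):.1e}, "
          f"normal tissue SF = {surviving_fraction(60, n, **normal):.1e}")
```

With these toy parameters, the single exposure kills the late-responding normal tissue even more efficiently than the tumor, whereas the fractionated course reverses the relationship, sparing normal tissue by several orders of magnitude relative to the tumor.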
The risks of normal tissue complications after fractionated XRT have been examined often and shown to depend on several factors, including fraction size, tissue type, total radiation dose, and the portion of an organ that is irradiated (21). However, it is often challenging to deliver tumoricidal doses while respecting these normal tissue constraints. In attempts to overcome this problem, several altered fractionation schemes have been developed. Hyperfractionated treatment schedules generally use small fraction doses, increased numbers of fractions, and fractions delivered two or more times per day. Fractions are separated by at least 6 hours, based on the biological observation that most sublethal damage repair occurs within 6 hours. The hypothesis is that these schedules will maximally exploit the principles of fractionation, thus allowing for higher total doses (and hence higher control probabilities) while producing equivalent or reduced normal tissue complications (22). There have been some successes with this approach, including the improved tumor control rates seen in the European Organization for Research and Treatment of Cancer trial comparing standard fractionation with a hyperfractionated regimen for head and neck cancers (23). The Radiation Therapy Oncology Group (RTOG) performed a similar trial, except that it also tested accelerated fractionation regimens. One hypothesis underlying accelerated treatment is that reductions in overall treatment time will reduce the ability of tumor cells to proliferate (including accelerated repopulation) during the course of treatment (22). The RTOG showed that both altered fractionation strategies can successfully improve tumor control rates, provided that planned treatment breaks are not used (24). In both of these trials, no differences were noted in late treatment-related complications between standard and altered fractionation arms; however, worse acute mucosal reactions were observed in the altered fractionation arms of both trials.
In recent decades, this long-standing tradition of fractionation has been challenged in a few settings. Much of this challenge stems from newer approaches and devices capable of delivering highly targeted dose to tumors, while better avoiding the adjacent normal organs. Stereotactic radiosurgery, for example, is a single-day procedure for treating small intracranial lesions. The technique often makes use of a rigid frame temporarily attached to the patient's skull, providing a fixed coordinate system. A number of small beams are used to deliver a very highly conformal treatment plan with millimeter accuracy. These treatments are generally delivered as a single large fraction of 15 to 20 Gy, and this approach has been shown to be a safe and effective alternative to surgical resection for some lesions (25). Even more recently, improvements in imaging, patient immobilization, and delivery systems have allowed for similar approaches in extracranial sites. This has been termed stereotactic body radiotherapy (SBRT). Timmerman and colleagues (26) have used SBRT (3 fractions of 20 Gy) successfully to control early-stage lung cancers in patients who would not tolerate surgical resection. Similar approaches are gaining acceptance in treatment of other extracranial sites as well (27, 28).
These time-dose principles have also taken an interesting twist in recent decades with regard to brachytherapy delivery. Traditionally, intracavitary and interstitial brachytherapy treatments have followed the Regaud principle of slow continuous radiation delivery, termed low-dose rate (LDR). Such treatments deliver 0.4 to 2 Gy/hour over several days, thus requiring extended in-patient stays. In recent years, robotic afterloading machines have been developed to remotely transfer a radioactive source from a shielded vault into the intracavitary device, and then back into the vault. This automated design allows for the use of highly active sources that deliver repeated outpatient treatments using rapid dose delivery, termed high-dose rate (HDR). Based on the clinical and radiobiological insights gained over the past century, one would anticipate HDR to be inferior to LDR. This expectation stems from the notion that slow prolonged continuous radiation approximates a fractionated course of radiotherapy, and therefore, LDR would be expected to have a better therapeutic index. To the contrary, several randomized trials have compared HDR to LDR and have shown very similar outcomes for the two approaches. This has been the case for patients receiving interstitial brachytherapy for tongue cancers (29) or intracavitary brachytherapy for uterine cervix tumors (30–32). Perhaps with longer follow-up the LDR approach may prove to have fewer late normal tissue complications; however, the data available to date generally do not show this.
Technological advances in X-ray therapy. The depth to which X-rays can penetrate into biological tissue is related to the photon energies, and therefore, early radiotherapy was very limited by devices that could produce only low X-ray energies (∼100 keV). Stimulated by the early successes of X-ray therapy, many scientists turned their attention to producing higher energy X-rays that penetrate deeper into tissue. This was necessary to reduce dose deposition in skin, thus allowing for treatment of internal tumors without causing severe skin burns at points where beams enter the body.
By 1913 William Coolidge, an American physicist, was working with General Electric to develop hot-cathode X-ray tubes, which produced energies in the 200 keV range. Treatment with these tubes was initially termed deep roentgen therapy and later called orthovoltage XRT (33). A common technique for “hardening” beams was to filter out lower energies using thin sheets of metals. Another method for reducing skin doses was to use multiple beams that entered the body through different areas of skin and overlapped internally, or “crossfired” at tumor locations. The concept of rotating beams 360 degrees around tumors was also introduced as early as the 1920s, in an effort to maximize skin sparing (34). Despite these efforts, the low penetration of orthovoltage X-rays remained a considerable challenge.
To solve this problem, several groups worked simultaneously to develop what would later be called supervoltage X-ray beams. In 1926, Coolidge built a new “cascade” tube, in which multiple tubes were arranged in series to boost electron acceleration (35). Although initial designs had practical problems, General Electric later installed a 700 keV version at Memorial Hospital in New York. This unit was operated under the control of Gioacchino Failla and Edith Quimby (3). The two systematically compared the biological effects of supervoltage beams with those of orthovoltage beams and γ rays from radium (36). In 1929, Ernest Lawrence at the University of California at Berkeley began testing a somewhat different strategy, using high-frequency alternating potentials to accelerate particles. He and his graduate student, David Sloan, reported in 1930 on a linear accelerator that consisted of 30 successive electrodes of this design (37). Sloan later went on to develop safer modifications of this high-energy design. These were followed by the betatron, an electron accelerator devised by Donald Kerst in 1940 (38), and the synchrotron, developed by Veksler in the Soviet Union and by McMillan at Los Alamos Laboratory (39).
Clinical interest in XRT dimmed somewhat during World War II, but it rebounded quickly thereafter, particularly at Stanford University. Henry Kaplan was among the first to use 6 MV X-rays to treat patients, beginning in 1956 (15). By the early 1960s, compact linear accelerators were being installed on gantries capable of rotating 360 degrees around patients. Another major development at this time was the production of teletherapy units that used cobalt-60 as their γ-ray source, producing radiation in the low megavoltage range. The relatively low cost and availability of these units made them the favored source of supervoltage beams, and this popularity was a major stimulus for the subsequent growth of the field.
Much of the technological progress in recent decades has been driven by advances in computing and imaging. In particular, axial imaging methods and three-dimensional treatment planning systems have significantly improved our ability to deliver homogeneous radiation doses to targets in difficult anatomic locations and with unusual shapes. It is now fairly standard for radiation oncology centers to use computed tomography–based imaging for treatment planning. The use of other imaging modalities such as magnetic resonance imaging (MRI) and positron emission tomography (PET) has also become common for defining tumor volumes and normal organ anatomy. In prostate cancer, for example, MRI can better differentiate the gland from the surrounding muscle. In lung cancer, PET scans can discriminate between hypermetabolic tumor and adjacent collapsed lung tissue.
Advances in our ability to shape radiation beams have also led to major treatment planning advances. Beam shaping was initially accomplished by custom-designed metal blocks mounted in the head of the treatment machines. Over the past decade, this technique has been replaced by multileaf collimators (MLCs), which consist of small metallic leaves located in the head of the linear accelerator. Each leaf within an MLC is robotically controlled and moves independently of the others to create computer-controlled beam shapes. Treatment delivery advances such as this have been combined with computer algorithms and software packages that optimize the number, shapes, and intensities of beams. This practice is now generally called three-dimensional conformal radiation therapy. An ideal three-dimensional treatment plan is both conformal (that is, high doses “wrap” closely around the target volume) and homogeneous (little variability of dose within the target). These three-dimensional conformal techniques now allow far more effective coverage of tumors, while better protecting adjacent normal organs.
In very basic X-ray therapies, the intensity of each treatment beam is essentially uniform across the beam width and length. A simple early tool to modify intensity was a wedge-shaped compensating filter (or wedge) placed in the machine head. Recent advances have led to complex methods that split the field aperture into smaller segments with varying beam-on times. An intricate form of intensity modulation achieves a checkerboard pattern across the length and width of each beam, such that each small segment delivers a different intensity. The delivery of this type of pattern requires moving the MLCs during the beam-on time. This is called intensity-modulated radiation therapy (IMRT) and this technique offers the potential for improved dose distributions in many clinical situations (40, 41).
The underlying concept of IMRT was described as early as 1978 (42), but the hardware and software needed to implement it were not widely available until the late 1990s. The beam intensity patterns of IMRT are so complex that a different type of treatment planning algorithm, called inverse planning, was designed. With this method, the radiation oncologist prescribes the treatment dose for a target volume and defines the allowable doses that surrounding normal structures can tolerate. The computer then performs repeated iterations to optimize beam intensity profiles toward the desired dose distribution. To achieve these highly conformal distributions of high radiation doses in target volumes, IMRT treatment plans generally use a larger number of X-ray beams than three-dimensional conformal plans. Because of this, IMRT plans expose a larger volume of normal tissue to low levels of radiation. Some radiobiologists question whether these increases in low-dose exposure may elevate the risk of radiation-induced second malignancies after IMRT (43). Despite this theoretical risk, IMRT has been widely adopted for many anatomic treatment locations.
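The iterative logic of inverse planning can be sketched with a deliberately simplified toy example. The Python fragment below is not any clinical planning algorithm; the dose-influence matrix, structure labels, penalty weights, and iteration counts are all invented for illustration. It merely shows the core idea: beamlet intensities are repeatedly adjusted (and kept non-negative) so that the computed dose approaches the prescription in the target while being penalized elsewhere.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy dose-influence matrix: dose = A @ beamlet_weights.
# In a real planning system, A comes from a dose-calculation engine;
# here it is synthetic and purely illustrative.
n_voxels, n_beamlets = 200, 40
A = 0.2 * rng.random((n_voxels, n_beamlets))
A[:60] += rng.random((60, n_beamlets))         # "target" voxels receive more dose per beamlet

is_target = np.zeros(n_voxels, dtype=bool)
is_target[:60] = True
prescription = np.where(is_target, 60.0, 0.0)  # aim for 60 Gy in target, as little as possible elsewhere
penalty = np.where(is_target, 1.0, 0.3)        # hypothetical importance weights

x = np.zeros(n_beamlets)                       # beamlet intensities (must remain >= 0)
step = 5e-4
for _ in range(5000):                          # "repeated iterations," as described above
    dose = A @ x
    gradient = A.T @ (penalty * (dose - prescription))
    x = np.maximum(x - step * gradient, 0.0)   # gradient step plus non-negativity projection

dose = A @ x
print(f"mean target dose:        {dose[is_target].mean():5.1f} Gy")
print(f"mean normal-tissue dose: {dose[~is_target].mean():5.1f} Gy")
```

A clinical system optimizes a far more sophisticated objective over tens of thousands of beamlets and voxels, but the prescribe-constrain-iterate loop is the same.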
Although photon and electron beams are the primary modalities in radiation oncology, particle beams have also been developed for cancer treatment. Due to the unique energy absorption profile of protons in tissues, proton beam therapy is among the most highly conformal radiation modalities (44, 45). Much of its development resulted from collaborative work at the Harvard Cyclotron Laboratory and Massachusetts General Hospital between 1961 and 2002. Until recently, only a few facilities were capable of producing therapeutic proton beams due in part to the very large financial investment required. Smaller and more affordable cyclotrons are now in development, and if successful, this may result in the more widespread availability of proton-based radiotherapy in the coming decades.
Cellular and tumor responses to radiation. In 1905, while studying the histology of irradiated testes, Regaud recognized that undifferentiated cells (spermatogonia) were more radiosensitive than their more differentiated spermatozoid counterparts (46). One year later, Bergonié and Tribondeau expressed this same concept somewhat differently in their law (47), which states that elevated radiosensitivity is seen in cells with higher mitotic activity and lower levels of established function (differentiation). Two decades later, in 1926, Muller performed a series of experiments showing that chromosomes are the major target of X-ray–mediated cell killing.
Puck was responsible for major developments that promoted the study of radiation effects on individual cells. Throughout the 1950s, he devised efficient methods for growing colonies from single human tumor cells, enabling the construction of clonogenic survival curves. Normal tissues were subsequently studied using related clonogenic end points. Till and McCulloch developed a system in which bone marrow cells were injected into lethally irradiated recipient animals and the resulting colonies formed in the animals' spleens. This allowed researchers to study the reproductive integrity of the injected cells after various treatments (48). Another assay, developed by Withers, evaluated the viability of clonogenic intestinal mucosal cells after irradiation (49).
Many groups have used cell culture techniques to study the basis of radiation resistance at a descriptive level. For example, Weichselbaum and colleagues (50) irradiated cells derived from various tumors thought to be sensitive or resistant to radiation, but cell survival curve characteristics did not clearly correlate with clinical tumor behavior. Further work showed potentially lethal damage repair (PLDR) to be among the most important factors relating cell culture studies of human tumors to their clinical responses. Potentially lethal damage can be thought of as cellular damage that leads to cell death under some circumstances but not if conditions are modified to allow for repair. For example, osteogenic sarcomas are generally thought to be resistant to radiation, and cells derived from these tumors have a great capacity for PLDR after radiation (51).
Oxygen has also been shown to be a critically important determinant of cellular responsiveness to radiation. Early observations with model organisms such as Ascaris eggs and vegetable seeds suggested that low oxygen conditions promote resistance to irradiation. Mottram later showed in the 1920s that the presence of oxygen increases cell kill by ∼2.5- to 3.5-fold (52). The generally accepted mechanism is that oxygen “fixes” free radical–induced DNA damage into a more permanent state. More recent studies have shown that tumor hypoxia can also select for more aggressive tumor cells, including those with p53 mutations (53). Furthermore, hypoxia generates a broad range of signaling effects, including activation of the hypoxia-inducible transcription factor family of proteins that regulate a variety of downstream genes (54). Many investigators have attempted to exploit the oxygen effect with a variety of clinical modalities (55–57); this is a subject of continuing research.
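The magnitude of this effect is conventionally expressed as the oxygen enhancement ratio (OER), the ratio of doses required under hypoxic versus well-oxygenated conditions to produce the same biological effect:

\[ \mathrm{OER} = \left.\frac{D_{\text{hypoxic}}}{D_{\text{oxygenated}}}\right|_{\text{equal effect}} \approx 2.5\text{–}3.5 \text{ for X-rays.} \]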
Modern developments in cellular and molecular biology have since confirmed that DNA damage mediates many of the lethal cellular effects of radiotherapy. Although both single and double strand breaks (SSBs and DSBs) are observed, DSBs are thought to represent the principal lethal event (58). A 1-Gy X-ray dose results in ∼10^5 ionization events per cell, producing ∼1,000 to 2,000 SSBs and ∼40 DSBs (22, 59). Free radical scavengers, such as sulfhydryl-containing compounds, can counteract this oxidative damage. Such agents can render cells more radioresistant (60), reduce radiation-associated chromosome abnormalities (61), and reduce some radiation toxicities clinically (62).
Cells undergo a critical period after irradiation, which determines their fate: death, repaired damage, or continued growth and division without complete repair. On the molecular level, radiation damage initiates very complex signaling cascades that result in a variety of responses including cell cycle arrest, induction of stress response genes, DNA repair, and apoptosis. The signaling/surveillance proteins ATM and ATR have central roles in these responses (Fig. 3). They act via phosphorylation of several downstream targets including the chromatin protein histone H2AX (63). Over the last decade, we have witnessed a massive growth in knowledge regarding the molecular bases of these response/repair pathways, as detailed in our prior publications (64).
A simplified overview of some of the cellular pathways involved in response to ionizing radiation (modified from ref. 64).
Although DNA damage is generally regarded as the primary event leading to radiation-induced cell lethality, numerous non-DNA mechanisms have recently been implicated in cellular responses to radiation. Two examples are radiation-induced ceramide production from plasma membrane–derived sphingomyelin and the activation of intracellular signaling pathways (65). Radiation-induced signaling through the epidermal growth factor receptor (EGFR), for example, can have a prosurvival effect. Pharmacologic inhibition of this effect can potentially be achieved with anti-EGFR antibodies (66) or chemical inhibitors of EGFR tyrosine kinase activity (for example, gefitinib and erlotinib).
Another important recent discovery is the bystander effect, an intercellular signaling phenomenon described by several authors whereby irradiated cells exert effects on neighboring untreated cells (67, 68). These responses in neighboring cells encompass broad cellular changes, including gene activation, induction of genomic instability, differentiation, and changes in apoptotic potential. The effect occurs even when bystander cells are physically separated from the irradiated cells; it therefore seems to be mediated, at least in part, by diffusible substances (69). The bystander effect emphasizes the need to consider the entire tumor microenvironment in modern studies of radiation effects, rather than concentrating on individual cells. This notion has become particularly important given that host stromal components within tumors are now known also to contribute to radiation responsiveness. Experiments in mice have implicated host-derived blood vessels as a target of radiotherapy, in terms of both tumor control (70) and complications in normal tissues (71).
Many of these advances in our understanding of radiation effects are now leading to newer thinking in the design of targeted therapeutics (72, 73). Inhibitors of angiogenesis are also being considered as potential radiation modulators (74, 75), after the initial discoveries of angiostatin and endostatin by Folkman's laboratory (76, 77). Although advances such as these are encouraging, truly tumor-selective radiosensitizing compounds remain somewhat elusive (78).
Modern conceptual innovations in radiotherapy. The earliest uses of radiotherapy in cancer management came as alternatives to surgical resection or as treatment for unresectable lesions. With advances in diagnosis and improvements in treatment modalities, however, this paradigm has gradually shifted over the past century. Radiotherapy continues to have a role as monotherapy for some cancer presentations but is now more commonly used as a component of multimodality management (Fig. 4).
Part of this conceptual evolution is related to the perceptions of how cancers arise and spread. From the late 1890s onward, most clinicians had accepted the notion that cancer arises in a local site and spreads via a centrifugal pattern. This theory was most clearly communicated by Halsted (79), and it predicts that cancers will spread primarily in a predictable, stepwise manner, from the primary tumor to the regional lymphatics and then systemically to distant sites. For many decades, this philosophy shaped oncologic treatments, which heavily concentrated on both the tumor and contiguous regional lymphatics. This thinking was eventually challenged by the suggestion that tumors arise as one of two types—either permanently localized or capable of metastasizing early. This model was championed by Fisher (80), and it suggested that metastasis is an early event that occurs even before detection of the primary mass. This philosophy stressed a perceived need for early systemic therapies, and it de-emphasized the importance of local control.
A third model was introduced in 1994 as the continuum or spectrum theory (81). This model proposes that patterns of cancer spread are more complex than previously thought and that tumors exist along a continuum of disease proclivities. It emphasizes that cancer cells develop metastatic potential as tumors grow during their clinical evolution and that the process of malignant progression facilitates the capacity of cancer cells to metastasize. This philosophy predicts that local/regional control and systemic control are both important in the design of curative therapies. The view is supported by clinical studies: for example, a recent clinical trial randomized melanoma patients to surgical resection with margins of 1 cm versus 3 cm. Not surprisingly, the wider surgical margins resulted in fewer local/regional recurrences; however, the 3-cm group also experienced a trend toward fewer distant metastases and fewer melanoma-related deaths (82). Similar results were observed in early-stage non–small cell lung cancer, in which a trend toward lower survival rates was observed after limited lung resections compared with complete lobectomies (83).
In terms of radiotherapy, the spectrum theory is also supported by clinical outcome data. This has been particularly true in trials of high-risk breast cancer, in which women underwent mastectomy with or without radiotherapy to the chest wall and regional lymphatics. Even when chemotherapy was part of the treatment, regional radiotherapy resulted in fewer distant metastases and improved survival rates (84, 85).
These data support the notion that control of local/regional disease remains important despite improvements in systemic therapies. We believe these data show that local/regional therapies are, in effect, “stopping metastases at their source” (86).
Another major conceptual advance for radiotherapy has been its use in treatments aimed at organ preservation (87, 88). Many effective chemoradiotherapy regimens have allowed patients to be spared morbid procedures such as laryngectomy for head and neck tumors (89) and abdominoperineal resection for anal carcinomas (90). Although it is possible that some of these approaches may expose patients to a slightly higher risk of tumor recurrence, most believe that these small potential risks are offset by the large quality-of-life benefits.
When combined with radiotherapy, many common chemotherapeutic drugs have been shown to produce additive tumor killing and, in some situations, actual synergy. It should be noted that chemotherapeutic drugs also usually increase treatment-related toxicity, so the net effect on therapeutic index can be variable. Platinum-based drugs are probably still the most commonly used agents in this setting. Radiotherapy elevates the cellular uptake of platinum drugs and increases the number of therapeutic platinum intermediates (91, 92). Several randomized clinical trials in advanced cervical cancer have shown a benefit to adding platinum-based chemotherapy to radiotherapy (93–95). In one such trial, the addition of cisplatin significantly improved tumor control and survival rates, although this came at the expense of elevated toxic effects (93). A similar trial in head and neck cancers showed that the combination of hyperfractionated radiotherapy and concurrent chemotherapy (cisplatin and fluorouracil) resulted in improved tumor control rates with complication rates comparable to those of hyperfractionated radiotherapy alone (96).
Furthermore, several trials have shown the advantage of concurrent over sequential chemoradiotherapy, suggesting some degree of synergy in at least some situations (97).
Clearly, the past century has brought tremendous advances in surgical techniques, systemic therapies, and diagnostic/screening abilities. Indirectly, these improvements have had major effects on how radiotherapy is practiced. Progress in pediatric cancer treatment illustrates this nicely. Over the past 75 years, radiotherapy has been incorporated into the initial treatment of many childhood brain tumors that were once universally fatal. The problem of long-term central nervous system side effects remains a major obstacle for survivors (98), however, and many newer clinical trials are designed with this issue in mind. In medulloblastoma, for example, an ongoing Children's Oncology Group trial is taking maximal advantage of modern advances to reduce radiation toxicities. Advances in radiological technologies and the integration of concomitant chemotherapies have allowed for reductions in irradiated brain volumes (99), reductions in radiation dose (100), and better avoidance of critical normal organs (101, 102). These changes are likely to have a major effect on quality of life for these children.
One final conceptual advance worthy of mention is the management of metastases, which account for the great majority of cancer deaths. For most of the last century, treatment for metastatic cancers has generally consisted of systemic drugs, with local therapies reserved for palliative needs. We proposed in 1995 that distinct oligometastatic states may exist wherein a limited number of metastases at certain anatomic sites may be curable with local therapies (103). This idea is supported by surgical series that have produced cures after resection of limited metastases in the lung (104), liver (105), synchronous lung and liver (106), or adrenal gland (107). Our group has combined this concept with newer image-guided, highly targeted SBRT techniques to treat metastases occurring in multiple (five or fewer) sites or organs (Fig. 5). The ongoing clinical trial at the University of Chicago uses a dose-escalation design to test the feasibility and efficacy of this approach (108).
Images from a 58-y-old man who had previously completed treatment of limited-stage small cell lung cancer. He subsequently re-presented with a solitary metastasis of the left adrenal gland. A, SBRT was used to deliver 30 Gy, administered in three 10-Gy fractions over a 2-wk period. Orange shaded area, the target volume; colored numbers, dose per fraction delivered to that particular volume. B, pretreatment FDG-PET images show the solitary metastasis (indicated with arrow). C, post–SBRT FDG-PET images show a complete response in terms of the tumor's metabolic activity. CNS, central nervous system.
Summary and Conclusions
The field of radiotherapy has undergone an amazing series of developments since its inception over a century ago. This is largely the result of dedicated and inventive pioneers, whose conceptual and technological advances created an entire specialty out of one early observation of “a new type of ray.” Perhaps the most important of these developments have been the paradigm of fractionated dose delivery, technologic advances in X-ray production and delivery, improvements in imaging and computer-based treatment planning, and evolving models that predict how cancers behave and how they should be approached therapeutically.
The biological discoveries over the past century have likewise been revolutionary. There is now a tremendous body of knowledge about cancer biology and how radiation affects human tissue on the cellular level. In our opinion, however, the clinical practice of radiotherapy has thus far been influenced far less by these biological insights than by clinical advances and physics-based technologies. For example, methods for directly measuring DNA damage have yet to be applied clinically to assess radiation effects in tumor tissue or normal organs. And only recently have the extraordinary mechanistic discoveries regarding the molecular basis of DNA damage response and repair begun to affect clinical practice in meaningful ways.
Looking forward, we believe that bringing laboratory discoveries and techniques to the clinic will be the key challenge facing our specialty. We need to treat only those patients who can benefit from treatment and only to the extent that is necessary. Clinically applied molecular biology will make this possible.
Disclosure of Potential Conflicts of Interest
No potential conflicts of interest were disclosed.