Omid Farokhzad, MD, director of the Center for Nanomedicine at Brigham and Women's Hospital in Boston, MA, discusses the challenges of moving nanotherapies from the realm of academic innovation to the clinic, stressing the importance of thoroughly understanding the nano–bio interface and its complexities for successful translation.
The 2016 Nobel Prize in Chemistry went to three scientists who unearthed basic principles behind designing and synthesizing minuscule molecular machines 1,000 times smaller than a human hair's width. Their efforts helped set the stage for nanomedicine, now a burgeoning field.
“Nanomedicine is more than simple drug delivery vehicles. We're now developing theranostic systems capable of imaging and therapeutic response monitoring,” says Omid Farokhzad, MD, director of the Center for Nanomedicine at Brigham and Women's Hospital in Boston, MA. “My kids may see nanorobots implanted in the human body (to search for tumor cells, for instance) in their lifetime.” He spoke with Cancer Discovery's Alissa Poh about the challenges of moving tiny therapeutics from the realm of academic innovation to the clinic.
Why has clinical success for nanoparticles been elusive?
We don't know enough about the nano–bio interface, meaning how engineered nanomaterials interact, dynamically, with their surroundings. When a nanoparticle is exposed to a biologic fluid, a protein layer called the corona forms on its surface; it's as unique to that nanoparticle as my fingerprint is to me. The protein corona dictates pretty much everything about a nanoparticle, including its circulating half-life, biodistribution, perhaps even its fate within the tumor microenvironment. Yet we've grasped very little about the mechanisms involved: most protein corona studies are in vitro, not in vivo, and few have attained any sophisticated level of understanding.
How does a tumor's “leakiness,” or enhanced permeability and retention (EPR), affect delivery efficacy?
It is thought to aid nanoparticle accumulation, but EPR varies considerably across tumor types. Delivering a drug like crizotinib [Xalkori; Pfizer] in a nanoparticle, for example, would require not only ALK positivity as a biomarker, but also a way to identify patients with high EPR who would be good nanotherapy candidates. Imaging is one option; MRI can predictively measure EPR in mice. I think we'll eventually discover soluble plasma and histological biomarkers of EPR, which will further improve patient selection. Basically, biomarker-driven development is key to producing truly differentiated nanomedicine products that improve patient survival.
Is “convergence” research facilitating nanomedicine?
The notion that simply putting a cancer drug into a nanoparticle will improve efficacy has been clearly refuted in clinical trials, so we're starting to think beyond formulation and delivery. There are so many other nuances (tumor heterogeneity and the role of tumor-associated macrophages and stromal tissue, to name a few) that need to be considered.
To that end, many institutions now have centers for nanomedicine, with faculty from diverse backgrounds. My own lab includes chemists, physicists, biologists, engineers, and clinicians. These were pretty siloed specialties 10 to 15 years ago, but the communication gap is narrowing. We may not be fluent, but we understand each other's language.
Will it be possible to predictively design nanotherapies?
Even if each nanoparticle has unique characteristics, I think we'll see patterns emerging as our knowledge of the nano–bio interface grows. For example, using knockout models to study individual proteins that make up a corona, we'll learn their effects in that particular nanoparticle's context. We can create databases to house new information as it's acquired, and incorporate machine learning algorithms to predict a given nanotherapy's behavior. These algorithms will become more robust as predictions are cross-checked with reality. We need to broaden our input parameters, and I think we're just starting to fill in those blanks.
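The database-plus-prediction idea described above can be sketched in miniature. This is purely illustrative: the protein names, feature vectors, and half-life values are invented, and a simple nearest-neighbor lookup stands in for whatever machine-learning model a real effort would train.

```python
import math

# Hypothetical "corona database": each entry maps a nanoparticle to the
# relative abundance of three (invented) corona proteins and a measured
# circulating half-life in hours. All numbers are made up for illustration.
corona_db = {
    "particle_A": ([0.60, 0.25, 0.15], 2.0),
    "particle_B": ([0.10, 0.70, 0.20], 9.5),
    "particle_C": ([0.30, 0.30, 0.40], 5.0),
}

def predict_half_life(features):
    """Predict half-life as that of the nearest stored corona profile
    (1-nearest-neighbor by Euclidean distance)."""
    def dist(a, b):
        return math.sqrt(sum((x - y) ** 2 for x, y in zip(a, b)))
    nearest = min(corona_db.values(), key=lambda rec: dist(rec[0], features))
    return nearest[1]

# A new particle whose corona profile most resembles particle_A:
print(predict_half_life([0.55, 0.30, 0.15]))  # -> 2.0
```

As the interview notes, such predictions become more robust only as they are cross-checked against in vivo reality and the input feature set is broadened.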
Is scalability another hurdle for clinical translation?
It's where the rubber meets the road and where many investigational nanotherapies grind to a halt. Say there's a five-step synthesis with 90% reproducibility at each step; compounded across steps, your finished product is quite variable. How do you scale that and put together an acceptable Chemistry, Manufacturing, and Controls (CMC) package, which the FDA requires? It's even more complex when moving from chemical to biological moieties, such as platelet membrane–coated nanoparticles. You have to account for how the blood is handled and how this might affect nanoparticle integration.
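The compounding effect behind that five-step example can be made concrete with a quick calculation (the numbers are the ones given in the answer, not real process data):

```python
# If each of five synthesis steps is only 90% reproducible, the fraction
# of product that comes through all five steps consistently is 0.9^5,
# i.e. roughly 59% -- so nearly half the output is variable.
steps = 5
per_step_reproducibility = 0.90
overall = per_step_reproducibility ** steps
print(f"{overall:.2%}")  # -> 59.05%
```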
That said, from a manufacturing perspective, CAR T-cell therapy is orders of magnitude more complex than many nanotherapies I know of, but because of its promising efficacy, people are figuring out how to address its scalability issues. They'll do the same once nanotherapies begin to deliver real value.
Have any cancer nanotherapies shown similar efficacy?
Celator's Vyxeos [liposomal cytarabine plus daunorubicin] is the only one so far. In a phase III study, it improved overall survival of patients with acute myeloid leukemia, compared with unencapsulated drugs. I'm enthusiastic about these results; it's what our field has long needed. True success with any therapy in cancer means moving the needle on survival.
Some say nanomedicine is just hype, but I'd counter that we're still learning. The biologics field started some 30 years ago, but its impact is much more recent, because our knowledge base had to deepen before the field was truly enabled. Similarly, for nanomedicine, this sharp slope of knowledge accumulation we're on will translate into more effective nanotherapies and better patient outcomes over time.