I appreciate Drs. Yassin, Weil, and Lockhart (the authors) giving me the opportunity to respond to their article. I understand the source of the National Cancer Institute's concern. The NCI has many multicenter trials in which local sites submit tumor samples, which are then reviewed centrally by expert pathologists. Some of these central reviews reveal that the original diagnosis was incorrect and that the real disease requires radically different therapy. In such cases, returning the result to the patient and her treating physician is emotionally and, as I will argue below, ethically compelling.
The “Smart Filter” these authors propose to decide what research results can and should be returned may work for the specific scenario they confront, but the problems they and many others face are (i) taking into account all the complexity that the issue of returning individual research results raises and (ii) drawing defensible lines. The authors appropriately signal the widely diverse ways that biospecimens (and the clinical information that is absolutely necessary for research) enter the research arena and the tremendous differences in the relationships between patients, biobanks, and investigators. But in the end, the only factor that matters in their Smart Filter for deciding when to return results is whether the patient agreed to receive research results. This is disquieting because, as they themselves acknowledge, obtaining meaningful informed consent for the return of research results is difficult, particularly for results generated long after samples and data were collected by distant investigators. Oddly, one of their major responses to the challenges of obtaining truly informed consent seems to be that investigators should not always honor prior decisions not to receive “highly significant medical information” because people “cannot make a truly informed decision at enrollment to decline research results” that were obtained later. The idea that people should be given subsequently discovered research results even if they have opted out shows that disclosure sometimes is the goal, not honoring individual patient choices. In addition, the authors do not acknowledge that much research with biospecimens and clinical information currently is, and for decades has been, done without any formal consent at all.
The other 3 criteria in their Smart Filter—analytic validity, clinical significance, and clinical actionability—are the equivalent of motherhood, apple pie, and the American flag in this debate. To their credit, the authors are quick to point out how difficult the concepts of clinical significance and actionability are, but there is more to the story.
Here is where the authors' (and just about everyone else's) failure to acknowledge the difference between the clinic and research really emerges. Suppose a patient had a family history of an early onset, fatal disease and went to her physician to find out whether she was at risk. A salient point here is that the patient initiated the request—she wanted to know. Nor is the request the end of the matter. The ethical physician should not simply answer the question but should talk with her about why she wants to know and about the potential implications, both positive and negative, of learning her risk, to help the patient make the choice that is best for her. Regardless of whether the patient ultimately decides to get the information she seeks, the physician has a commitment to the patient to stand by her through what follows.
By contrast, the vast majority of research results are not sought specifically by the patient/research participant. Here, the NCI's experience with the issue of mistaken diagnoses leads the authors to overgeneralize their analysis. In the NCI's case, its biobanks can be seen as part of the continuum of the patients' care. Mistaken diagnoses of the original cancer detected early on in the process of intake quality control (QC) are more analogous to expert clinical review than to the discovery science that follows QC. Indeed, the clinician–scientists who perform the QC are often the same nationally known experts who review clinical specimens in response to specific requests. Returning information about mistaken diagnoses detected during initial QC, particularly when the mistakes are potentially life threatening, is ethically defensible because it is so much closer to clinical care than to research. But disease-oriented research biobanks represent a small part of biospecimen research.
The idea that the research enterprise more generally has a physician-like obligation of care to people from whom specimens were obtained is a dangerous one. It bolsters the therapeutic misconception that research participation should be expected to provide short-term individual benefit to research participants. Disclosing a wide array of results, which will likely become the norm given the difficulty of defining limiting criteria, will lead to a dramatic expansion of legal liability for the research enterprise. Questions about who has the responsibility to disclose, how to do so, and who should be liable for what are quite complex (1). Disclosure has few long-term consequences for investigators because they will rarely be the ones who walk the road with the patient; that responsibility will fall to others.
Proposals regarding return of research results are rarely subjected to the discipline of evidence-based medicine and comparative effectiveness so prominent in today's health care (2). More disclosure typically leads to many costs—finding and consulting with experts, undergoing diagnostic and therapeutic interventions—every single one of which entails risks. In aggregate, these costs can exceed the benefits of providing information, as the recent debates about PSA screening show (3, 4). It is wrong to ignore these downstream effects at a time when the costs of health care are spiraling out of control.
Nor will disclosing even the most pressing results lead in every case to improved health outcomes. Clinicians know that people do not always want risk or diagnostic information—one need look only at the people who do not get recommended mammograms and colonoscopies even when they have ready access—and do not always act in health-promoting ways when they do get information. Thinking back to the patient enrolled in a collaborative oncology group trial, it is one thing to tell her that her initial diagnosis was incorrect (doubtless that news will be upsetting, but she already knows she has cancer and is plugged into a system of care) and quite another to tell her that she is at risk of developing something else, which can seem irrelevant at best and like piling on at worst.
A note about the Clinical Laboratory Improvement Amendments (CLIA). I confess to some puzzlement at efforts by these authors and others to argue that CLIA does not apply. CLIA specifically exempts “[r]esearch laboratories that test human specimens but do not report patient specific results for the diagnosis, prevention or treatment of any disease or impairment of, or the assessment of the health of individual patients” (5). Yet the reason investigators want to return results is precisely because they believe the recipients will act on the information. It is disingenuous to expect that recipients will always repeat the result in a CLIA-approved laboratory. The fact that recipients are expected to take the results seriously and act on them means that researchers who return research results from non-CLIA-approved laboratories are not entitled to the exemption. The Centers for Medicare and Medicaid Services (CMS), the legal entity that administers CLIA, has twice said in large public meetings that CLIA does apply to anyone who returns research results to alter care. People who do not like what CMS says should challenge or change the law, not fool themselves by arguing that it does not apply.
In conclusion, returning individual research results is a Pandora's box from which many unwanted consequences can flow. Sadly, if that box is opened, the authors' proposals are not enough to protect patients and the clinical and research enterprises.
See the related Point article, p. 256
Disclosure of Potential Conflicts of Interest
No potential conflicts of interest were disclosed.
The author thanks Amy McGuire and Jay Clayton for their helpful comments on earlier drafts of this essay.
This work was supported in part by 5UL1 RR024975-03 and 1U01 HG006378-01.