The communication of research findings is one of our most important functions. Not communicating research findings can be harmful in many ways. Publication bias is a well-known phenomenon that affects cancer epidemiology, biomarker research, and cancer prevention. As these areas grow, along with their technologies, the risk of publication bias grows. To avoid the pitfall of publication bias and to encourage research communication, Cancer Epidemiology, Biomarkers and Prevention will begin accepting papers under the category of “Null Results in Brief.” Other methods for reducing publication bias have been tried, such as the use of online journals (1). Some investigators have suggested a two-stage review process in which the first stage includes review of a manuscript without data or discussion, and the reviewers decide whether the manuscript is worthy (2). Other attempts include registries for all clinical trials (3–5). Although there are some arguments against publishing null studies (5), these seem trivial compared with the arguments for publication, as detailed below.
How do we know that publication bias exists? The effect of publication bias in clinical studies has been well described. Publication bias can be assessed through systematic reviews and by examining funnel plots (6–10) or other complementary methods (11–13). Models have been proposed to estimate the number of unpublished studies (14). As many as 50% of studies may not be published in a particular area of research (15). Importantly, statistically nonsignificant (null) studies are more than twice as likely not to be published or communicated (10, 15–19), whereas other factors such as clinical versus observational trials, sample size, source of funding, or multicenter versus single-center design do not consistently affect publication bias (19). In fact, smaller studies may be more commonly published and may report greater effects on treatment outcomes and survival. Statistically significant studies are also published more quickly (16). Some studies suggest that laboratory-based experimental studies also show a significant degree of publication bias (17).
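For readers less familiar with funnel plots, the following is a minimal sketch, using simulated data only, of how such a plot is typically constructed and how asymmetry can suggest that small null studies are missing. The effect sizes, standard errors, and the crude “publication” rule below are illustrative assumptions, not data from any of the cited reviews.

```python
# Minimal sketch of a funnel-plot check for publication bias.
# All study values below are simulated for illustration only.
import numpy as np
import matplotlib.pyplot as plt

rng = np.random.default_rng(42)

# Simulate 60 hypothetical studies of varying size around a true log odds ratio of 0 (a null effect).
n_studies = 60
true_log_or = 0.0
se = rng.uniform(0.05, 0.6, n_studies)       # standard errors; small SE = large study
log_or = rng.normal(true_log_or, se)         # observed log odds ratios

# Crude publication filter: small, statistically nonsignificant studies
# are less likely to be "published" (the mechanism discussed above).
z = np.abs(log_or / se)
published = (z > 1.96) | (se < 0.2) | (rng.random(n_studies) < 0.4)

plt.scatter(log_or[published], se[published], label="published")
plt.scatter(log_or[~published], se[~published], marker="x", label="unpublished")
plt.axvline(true_log_or, linestyle="--")
plt.gca().invert_yaxis()                     # largest studies at the top, by convention
plt.xlabel("log odds ratio")
plt.ylabel("standard error")
plt.legend()
plt.title("Funnel plot (simulated): missing small null studies create asymmetry")
plt.show()
```

In a complete literature, the points form a roughly symmetric funnel around the true effect; when small null studies are selectively unpublished, the high-standard-error portion of the plot becomes visibly one-sided.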
Publication bias can have detrimental effects on scientific progress, with implications for human health. Decisions about patient care, protection from hazards, and lifestyle recommendations are based on consideration of the whole literature, not just a single study. Fundamental components for assessing causation, as put forth by Sir Austin Bradford Hill (20), include consistency among studies in different populations and coherence across different types of studies. If only one-half of our scientific results are communicated, then scientific progress suffers. For example, a clinical treatment may be considered effective when reviewing a literature subject to publication bias, but this conclusion can prove erroneous when all of the evidence is considered (3, 21). There is similar evidence for publication bias in epidemiology and the overestimation of risks (22), as in the case of health effects from environmental tobacco smoke (7, 23). Although some authors include unpublished data in meta-analyses, this is a suboptimal alternative because those data have not been subject to peer review or public comment (24). Reference bias, in which reviews selectively cite mostly statistically significant studies, also occurs (25).
Publication bias typically arises because investigators do not submit their research for publication, rather than because journals reject it (18). Reasons include a lack of enthusiasm among overcommitted investigators and the consequent drive to publish only statistically significant studies, or a feeling that null papers are typically given low publication priority scores. One has to wonder whether the publication of null studies comes more commonly from junior investigators who must publish to become known than from busier senior investigators who are less intrigued by null findings. However, journals also contribute to publication bias when they refuse to publish null studies.
The possible inclusion of studies in a meta-analysis is not the only reason to avoid publication bias (5, 26). We owe it to our study subjects to publish the results of our studies because they provide us with their valuable time and body parts, often during times of stress, and trust that they are helping others by doing so. Failure to publish our studies violates that trust, and some consider it scientific misconduct (26). Separately, we owe our communication to the individuals who donate their money to charitable organizations and to the taxpayers who fund our studies. Publication bias can lead to the formulation and testing of hypotheses based on false impressions from the scientific literature, wasting research opportunities, time, and money. This violates an implied contract with our funders.
Does the publication of null studies hinder progress or muddy the field? In theory, no single report reduces the flow of information and progress. However, this depends on the quality of the study. If the publication contains preliminary data or is substantially underpowered (e.g., the smallest odds ratio detectable from the expected frequencies is too high to be believable), if the wrong population was studied (e.g., the levels of an exposure are not known or are below what the biomarker can detect), or if the biomarker was not validated (e.g., it measures the wrong thing or does not provide consistent results), then such studies will indeed obscure reasonable conclusions. However, if the study does not suffer from these or other significant flaws and was based on reasonable biological hypotheses, then the data need to be communicated.
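To make the underpowered-study criterion concrete, the following is a minimal sketch, under assumed numbers, of computing the smallest odds ratio detectable with 80% power in an unmatched case-control design. The control exposure prevalence and group sizes are hypothetical, and the normal-approximation two-proportion test is only one of several ways such a calculation could be done.

```python
# Hedged sketch of the "detectable odds ratio" check: given a planned sample size
# and an assumed control exposure prevalence, solve for the smallest odds ratio
# detectable with 80% power. All numbers are illustrative assumptions.
from math import asin, sin, sqrt
from statsmodels.stats.power import NormalIndPower

p0 = 0.10                        # assumed exposure prevalence among controls
n_per_group = 80                 # hypothetical number of cases (and of controls)
alpha, power = 0.05, 0.80

# Smallest detectable effect size (Cohen's h) for a two-proportion test at this size...
h = NormalIndPower().solve_power(nobs1=n_per_group, alpha=alpha,
                                 power=power, ratio=1.0,
                                 alternative="two-sided")
# ...converted back to a case exposure prevalence and the corresponding odds ratio.
p1 = sin(asin(sqrt(p0)) + h / 2) ** 2
detectable_or = (p1 / (1 - p1)) / (p0 / (1 - p0))
print(f"Smallest detectable odds ratio: {detectable_or:.1f}")
```

Under these assumed numbers the result is roughly 3.3; that is, the study could reliably detect only an implausibly large effect, which is exactly the situation the criterion above is meant to flag.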
To reduce the likelihood of publication bias, Cancer Epidemiology, Biomarkers and Prevention will begin to publish null results in a specified format. The intent is to make the articles brief enough to encourage researchers to communicate their findings; that is, such articles will be limited to one journal page. However, the format will be sufficiently rigid to ensure that readers can understand the strengths and limitations of the study and how its results compare with other studies. “Null Results in Brief” papers must be original research articles of scientific merit that communicate the results of epidemiological studies, or of biomarker studies that support epidemiological research, and that test an a priori hypothesis. The hypothesis must be stated and must be judged worthy of study by our editors and reviewers.
The “Materials and Methods” section should be well described so that readers can understand what the investigators did and could replicate it. If the study requires an extensive methods description, then it is not appropriate for a “Null Results in Brief” paper. For example, it may not be prudent to use this format for the first epidemiological report of a study, which must include inclusion and exclusion criteria, how controls were recruited, validation of recruitment methods, accrual rates, and so forth, or for the first description of biomarker development and validation. Rather, follow-up papers, or papers using biomarkers that have been validated elsewhere, are more appropriate, because those earlier papers can be cited.
The results should be brief but substantive and well described. There must be sufficient detail to allow the readers to formulate their own conclusions. In most cases, the results should be obvious, so that multiple analyses and models will not be of additional benefit. If there are new statistical methods or if multiple analyses are considered useful by the authors but cannot be described within the page limits, then this category of publication should not be used. The results should be considered worthy of publication; namely, they should advance the field of cancer research. Subset analyses without significant a priori hypotheses will not be considered.
Among the most important sections of the paper will be a statement of statistical power. For a paper to be considered for publication, there must be sufficient statistical power to test the a priori hypothesis. For example, the authors should state the power to detect an odds ratio of 2.0 with the current sample size. The manuscript should also contain a statement of limitations particular to the study, not of epidemiology in general. If there is a particular source of bias, then it should be stated. The conclusion section should be only a few sentences and should not discuss the results in the context of the literature. Consequently, the reference section will be brief, generally with fewer than five references.
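As an illustration of the kind of power statement intended, here is a minimal sketch, using assumed numbers, of estimating the power to detect an odds ratio of 2.0 in an unmatched case-control study via a two-proportion normal-approximation test; the control exposure prevalence and the sample sizes are hypothetical.

```python
# Hedged sketch of a power statement for detecting an odds ratio of 2.0.
# The exposure prevalence and sample sizes below are illustrative assumptions.
from statsmodels.stats.proportion import proportion_effectsize
from statsmodels.stats.power import NormalIndPower

p0 = 0.20                      # assumed exposure prevalence among controls
odds_ratio = 2.0               # smallest effect the study aims to detect
p1 = odds_ratio * p0 / (1 + p0 * (odds_ratio - 1))   # implied prevalence among cases

n_cases = n_controls = 250     # hypothetical sample sizes
effect = proportion_effectsize(p1, p0)               # Cohen's h for the two proportions

power = NormalIndPower().power(effect_size=effect, nobs1=n_cases, alpha=0.05,
                               ratio=n_controls / n_cases, alternative="two-sided")
print(f"Power to detect OR = {odds_ratio} with {n_cases} cases and "
      f"{n_controls} controls: {power:.2f}")
```

A “Null Results in Brief” submission would then report this value (about 0.9 under these assumptions) alongside the null finding, so readers can judge whether the absence of an effect is informative.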
Publication bias is a preventable problem. As cancer epidemiology includes more studies of germ-line and somatic genetics and, in turn, has an even greater impact on cancer prevention, systematic reviews and meta-analyses will become even more necessary. These are important tools that have advanced clinical medicine. The interpretation of tests and the use of chemopreventive agents must be based on the aggregate of all studies. It would be unfortunate if our conclusions and future research were based only on a biased subset of studies rather than on all completed studies.
Acknowledgments
I thank Drs. John Potter, Fred Kadlubar, Christine Ambrosone, and Jonine Bernstein for input regarding this editorial.