We read Fonseca-Nunes and colleagues' recent systematic review and meta-analysis of iron and cancer risk in the January 2014 issue (1). Although the subject is timely and of great public health interest, the authors have not synthesized the available research into a coherent body of evidence that informs practice. For instance, the review included only cohort and case–control studies without adequately explaining why other study designs were excluded, even though experimental studies often have the highest level of methodologic rigor (2).
We were also concerned that the review used only a single database, PubMed. Searching a single source can reduce sensitivity to as low as 66% (3). A recent comparison of PubMed and Google Scholar found that Google Scholar's retrieval rate was double that of PubMed with similar precision (PubMed, 11%; Google Scholar, 22%; P < 0.001; ref. 4). When we searched EMBASE using the authors' search strategy, we identified at least eight additional prospective studies that might have been eligible for the review (citations available upon request).
In addition, the authors did not address an important source of error in meta-analysis: the inclusion of studies with substantial clinical and methodologic heterogeneity. Once heterogeneity was identified and statistically confirmed, the researchers should have conducted post hoc subgroup and sensitivity analyses to explore its causes (5). The absence of such analyses raises serious questions about the review's conclusions.
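Between-study heterogeneity is conventionally quantified with Cochran's Q and the I² statistic; a common rule of thumb treats I² above roughly 50% as substantial and as grounds for the subgroup and sensitivity analyses described above. A minimal sketch of the calculation under fixed-effect (inverse-variance) pooling follows; the study effect sizes and variances are hypothetical illustrations, not data from the review under discussion:

```python
def heterogeneity(effects, variances):
    """Return Cochran's Q and the I^2 statistic (as a percentage)
    for fixed-effect (inverse-variance) pooling of study effects."""
    weights = [1.0 / v for v in variances]
    pooled = sum(w * e for w, e in zip(weights, effects)) / sum(weights)
    # Q: weighted sum of squared deviations from the pooled estimate
    q = sum(w * (e - pooled) ** 2 for w, e in zip(weights, effects))
    df = len(effects) - 1
    # I^2: proportion of total variation attributable to heterogeneity
    i2 = max(0.0, (q - df) / q) * 100.0 if q > 0 else 0.0
    return q, i2

# Hypothetical log relative risks and variances from three studies:
q, i2 = heterogeneity([0.0, 0.5, 1.0], [0.01, 0.01, 0.01])
```

In this invented example Q = 50.0 on 2 degrees of freedom and I² = 96%, a level of inconsistency that would clearly warrant the exploratory analyses the authors omitted.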
Finally, when we applied the checklist for Meta-analysis Of Observational Studies in Epidemiology (MOOSE; ref. 2) to the review, we found that 18 of its 33 elements were not reported in sufficient detail (checklist available upon request). Unreported items included the search strategy and the assessment of study quality. Without this information, readers cannot judge the review's robustness or its applicability to the target audience.
Clinical decision making has benefited enormously from the availability of high-quality systematic reviews. Unfortunately, as this article illustrates, too few reviewers select and organize data in a way that assists clinicians or researchers. Following guidelines from organizations such as the Cochrane and Campbell Collaborations would have produced a more coherent review and sounder conclusions. Among their most important recommendations are methods for assessing study quality and for providing decision makers with enough information to judge the applicability and generalizability of the results.
See the Response, p. 1436
Disclosure of Potential Conflicts of Interest
No potential conflicts of interest were disclosed.