Phenotype and genotype markers of genetic susceptibility are of increasing interest in case-control studies of cancer. It is well established that less-than-perfect assay sensitivity and specificity bias the odds ratio, and that the magnitude of this bias varies with risk factor prevalence. Consequently, the observed variation in odds ratios among studies of genetic markers and cancer risk may be real, or may be attributable, in part, to differences in assay accuracy or in risk factor prevalence (e.g., prevalence differences between racial groups). The latter is a particular concern when the prevalence of the "at risk" polymorphism in one or more populations is either very high (e.g., > 85%) or very low (e.g., < 15%). For example, even very high sensitivity (e.g., 98%) can substantially bias the odds ratio when the risk factor prevalence is high. Under some prevalence conditions, however, assays of only moderate accuracy suffice and introduce minimal bias. Understanding misclassification in the context of marker prevalence may help to explain disparate findings in the literature and should assist investigators in selecting markers that are appropriate for future studies.
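The prevalence dependence described above can be illustrated with the standard closed-form expression for the expected odds ratio under nondifferential misclassification. The sketch below is illustrative only: the numeric inputs (98% sensitivity, perfect specificity, a true odds ratio of 2.0, and the two prevalence values) are assumptions chosen to match the abstract's prevalence thresholds, not values drawn from the study itself.

```python
def observed_or(p_ctrl, true_or, se, sp):
    """Expected observed odds ratio when true exposure status is
    misclassified nondifferentially with sensitivity `se` and
    specificity `sp`, given true exposure prevalence `p_ctrl` in
    controls and true odds ratio `true_or`."""
    odds_ctrl = p_ctrl / (1 - p_ctrl)
    # True exposure prevalence among cases implied by the true OR
    p_case = true_or * odds_ctrl / (1 + true_or * odds_ctrl)
    # Apparent exposure prevalence after misclassification
    obs_ctrl = se * p_ctrl + (1 - sp) * (1 - p_ctrl)
    obs_case = se * p_case + (1 - sp) * (1 - p_case)
    return (obs_case / (1 - obs_case)) / (obs_ctrl / (1 - obs_ctrl))

# High-prevalence marker (85%): 98% sensitivity alone attenuates
# a true OR of 2.0 to roughly 1.82.
print(round(observed_or(0.85, 2.0, 0.98, 1.0), 2))  # -> 1.82

# Low-prevalence marker (10%): the same assay leaves the OR
# essentially unbiased (about 2.0).
print(round(observed_or(0.10, 2.0, 0.98, 1.0), 2))  # -> 2.0
```

With these assumed inputs, the identical assay biases the odds ratio by roughly 9% at 85% prevalence but by well under 1% at 10% prevalence, which is the pattern the abstract describes.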
