Traumatic lesions on human skeletal remains are widely used to reconstruct past accidents or violent encounters and to compare trauma prevalence across samples over time and space. However, uncertainty in trauma prevalence estimates grows as skeletal completeness decreases, because evidence of once-present trauma may have been lost along with the missing bone. To account for this bias, samples are typically restricted to skeletal remains meeting a predefined minimum completeness threshold, yet the effect of this common practice on the resulting estimates remains unexplored. Here, we test the performance of the conventional frequency approach, which considers only specimens with ≥ 75% completeness, against a recent alternative based on generalized linear models (GLMs) that integrates specimen completeness as a covariate. Using a simulation framework grounded in empirical forensic, clinical, and archaeological data, we evaluate how closely frequency- and GLM-based estimates conform to the known trauma prevalence of once-complete cranial samples after introducing increasing levels of missing values. We show that GLM-based estimates were consistently more precise than frequencies across all levels of incompleteness and regardless of sample size. Unlike GLMs, frequencies increasingly produced incorrect relative patterns between samples and, particularly in smaller samples, occasionally failed to produce estimates at all as incompleteness increased. Consequently, we generally recommend GLMs and their extensions over frequencies, although neither approach is fully reliable when applied to largely incomplete samples.
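
To make the contrast between the two estimators concrete, the sketch below (Python, using statsmodels) simulates a cranial sample with known trauma prevalence, degrades specimen completeness, and then derives both a threshold-based frequency estimate and a GLM-based estimate that treats completeness as a covariate. The missingness model (detection probability proportional to completeness), the uniform completeness distribution, and all parameter values are illustrative assumptions for this sketch, not the simulation framework or model specification used in the study.

```python
"""Illustrative sketch only: frequency vs. GLM-based prevalence estimation
under simulated incompleteness. All modelling choices are assumptions."""
import numpy as np
import pandas as pd
import statsmodels.api as sm
import statsmodels.formula.api as smf

rng = np.random.default_rng(42)

def simulate_sample(n, true_prevalence):
    """Simulate n crania with known trauma status, then degrade completeness."""
    trauma_present = rng.random(n) < true_prevalence      # ground-truth lesions
    completeness = rng.uniform(0.2, 1.0, n)               # preserved fraction per specimen
    # Assumed missingness model: a lesion is only observable if the affected
    # region survives, here with probability equal to specimen completeness.
    trauma_observed = trauma_present & (rng.random(n) < completeness)
    return pd.DataFrame({"completeness": completeness,
                         "trauma_observed": trauma_observed.astype(int)})

def frequency_estimate(df, threshold=0.75):
    """Conventional approach: crude frequency among specimens >= 75% complete."""
    subset = df[df["completeness"] >= threshold]
    return subset["trauma_observed"].mean() if len(subset) else np.nan

def glm_estimate(df):
    """GLM approach (one possible specification): logistic regression of the
    observed-trauma indicator on completeness; prevalence is read off as the
    predicted probability for a fully complete specimen (completeness = 1)."""
    fit = smf.glm("trauma_observed ~ completeness", data=df,
                  family=sm.families.Binomial()).fit()
    return float(np.asarray(fit.predict(pd.DataFrame({"completeness": [1.0]})))[0])

df = simulate_sample(n=300, true_prevalence=0.30)
print("true prevalence:        0.300")
print(f"frequency (>= 75%):     {frequency_estimate(df):.3f}")
print(f"GLM at completeness=1:  {glm_estimate(df):.3f}")
```

Rerunning the sketch over many replicates and shrinking the sample size reproduces the qualitative behaviour described above: the thresholded frequency loses specimens and becomes unstable (or undefined when no specimen passes the cut-off), whereas the GLM uses all specimens and extrapolates to the complete-specimen case.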