The use of molybdenum as a quantitative paleo-atmosphere redox sensor is predicated on the assumption that Mo is hosted in sulfides in the upper continental crust (UCC). This assumption is tested here by determining the mineralogical hosts of Mo in typical Archean, Proterozoic, and Phanerozoic upper crustal igneous rocks, spanning a compositional range from basalt to granite. Common igneous sulfides such as pyrite and chalcopyrite contain very little Mo (commonly below detection limits of around 10 ng/g) and are not a significant crustal Mo host. By contrast, volcanic glass and Ti-bearing phases such as titanite, ilmenite, magnetite, and rutile contain significantly higher Mo concentrations (e.g., up to 40 µg/g in titanite) and can account for the whole-rock Mo budget in most rocks. However, mass balance between whole-rock and mineral data is not achieved in 4 out of 10 granites analyzed with in situ methods, where Mo may be hosted in undetected trace molybdenite. Significant Mo depletion (i.e., UCC-normalized Mo/Ce < 1) occurs in nearly every granitic rock analyzed here, but not in oceanic basalts or their differentiates (Greaney et al., 2017; Jenner and O'Neill, 2012). On average, granites are missing ~60% of their expected Mo contents. There are two possible reasons for this: (1) Mo partitions into an aqueous magmatic vapor/fluid phase that is expelled from cooling plutons, and/or (2) Mo partitions into titaniferous phases during partial melting and fractional crystallization of an evolving magma. The first scenario is likely given the high solubility of oxidized Mo. However, correlations between Mo/Ce and Nb/La in several plutonic suites suggest that fractionating phases such as rutile or Fe-Ti oxides may sequester Mo in lower crustal rocks or in subducting slabs in arc settings.
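As a brief formal sketch of the depletion metric invoked above (Ce serves as the normalizing element on the assumption that it behaves similarly to Mo during mantle melting; the notation is illustrative and not quoted from the study):

\[
\left(\frac{\mathrm{Mo}}{\mathrm{Ce}}\right)_{\mathrm{N}}
= \frac{(\mathrm{Mo}/\mathrm{Ce})_{\mathrm{sample}}}{(\mathrm{Mo}/\mathrm{Ce})_{\mathrm{UCC}}},
\qquad
\text{fractional Mo deficit} = 1 - \left(\frac{\mathrm{Mo}}{\mathrm{Ce}}\right)_{\mathrm{N}}.
\]

A value of \((\mathrm{Mo}/\mathrm{Ce})_{\mathrm{N}} < 1\) indicates Mo depletion relative to the upper continental crust; an average \((\mathrm{Mo}/\mathrm{Ce})_{\mathrm{N}}\) of roughly 0.4 would correspond to the granites missing about 60% of their expected Mo.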