Computed tomography (CT) is a widely utilised imaging technique in both clinical and industrial applications. CT scan results are presented as a volume of linear attenuation coefficients, which depend intricately on the scan parameters and on the sample's geometry and material composition. Accurately mapping these coefficients to specific materials is therefore a complex task. Traditionally, material decomposition in CT has relied on classical algorithms using handcrafted features grounded in X-ray physics. However, there is a rising trend towards data-driven approaches, particularly deep learning, which promise improvements in accuracy and efficiency. This survey examines the transition from classical to data-driven approaches in material-resolving CT, drawing on a comprehensive corpus of literature identified through a detailed and reproducible Scopus search. Our analysis addresses several key research questions: where training datasets originate and how they are generated, which models and architectures are employed, the extent to which deep learning methods reduce the need for domain-specific expertise, and the hardware required to train these models. We discuss the implications of these findings for integrating deep learning into CT practice and for the diminishing reliance on extensive domain knowledge. In conclusion, this survey highlights a significant shift towards deep learning in material-resolving CT and discusses the challenges and opportunities it presents. The transition suggests a future in which data-driven approaches may dominate, offering enhanced precision and robustness while potentially transforming the role of domain experts in the field.