The last decades have seen the rapid development of advanced machine learning (ML) models, with ML considered a subarea of Artificial Intelligence (AI). Massive computing power and growing data availability have enabled the training of deep artificial neural networks, which learn by finding task-relevant patterns in data. The size and complexity of these deep learning models have grown over the years in pursuit of predictive performance. However, such black boxes prevent users from assessing whether the learned behaviour is in line with expectations and intentions. We argue in this thesis that the sole focus on predictive performance is an unsustainable trajectory, as a model can make the right predictions for the wrong reasons. The research field of Explainable AI (XAI) addresses the black-box nature of ML models by generating explanations that present (aspects of) a model's behaviour in human-understandable terms. This thesis shows that explainability can be used to open up the whole machine learning pipeline: from understanding the underlying data, to assessing the quality of explanations, to designing models that are interpretable by design. As such, we support the transition from oversight to insight.

With in-model explainability, such as our part-prototype models from Part II, models can be 're-educated' with our desired norms, values, and reasoning. In this way, artificial intelligence can be adapted to and complemented with human intelligence. Enabling people to detect and correct undesired model behaviour contributes to an effective, but also reliable and responsible, use of AI.