Background: Explainable artificial intelligence (XAI) emerged to improve the transparency of machine learning models and to increase understanding of how models arrive at their decisions and actions. It helps present complex models in a form that is more digestible from a human perspective. However, XAI is still at an early stage of development and must be used carefully in sensitive domains, including paediatrics, where misuse might have adverse consequences.

Objective: This commentary discusses concerns and challenges related to the implementation and interpretation of XAI methods, with the aim of raising awareness of the main concerns regarding their adoption in paediatrics.

Methods: A comprehensive literature review was undertaken to explore the challenges of adopting XAI in paediatrics.

Results: Although XAI offers several favourable outcomes, its implementation in paediatrics is prone to challenges including generalizability, trustworthiness, causality and intervention, and XAI evaluation.

Conclusion: Paediatrics is a highly sensitive domain in which the consequences of misinterpreting AI outcomes can be significant. XAI should be adopted carefully, with a focus on evaluating its outcomes primarily by keeping paediatricians in the loop, enriching the pipeline by injecting domain knowledge, and promoting a cross-fertilization perspective aimed at filling the gaps that still prevent its adoption.