Background
Convolutional neural networks (CNNs) have achieved remarkable success in medical image analysis. However, unlike some general-domain tasks where model accuracy is paramount, medical applications demand both accuracy and explainability due to the high stakes affecting patients' lives. Based on model explanations, clinicians can evaluate the diagnostic decisions suggested by a CNN. Nevertheless, prior explainable artificial intelligence methods treat medical image tasks as general vision tasks, following end-to-end paradigms to generate explanations and frequently overlooking crucial clinical domain knowledge.

Methods
We propose a plug-and-play module that explicitly integrates anatomic boundary information into the explanation process for CNN-based thoracopathy classifiers. To generate the anatomic boundary of the lung parenchyma, we utilize a lung segmentation model developed on external public datasets and deploy it on the unseen target dataset to constrain model explanations within the lung parenchyma for the clinical task of thoracopathy classification.

Results
Assessed by the intersection over union (IoU) and Dice similarity coefficient (DSC) between model-extracted explanations and expert-annotated lesion areas, our method consistently outperformed the baseline devoid of clinical domain knowledge in 71 out of 72 scenarios, encompassing 3 CNN architectures (VGG-11, ResNet-18, and AlexNet), 2 classification settings (binary and multi-label), 3 explanation methods (Saliency Map, Grad-CAM, and Integrated Gradients), and 4 co-occurring thoracic diseases (Atelectasis, Fracture, Mass, and Pneumothorax).

Conclusions
We underscore the effectiveness of leveraging radiology knowledge in improving model explanations for CNNs and envisage that it could inspire future efforts to integrate clinical domain knowledge into medical image analysis.
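To illustrate the plug-and-play idea described in the Methods, the following minimal Python sketch restricts an explanation heatmap to the lung parenchyma using a predicted segmentation mask. This is not the paper's implementation; the function name `constrain_explanation` and the array shapes are illustrative assumptions, and the masking is shown as a simple elementwise product.

```python
import numpy as np

def constrain_explanation(heatmap: np.ndarray, lung_mask: np.ndarray) -> np.ndarray:
    """Zero out attributions outside the lung parenchyma (hypothetical sketch).

    heatmap:   (H, W) attribution map from any explanation method
               (e.g., Saliency Map, Grad-CAM, Integrated Gradients).
    lung_mask: (H, W) binary mask from an external lung segmentation model,
               1 inside the lung parenchyma and 0 elsewhere.
    """
    assert heatmap.shape == lung_mask.shape, "mask must match heatmap resolution"
    return heatmap * lung_mask

# Usage with stand-in data (a real pipeline would use model outputs instead):
cam = np.random.rand(224, 224)                          # placeholder attribution map
mask = np.zeros((224, 224)); mask[40:200, 30:190] = 1   # placeholder lung mask
constrained_cam = constrain_explanation(cam, mask)
```

Because the module only post-processes the explanation, it can be attached to any CNN classifier and explanation method without retraining, which is what makes it plug-and-play.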
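The Results section evaluates explanations by IoU and DSC against expert-annotated lesion areas. A minimal sketch of those two metrics is below; the binarization threshold of 0.5 is an assumption for illustration, as the abstract does not specify how explanation maps are binarized.

```python
import numpy as np

def iou_and_dice(explanation: np.ndarray, lesion: np.ndarray, thresh: float = 0.5):
    """IoU and DSC between a binarized explanation map and a binary lesion mask.

    explanation: (H, W) continuous attribution map, binarized at `thresh` (assumed).
    lesion:      (H, W) expert-annotated binary lesion mask.
    """
    pred = explanation >= thresh
    gt = lesion.astype(bool)
    inter = np.logical_and(pred, gt).sum()
    union = np.logical_or(pred, gt).sum()
    total = pred.sum() + gt.sum()
    iou = inter / union if union else 1.0      # empty-vs-empty counts as perfect
    dice = 2 * inter / total if total else 1.0
    return float(iou), float(dice)
```

Higher IoU and DSC indicate that the explanation overlaps more closely with the clinically relevant lesion, which is the sense in which the proposed method "outperformed the baseline" in 71 of 72 scenarios.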