Artificial intelligence (AI) techniques are increasingly used for structural health monitoring (SHM) of polymer composite structures. To be trustworthy, however, AI models must be reliable, interpretable, and explainable. Explainable artificial intelligence (XAI) is critical to ensuring that an AI model is transparent in its decision‐making process and that its predictions can be trusted and understood by users. Existing SHM methods for polymer composite structures lack this explainability and transparency, which undermines the reliability of their damage detection. Therefore, an interpretable deep learning model based on an explainable vision transformer (X‐ViT) is proposed for the SHM of composites, leading to improved repair planning, maintenance, and performance. The proposed approach was validated on carbon fiber reinforced polymer (CFRP) composites with multiple health states. The X‐ViT model exhibited better damage detection performance than existing popular methods. Moreover, it effectively highlighted the region of interest associated with each predicted health condition through a patch attention aggregation process, emphasizing those regions' influence on the decision‐making process. Consequently, integrating the ViT‐based explainable deep learning model into the SHM of polymer composites provided improved diagnostics along with increased transparency and reliability.

Highlights
Autonomous damage detection of polymer composites using a vision transformer-based deep learning model.
Explainable artificial intelligence by highlighting the region of interest using patch attention.
Comparison with existing state-of-the-art structural health monitoring methods.
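The patch attention aggregation mentioned above can be illustrated with a minimal sketch in the style of attention rollout. This is an assumption about the general mechanism, not the paper's actual X‐ViT implementation: all function names, shapes, and the toy data below are hypothetical.

```python
# Minimal sketch of patch-attention aggregation for an explainable ViT
# (attention-rollout style). Illustrative only; not the paper's X-ViT code.
import numpy as np

def aggregate_patch_attention(attn_layers):
    """Roll attention out across transformer layers.

    attn_layers: list of (tokens, tokens) attention matrices, each
    averaged over heads; token 0 is the [CLS] token, the rest are
    image patches. Returns a per-patch relevance score for the
    [CLS]-based prediction, normalized to (0, 1].
    """
    tokens = attn_layers[0].shape[0]
    rollout = np.eye(tokens)
    for attn in attn_layers:
        # Account for the residual connection, then renormalize rows.
        attn = 0.5 * attn + 0.5 * np.eye(tokens)
        attn = attn / attn.sum(axis=-1, keepdims=True)
        rollout = attn @ rollout
    # Relevance of each patch token to the [CLS] token.
    patch_scores = rollout[0, 1:]
    return patch_scores / patch_scores.max()

# Toy example: 2 layers, 1 [CLS] token + 4 patch tokens.
rng = np.random.default_rng(0)
layers = [rng.random((5, 5)) for _ in range(2)]
layers = [a / a.sum(axis=-1, keepdims=True) for a in layers]
scores = aggregate_patch_attention(layers)  # shape (4,)
```

The resulting per-patch scores can be reshaped onto the image grid and overlaid as a heatmap, which is how such attention-based saliency maps typically highlight the damage-related region of interest.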