Atmospheric gravity waves (GWs) play an important role in driving the dynamics of Earth's middle and upper atmosphere. Here, we provide a brief review of the most common techniques for retrieving gravity wave activity from observations. Retrieval of gravity wave activity is a multi-step process. First, the background fields must be removed, since the retrieved wave activity depends strongly on how this is done. Second, because a broad spectrum of internal waves contributes to atmospheric fluctuations, the contribution of GWs must be extracted carefully. We briefly discuss the strengths, limitations, and applications of each technique, and we outline future research questions aimed at improving these wave extraction methods.
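The two-step process described above (background removal, then perturbation extraction) can be sketched in a minimal form. The synthetic temperature profile, the choice of a cubic polynomial for the background, and the 8 km vertical wavelength below are illustrative assumptions, not values taken from the review; running means and spectral filters are equally common choices for the background estimate.

```python
import numpy as np

# Synthetic vertical temperature profile: a smooth background plus a
# gravity-wave-like sinusoidal perturbation (illustrative values only).
z = np.linspace(20.0, 60.0, 200)             # altitude, km
background = 250.0 - 2.0 * (z - 20.0) + 0.02 * (z - 20.0) ** 2
wave = 1.5 * np.sin(2.0 * np.pi * z / 8.0)   # assumed 8 km vertical wavelength
T = background + wave

# Step 1: estimate the background with a low-order polynomial fit
# (one common choice among several; see text).
coeffs = np.polyfit(z, T, deg=3)
T_bg = np.polyval(coeffs, z)

# Step 2: the residual is the candidate gravity-wave perturbation.
T_prime = T - T_bg

# A simple activity proxy: the perturbation variance, which for
# temperature fluctuations is proportional to wave potential energy.
gw_variance = np.var(T_prime)
print(f"perturbation variance: {gw_variance:.3f} K^2")
```

Because the sinusoid completes several cycles over the fitting window, the cubic fit absorbs the smooth background but leaves the wave largely intact, so the residual variance approximates the imposed wave amplitude squared over two.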
Machine learning (ML) and artificial intelligence (AI) are increasingly used in energy and engineering systems, but these models must be fair, unbiased, and explainable; in other words, confidence in the trustworthiness of AI is crucial. Machine learning techniques such as neural networks have helped predict important parameters and improve model performance, but for these techniques to support decision-making, they must be auditable, accountable, and easy to understand. Explainable AI (XAI) and interpretable machine learning (IML) are crucial for accurate prognostics, such as predicting remaining useful life (RUL) in an intelligent digital twin system, while ensuring that the AI model is transparent in its decision-making and that its predictions can be understood and trusted by users. With explainable, interpretable, and trustworthy AI, intelligent digital twin systems can predict RUL more accurately, leading to better maintenance and repair planning and, ultimately, improved system performance. The objective of this paper is to introduce the ideas of XAI and IML and to justify the important role of ML/AI in the digital twin framework and its components, where XAI is required to better understand the predictions. The paper explains the importance of XAI and IML at both local and global levels to ensure trustworthy ML/AI applications for RUL prediction. RUL prediction serves as the case study for the XAI and IML analyses, leveraging the integrated Python toolbox for interpretable machine learning (PiML).
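One global explanation method of the kind the abstract refers to is permutation importance: shuffle one feature at a time and measure how much the model's error grows. The sketch below illustrates the idea on a toy RUL-style regression; the data set, the linear stand-in model, and all variable names are hypothetical, and this is not PiML's API, only the underlying technique.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical RUL-style data: two sensor features, of which only the
# first actually drives remaining useful life.
X = rng.normal(size=(500, 2))
rul = 100.0 - 20.0 * X[:, 0] + rng.normal(scale=1.0, size=500)

# Fit a simple linear model as a stand-in for any trained predictor.
A = np.column_stack([X, np.ones(len(X))])
w, *_ = np.linalg.lstsq(A, rul, rcond=None)

def predict(M):
    return np.column_stack([M, np.ones(len(M))]) @ w

def permutation_importance(X, y, n_repeats=10):
    """Global importance: mean increase in MSE when a feature is shuffled."""
    base = np.mean((predict(X) - y) ** 2)
    scores = []
    for j in range(X.shape[1]):
        deltas = []
        for _ in range(n_repeats):
            Xp = X.copy()
            Xp[:, j] = rng.permutation(Xp[:, j])  # break feature-target link
            deltas.append(np.mean((predict(Xp) - y) ** 2) - base)
        scores.append(np.mean(deltas))
    return np.array(scores)

importance = permutation_importance(X, rul)
print("feature importances:", importance)
```

In this toy setting the first feature's importance dwarfs the second's, which is the kind of global, model-agnostic evidence that lets an operator check whether an RUL predictor is relying on physically plausible sensors.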