Many maintenance decisions require evaluating alternative solutions against complex maintenance criteria such as cost, repairability, reliability, and availability requirements. Such problems can be formulated as multi-criteria decision-making problems. The relative importance of maintenance criteria is difficult to assess, so a sensitivity analysis becomes a necessity. The sensitivity analysis approach presented in this paper reveals some counter-intuitive results and can considerably enhance the application of decision analysis in complex maintenance management.
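The kind of sensitivity analysis the abstract refers to can be sketched with a simple weighted-sum ranking: perturb one criterion's weight, renormalize, and check whether the best alternative changes (a rank reversal). This is a minimal illustrative sketch with hypothetical scores and weights, not the paper's data or its specific method:

```python
import numpy as np

# Hypothetical scores of three maintenance alternatives on four criteria
# (cost, repairability, reliability, availability); higher is better.
scores = np.array([
    [0.7, 0.5, 0.9, 0.6],  # alternative A
    [0.6, 0.8, 0.7, 0.7],  # alternative B
    [0.9, 0.4, 0.6, 0.8],  # alternative C
])
weights = np.array([0.4, 0.2, 0.2, 0.2])  # assumed criterion weights

def rank(weights, scores):
    """Weighted-sum ranking: return the index of the best alternative."""
    return int(np.argmax(scores @ weights))

best = rank(weights, scores)  # alternative C (index 2) under these weights

# Sensitivity: perturb the cost weight, renormalize, and re-rank.
# A rank reversal shows how sensitive the decision is to that weight.
for delta in (-0.2, 0.0, 0.2):
    w = weights + np.array([delta, 0.0, 0.0, 0.0])
    w = w / w.sum()
    print(f"cost-weight shift {delta:+.1f}: best alternative index {rank(w, scores)}")
```

Here lowering the cost weight by 0.2 flips the winner from C to B, the kind of counter-intuitive reversal that makes sensitivity analysis essential.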
This chapter surveys and analyses visual methods for explainability of Machine Learning (ML) approaches, with a focus on moving from the quasi-explanations that dominate in ML to actual domain-specific explanations supported by granular visuals. The importance of visual and granular methods for increasing the interpretability and validity of ML models has grown in recent years. Visuals have an appeal to human perception that other methods do not. ML interpretation is fundamentally a human activity, not a machine activity; thus, visual methods are more readily interpretable. Visual granularity is a natural route to efficient ML explanation. Understanding complex causal reasoning can be beyond human abilities without "downgrading" it to human perceptual and cognitive limits. The visual exploration of multidimensional data at different levels of granularity for knowledge discovery is a long-standing research focus. While multiple efficient methods for the visual representation of high-dimensional data exist, the loss of interpretable information, occlusion, and clutter continue to be a challenge that leads to quasi-explanations. This chapter starts with the motivation and the definitions of different forms of explainability and how these concepts and information granularity can be integrated in ML. The chapter focuses on a clear distinction between quasi-explanations and actual domain-specific explanations, as well as between potentially explainable and actually explained ML models, distinctions that are critically important for further progress in the ML explainability domain. We discuss the foundations of interpretability, overview visual interpretability, and present several types of methods for visualizing ML models. Next, we present methods of visual discovery of ML models, with a focus on interpretable models, based on the recently introduced concept of General Line Coordinates (GLC).
This family of methods takes the critical step of creating visual explanations that are not merely quasi-explanations but actual domain-specific visual explanations, while the methods themselves remain domain-agnostic. The chapter includes results on theoretical limits on preserving n-D distances in lower dimensions, based on the Johnson-Lindenstrauss lemma, point-to-point and point-to-graph GLC approaches, and real-world case studies. The chapter also covers traditional visual methods for understanding multiple ML models, including deep learning and time series models. We illustrate that many of these methods are quasi-explanations and need further enhancement to become actual domain-specific explanations. The chapter concludes by outlining open problems and current research frontiers.
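The Johnson-Lindenstrauss limit mentioned above can be made concrete: the standard bound gives the minimum target dimension k that preserves all pairwise distances among n points within a factor of (1 ± eps). A minimal sketch of that bound (using the common formulation k ≥ 4 ln(n) / (eps²/2 − eps³/3); the function name is ours, not from the chapter):

```python
import math

def jl_min_dim(n_points, eps):
    """Minimum target dimension k that preserves all pairwise distances
    among n_points within a (1 +/- eps) factor, per the common
    Johnson-Lindenstrauss bound: k >= 4 ln(n) / (eps^2/2 - eps^3/3)."""
    return math.ceil(4 * math.log(n_points) / (eps**2 / 2 - eps**3 / 3))

# Even for modest accuracy the required dimension is in the thousands,
# far above the 2 or 3 dimensions available to a visual display. This is
# why lossless low-dimensional projections of n-D data are impossible in
# general, motivating lossless alternatives such as General Line Coordinates.
print(jl_min_dim(1000, 0.1))
```

For 1000 points at eps = 0.1, the bound is on the order of several thousand dimensions, which quantifies why conventional 2-D projections necessarily lose distance information.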
This paper illustrates how a fuzzy logic approach can be used to formalize terms in the American College of Radiology (ACR) Breast Imaging Lexicon. In current practice, radiologists make a relatively subjective determination for many terms from the lexicon related to breast cancer diagnosis. Lobulation and microlobulation of nodules are two important features in the ACR lexicon. We offer an approach for formalizing the distinction between these features and also formalize the description of intermediate cases between lobulated and microlobulated masses. We show that fuzzy logic can be an effective tool for dealing with this kind of problem. The proposed formalization creates a basis for the next three steps: (i) extended verification with blinded comparison studies, (ii) the automatic extraction of the related primitives from the image, and (iii) the detection of lobulated and microlobulated masses based on these primitives. © 1997 Elsevier Science B.V.
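The fuzzy formalization described above can be sketched with trapezoidal membership functions over a contour primitive. This is an illustrative sketch only: the primitive (undulation depth relative to mass diameter) and all breakpoints below are hypothetical, not the paper's calibrated definitions:

```python
def trapezoid(x, a, b, c, d):
    """Trapezoidal fuzzy membership: rises on [a, b], is 1 on [b, c],
    falls on [c, d], and is 0 outside [a, d]."""
    if x <= a or x >= d:
        return 0.0
    if b <= x <= c:
        return 1.0
    if x < b:
        return (x - a) / (b - a)
    return (d - x) / (d - c)

# Hypothetical contour primitive r: average undulation depth relative to
# mass diameter (small, shallow undulations -> microlobulated; fewer,
# larger undulations -> lobulated). Breakpoints are illustrative.
def mu_microlobulated(r):
    return trapezoid(r, 0.0, 0.02, 0.08, 0.15)

def mu_lobulated(r):
    return trapezoid(r, 0.08, 0.15, 0.4, 0.6)

# An intermediate case belongs partially to both classes, which is how
# fuzzy logic formalizes cases between lobulated and microlobulated:
r = 0.10
print(mu_microlobulated(r), mu_lobulated(r))
```

The overlap of the two membership functions is the point: a crisp threshold would force an arbitrary either/or call on exactly the intermediate cases the paper sets out to describe.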