Background
Despite excellent prediction performance, noninterpretability has undermined the value of applying deep-learning algorithms in clinical practice. To overcome this limitation, the attention mechanism has been introduced to clinical research as an explanatory modeling method. However, the potential limitations of using this attractive method have not been made clear to clinical researchers. Furthermore, introductory material explaining attention mechanisms to this audience has been lacking.
Objective
The aim of this study was to introduce the basic concepts and design approaches of attention mechanisms. In addition, we aimed to empirically assess the potential limitations of current attention mechanisms in terms of prediction and interpretability performance.
Methods
First, the basic concepts and several key considerations regarding attention mechanisms were identified. Second, four approaches to attention mechanisms were suggested according to a two-dimensional framework based on degree of freedom and uncertainty awareness. Third, the prediction performance, probability reliability, concentration of variable importance, consistency of attention results, and generalizability of attention results to conventional statistics were assessed in a diabetes classification modeling setting. Fourth, the potential limitations of attention mechanisms were considered.
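To make the framework above concrete, the following is a minimal, illustrative sketch (not the modeling code used in this study) of a feature-level attention classifier for tabular clinical data, with Monte Carlo dropout shown as one possible way to add uncertainty awareness. All class names, layer sizes, and hyperparameters here are assumptions for illustration only.

```python
# Minimal sketch, assuming a tabular classification setting: a feature-level
# attention classifier with optional Monte Carlo dropout to illustrate
# "uncertainty awareness". Names and hyperparameters are illustrative, not
# the authors' implementation.
import torch
import torch.nn as nn

class AttentionClassifier(nn.Module):
    def __init__(self, n_features: int, hidden: int = 32, dropout: float = 0.2):
        super().__init__()
        # Scores one attention weight per input variable.
        self.attn_scorer = nn.Sequential(
            nn.Linear(n_features, hidden), nn.Tanh(),
            nn.Dropout(dropout),               # kept active at test time for MC dropout
            nn.Linear(hidden, n_features),
        )
        self.classifier = nn.Linear(n_features, 1)

    def forward(self, x):
        attn = torch.softmax(self.attn_scorer(x), dim=-1)  # variable-importance weights
        logit = self.classifier(attn * x)                   # attention-weighted features
        return logit.squeeze(-1), attn

def mc_dropout_attention(model, x, n_samples: int = 50):
    """Approximate attention uncertainty by repeated stochastic forward passes."""
    model.train()  # keep dropout active
    with torch.no_grad():
        attn_samples = torch.stack([model(x)[1] for _ in range(n_samples)])
    return attn_samples.mean(0), attn_samples.std(0)

if __name__ == "__main__":
    x = torch.randn(8, 10)                     # 8 synthetic patients, 10 variables
    model = AttentionClassifier(n_features=10)
    logits, attn = model(x)
    mean_attn, std_attn = mc_dropout_attention(model, x)
    print(logits.shape, mean_attn.shape, std_attn.shape)
```

In this sketch, the softmax attention weights act as per-variable importance scores, and the spread across stochastic forward passes gives a rough measure of how stable those scores are, loosely mirroring the degree-of-freedom and uncertainty-awareness axes described above.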
Results
Prediction performance was very high for all models. Probability reliability was high in models with uncertainty awareness. Variable importance was concentrated in several variables when uncertainty awareness was not considered. The consistency of attention results was high when uncertainty awareness was considered. The generalizability of attention results to conventional statistics was poor regardless of the modeling approach.
Conclusions
The attention mechanism is an attractive technique that could prove very promising in the future. However, it may not yet be advisable to rely on this method to assess variable importance in clinical settings. Therefore, along with theoretical studies enhancing attention mechanisms, more empirical studies investigating their potential limitations should be encouraged.