Background. The modernization of traditional Chinese medicine (TCM) demands systematic data mining of medical records. However, this process is hindered by the fact that many TCM symptoms share the same meaning but have different literal expressions (i.e., TCM synonymous symptoms). This problem can be addressed by using natural language processing algorithms to construct a high-quality TCM symptom normalization model that maps TCM synonymous symptoms to unified literal expressions. Methods. Four types of TCM symptom normalization models based on natural language processing were constructed to identify a high-quality one: (1) a text sequence generation model based on a bidirectional long short-term memory (Bi-LSTM) neural network with an encoder-decoder structure; (2) a text classification model based on a Bi-LSTM neural network and a sigmoid function; (3) a text sequence generation model based on bidirectional encoder representations from transformers (BERT) with the sequence-to-sequence training method of the unified language model (BERT-UniLM); and (4) a text classification model based on BERT and a sigmoid function (BERT-Classification). The performance of the models was compared using four metrics: accuracy, recall, precision, and F1-score. Results. The BERT-Classification model outperformed the Bi-LSTM-based and BERT-UniLM-based models on all four metrics. Conclusions. The BERT-Classification model has superior performance in normalizing expressions of TCM synonymous symptoms.
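The four evaluation metrics named in the abstract (accuracy, precision, recall, F1-score) can be computed per normalization decision as in the minimal sketch below; the function and variable names are illustrative, not taken from the paper.

```python
# Minimal sketch: computing accuracy, precision, recall, and F1-score
# for binary per-symptom normalization decisions (1 = correct mapping
# predicted, 0 = not). Names here are illustrative assumptions.

def classification_metrics(y_true, y_pred):
    """Return (accuracy, precision, recall, f1) for binary label lists."""
    tp = sum(1 for t, p in zip(y_true, y_pred) if t == 1 and p == 1)
    fp = sum(1 for t, p in zip(y_true, y_pred) if t == 0 and p == 1)
    fn = sum(1 for t, p in zip(y_true, y_pred) if t == 1 and p == 0)
    tn = sum(1 for t, p in zip(y_true, y_pred) if t == 0 and p == 0)
    accuracy = (tp + tn) / len(y_true)
    precision = tp / (tp + fp) if (tp + fp) else 0.0
    recall = tp / (tp + fn) if (tp + fn) else 0.0
    f1 = (2 * precision * recall / (precision + recall)
          if (precision + recall) else 0.0)
    return accuracy, precision, recall, f1
```

F1 is the harmonic mean of precision and recall, so a model must balance both to score well, which is why all four metrics are reported together.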
Abstract: Affective computing is an increasingly important outgrowth of artificial intelligence, intended to deal with rich and subjective human communication. Given the complexity of affective expression, discriminative feature extraction and the selection of a corresponding high-performance classifier remain a major challenge. Specific features and classifiers perform differently on different datasets, and there is currently no consensus in the literature that any single expression feature or classifier is best in all cases. Although deep learning algorithms, which learn deep features instead of relying on manual feature construction, have recently appeared in expression recognition research, the limited availability of training samples remains an obstacle to practical application. In this paper, we aim to find an effective solution based on a fusion and association learning strategy with typical manual features and classifiers. Taking these typical features and classifiers in the facial expression area as a basis, we fully analyse their fusion performance. Meanwhile, to emphasize the key attributes of affective computing, we select facial-expression-relevant Action Units (AUs) as basic components. In addition, we employ association rules to mine the relationships between AUs and facial expressions. Based on a comprehensive analysis from different perspectives, we propose a novel facial expression recognition approach that embeds multiple features and multiple classifiers into a stacking framework based on AUs. Extensive experiments on two public datasets show that our proposed multi-layer fusion system based on optimal AU weighting achieves substantial improvements in facial expression recognition over any individual feature/classifier and over several state-of-the-art methods, including a recent deep-learning-based approach.
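The association-rule mining between AUs and facial expressions mentioned above can be illustrated with a minimal support/confidence computation; the sample data and rule below (AU6 + AU12 suggesting happiness, i.e., a Duchenne smile) are illustrative examples, not data from the paper.

```python
# Minimal sketch: support and confidence of an association rule
# {AU set} -> expression label, over annotated samples.
# The samples and AU codes below are illustrative assumptions.

def rule_support_confidence(samples, antecedent_aus, expression):
    """samples: list of (set_of_AUs, expression_label) pairs.
    Returns (support, confidence) of the rule antecedent_aus -> expression."""
    n = len(samples)
    matching = [s for s in samples if antecedent_aus <= s[0]]
    both = [s for s in matching if s[1] == expression]
    support = len(both) / n
    confidence = len(both) / len(matching) if matching else 0.0
    return support, confidence

samples = [
    ({"AU6", "AU12"}, "happiness"),
    ({"AU6", "AU12"}, "happiness"),
    ({"AU4", "AU7"}, "anger"),
    ({"AU6", "AU12", "AU25"}, "happiness"),
    ({"AU12"}, "neutral"),
]
support, confidence = rule_support_confidence(
    samples, {"AU6", "AU12"}, "happiness")
```

Rules with high support and confidence indicate AU combinations that reliably co-occur with an expression, which is the kind of relationship the stacking framework can weight.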