In this work, classification of normal and abnormal human femur bone images is carried out using Support Vector Machine (SVM) and AdaBoost classifiers. The trabecular (soft bone) regions of human femur bone images (N = 44), recorded under standard conditions, are used for the study. The acquired images are subjected to an automatic threshold binarization algorithm to identify the presence of mineralization and trabecular structures in the digitized images. Regions of mechanical strength, namely the primary compressive and primary tensile regions, are delineated from the digitized femur bone images by semi-automated image processing methods. First-order and higher-order statistical parameters are calculated from the intensity values of the delineated regions of interest and from their gray-level co-occurrence matrices, respectively. The most significant parameters are identified using principal component analysis, and the two most significant parameters are used as input to the classifiers. Statistical classification tools, namely SVM and AdaBoost, are employed for the classification. Results show that, for the chosen parameters, the AdaBoost classifier performs better than SVM in terms of sensitivity and specificity for both the primary compressive and primary tensile regions.
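A minimal sketch of a comparable pipeline is shown below, assuming GLCM texture features from scikit-image and the scikit-learn implementations of PCA, SVM, and AdaBoost. The feature set, parameter values, and synthetic region-of-interest data are illustrative assumptions, not the authors' exact method.

```python
# Sketch: first-order + GLCM features -> PCA -> SVM / AdaBoost comparison.
# All data here is synthetic; real ROIs would come from delineated femur regions.
import numpy as np
from skimage.feature import graycomatrix, graycoprops
from sklearn.decomposition import PCA
from sklearn.svm import SVC
from sklearn.ensemble import AdaBoostClassifier
from sklearn.model_selection import train_test_split
from sklearn.metrics import recall_score

def region_features(roi):
    """First-order statistics plus GLCM (higher-order) texture measures for one ROI."""
    glcm = graycomatrix(roi, distances=[1], angles=[0], levels=256,
                        symmetric=True, normed=True)
    return [
        roi.mean(), roi.std(),                      # first-order statistics
        graycoprops(glcm, "contrast")[0, 0],        # higher-order (GLCM) statistics
        graycoprops(glcm, "homogeneity")[0, 0],
        graycoprops(glcm, "energy")[0, 0],
        graycoprops(glcm, "correlation")[0, 0],
    ]

# Placeholder data: 44 random 8-bit "trabecular ROIs" with binary labels.
rng = np.random.default_rng(0)
rois = rng.integers(0, 256, size=(44, 64, 64), dtype=np.uint8)
labels = rng.integers(0, 2, size=44)               # 0 = normal, 1 = abnormal

X = np.array([region_features(r) for r in rois])
X2 = PCA(n_components=2).fit_transform(X)          # keep the two most significant components

X_tr, X_te, y_tr, y_te = train_test_split(X2, labels, test_size=0.3, random_state=0)
for name, clf in [("SVM", SVC(kernel="rbf")),
                  ("AdaBoost", AdaBoostClassifier(n_estimators=50))]:
    y_hat = clf.fit(X_tr, y_tr).predict(X_te)
    sens = recall_score(y_te, y_hat, pos_label=1)   # sensitivity
    spec = recall_score(y_te, y_hat, pos_label=0)   # specificity
    print(f"{name}: sensitivity={sens:.2f}, specificity={spec:.2f}")
```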
Emotion recognition is important in human communication and in achieving complete interaction between humans and machines. In medical applications, emotion recognition is used to assist children with Autism Spectrum Disorder (ASD) in improving their socio-emotional communication, to help doctors diagnose diseases such as depression and dementia, and to help caretakers of older patients monitor their well-being. This paper discusses feature-level fusion of speech and facial expressions for emotions such as neutral, happy, sad, angry, surprise, fearful, and disgust, and explores how best to build deep learning networks to classify the emotions independently and jointly from these two modalities. A VGG model is utilized to extract features from facial images, and spectral features are extracted from speech signals. A feature-level fusion technique is then adopted to fuse the features extracted from the two modalities, and Principal Component Analysis (PCA) is applied to select the most significant features. The proposed method achieved a maximum score of 90% on the training set and 82% on the validation set. The recognition rate of the multimodal system improved greatly compared to the unimodal systems, giving an improvement of 9% over the speech-based system. These results show that the proposed Multimodal Emotion Recognition (MER) system outperforms the unimodal emotion recognition systems.
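The sketch below illustrates feature-level fusion under stated assumptions: a Keras VGG16 backbone stands in for the paper's facial feature extractor, mean MFCCs from librosa stand in for the spectral features, and a simple SVM replaces the paper's deep classifier. The data, dimensions, and classifier choice are placeholders, not the authors' configuration.

```python
# Sketch: extract facial (VGG16) and speech (MFCC) features, fuse by concatenation,
# reduce with PCA, and classify. Synthetic inputs keep the example self-contained.
import numpy as np
import librosa
import tensorflow as tf
from sklearn.decomposition import PCA
from sklearn.svm import SVC

EMOTIONS = ["neutral", "happy", "sad", "angry", "surprise", "fearful", "disgust"]

# Facial branch: VGG16 as a fixed feature extractor (global-average pooled).
vgg = tf.keras.applications.VGG16(weights=None, include_top=False,
                                  pooling="avg", input_shape=(224, 224, 3))

def face_features(image):                       # image: (224, 224, 3) float array
    x = tf.keras.applications.vgg16.preprocess_input(image[np.newaxis])
    return vgg.predict(x, verbose=0)[0]         # 512-dim feature vector

# Speech branch: mean MFCCs as a simple spectral representation.
def speech_features(signal, sr=16000):
    mfcc = librosa.feature.mfcc(y=signal, sr=sr, n_mfcc=13)
    return mfcc.mean(axis=1)                    # 13-dim feature vector

# Placeholder samples: one random face image and 1-second speech clip per emotion.
rng = np.random.default_rng(0)
faces = rng.random((len(EMOTIONS), 224, 224, 3)).astype("float32") * 255
clips = rng.standard_normal((len(EMOTIONS), 16000)).astype("float32")
labels = np.arange(len(EMOTIONS))

# Feature-level fusion: concatenate the two modality vectors, then reduce with PCA.
fused = np.array([np.concatenate([face_features(f), speech_features(c)])
                  for f, c in zip(faces, clips)])
reduced = PCA(n_components=5).fit_transform(fused)

clf = SVC(kernel="linear").fit(reduced, labels)  # stand-in for the deep classifier
print("Predicted:", [EMOTIONS[i] for i in clf.predict(reduced)])
```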