COVID-19 has spread rapidly around the world due to the lack of a suitable vaccine; therefore, early prediction of infection is extremely important for controlling the disease by quarantining infected people and giving them medical attention to limit its spread. This work proposes a model for predicting COVID-19 infection using feature selection techniques. The proposed model consists of three stages: preprocessing, feature selection, and classification. This work uses a dataset of 8571 records with forty features, covering patients from different countries. Two feature selection techniques are used to select the features that most affect the model's predictions: Recursive Feature Elimination (RFE), a wrapper method, and the Extra Trees Classifier (ETC), an embedded method. Two classification methods are applied to the feature vectors: Naïve Bayes and the Restricted Boltzmann Machine (RBM). The results were 56.181% and 97.906%, respectively, when classifying with all features, and 66.329% and 99.924%, respectively, when classifying with the ten best features chosen by the feature selection techniques.
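The abstract does not include the authors' implementation, so the following is a minimal sketch of the three-stage pipeline as it could be assembled with scikit-learn, under stated assumptions: the data here is a random placeholder matching the dataset's 8571 x 40 shape, RFE wraps a logistic regression (the abstract does not name RFE's base estimator), and because scikit-learn's BernoulliRBM is a feature learner rather than a classifier, it is paired with a logistic readout, which is a common but assumed setup.

```python
import numpy as np
from sklearn.preprocessing import StandardScaler, MinMaxScaler
from sklearn.feature_selection import RFE, SelectFromModel
from sklearn.ensemble import ExtraTreesClassifier
from sklearn.linear_model import LogisticRegression
from sklearn.naive_bayes import GaussianNB
from sklearn.neural_network import BernoulliRBM
from sklearn.pipeline import Pipeline
from sklearn.model_selection import train_test_split

# Placeholder data mirroring the paper's dataset shape: 8571 records, 40 features.
rng = np.random.default_rng(0)
X = rng.random((8571, 40))
y = rng.integers(0, 2, 8571)
X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.2, random_state=0)

# Wrapper selection: RFE keeps the ten best features (base estimator assumed).
rfe = RFE(LogisticRegression(max_iter=1000), n_features_to_select=10)

# Embedded selection: Extra Trees importances, top ten kept
# (threshold=-inf so selection is driven by max_features alone).
etc = SelectFromModel(ExtraTreesClassifier(n_estimators=100, random_state=0),
                      threshold=-np.inf, max_features=10)

# Classifier 1: Gaussian Naive Bayes on the RFE-selected features.
nb_pipe = Pipeline([("scale", StandardScaler()), ("select", rfe), ("clf", GaussianNB())])
nb_pipe.fit(X_train, y_train)
print("NB accuracy:", nb_pipe.score(X_test, y_test))

# Classifier 2: RBM feature learning on the ETC-selected features,
# followed by a logistic readout (BernoulliRBM alone does not classify).
rbm_pipe = Pipeline([
    ("scale", MinMaxScaler()),  # BernoulliRBM expects inputs in [0, 1]
    ("select", etc),
    ("rbm", BernoulliRBM(n_components=64, learning_rate=0.05, n_iter=20, random_state=0)),
    ("clf", LogisticRegression(max_iter=1000)),
])
rbm_pipe.fit(X_train, y_train)
print("RBM accuracy:", rbm_pipe.score(X_test, y_test))
```

With real labels in place of the placeholders, comparing each pipeline's score with and without the "select" step reproduces the all-features versus ten-features comparison the abstract reports.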
Emotion recognition has important applications in human-computer interaction. Various sources, such as facial expressions and speech, have been considered for interpreting human emotions. The aim of this paper is to develop an emotion recognition system from facial expressions and speech using a hybrid of machine-learning algorithms in order to enhance the overall performance of human-computer communication. For facial emotion recognition, a deep convolutional neural network is used for feature extraction and classification, whereas for speech emotion recognition, the zero-crossing rate, mean, standard deviation, and mel-frequency cepstral coefficient (MFCC) features are extracted. The extracted features are then fed to a random forest classifier. In addition, a bi-modal system for recognising emotions from facial expressions and speech signals is presented. This is important since one modality may not provide sufficient information or may be unavailable for reasons beyond the operator's control. To achieve this, decision-level fusion is performed using a novel scheme that weights each modality according to the proportions of the facial and speech impressions. The results show an average accuracy of 93.22%.
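As a rough illustration of the speech branch and the fusion step, the sketch below extracts the named features (zero-crossing rate, mean, standard deviation, MFCCs) with librosa and fuses per-class probabilities with a weighted sum. This is an assumption-laden reconstruction: the paper's novel weighting scheme is not specified in the abstract, so the fixed weight `w_face` here is an illustrative placeholder, as are the function names and the 7-class probability vectors.

```python
import numpy as np
import librosa
from sklearn.ensemble import RandomForestClassifier

def speech_features(path):
    """Feature vector from one audio clip: ZCR, mean, std, and 13 MFCC means."""
    y, sr = librosa.load(path, sr=None)
    zcr = librosa.feature.zero_crossing_rate(y).mean()
    mfcc = librosa.feature.mfcc(y=y, sr=sr, n_mfcc=13).mean(axis=1)
    return np.concatenate([[zcr, y.mean(), y.std()], mfcc])

# Speech branch: a random forest over the extracted feature vectors.
# X_speech and y_labels would come from a labelled emotion corpus, e.g.:
#   rf = RandomForestClassifier(n_estimators=200).fit(X_speech, y_labels)
#   speech_probs = rf.predict_proba(speech_features(clip).reshape(1, -1))

def fuse(face_probs, speech_probs, w_face=0.6):
    """Decision-level fusion: weighted sum of per-class probabilities.
    w_face is a placeholder; the paper derives its weights from the
    proportions of facial and speech impressions."""
    return w_face * face_probs + (1 - w_face) * speech_probs

# Example with hypothetical 7-class probability vectors from each modality.
face_p = np.array([0.10, 0.60, 0.05, 0.05, 0.10, 0.05, 0.05])
speech_p = np.array([0.20, 0.30, 0.10, 0.10, 0.20, 0.05, 0.05])
print("fused class:", int(np.argmax(fuse(face_p, speech_p))))
```

Fusing at the decision level, as here, lets the system fall back gracefully when one modality is missing: the available modality's probabilities can simply be used alone.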