Emotion detection in natural language is very effective for analyzing a user's mood about a product, news item, topic, and so on. However, extracting important features from a burst of raw social text is challenging, as emotions are subjective with fuzzy, limited boundaries, and these subjective features can be conveyed through varied perceptions and terminologies. In this article, we propose an IoT-based framework for emotion classification of tweets using a hybrid approach that combines Term Frequency-Inverse Document Frequency (TF-IDF) and a deep learning model. First, raw tweets are filtered using tokenization to capture useful features without noisy information. Second, the TF-IDF statistical technique is applied to estimate the importance of features both locally and globally. Third, the Adaptive Synthetic (ADASYN) class-balancing technique is applied to resolve class imbalance among the emotion classes. Finally, a deep learning model is designed to predict the emotions, with dynamic epoch curves. The proposed methodology is evaluated on two different Twitter emotion datasets, and the dynamic epoch curves illustrate the behavior of the training and test data points. Experiments show that this methodology outperforms popular state-of-the-art methods.
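The local/global TF-IDF weighting described in the second step can be sketched in a few lines. The following is a minimal pure-Python illustration assuming simple whitespace tokenization; the `tfidf` helper and the toy tweets are illustrative, not the authors' implementation:

```python
import math
from collections import Counter

def tfidf(docs):
    """Compute TF-IDF weights for a list of tokenized documents.
    TF is a term's relative frequency within one document (local
    importance); IDF is log(N / df), down-weighting terms that occur
    in many documents (global importance)."""
    n = len(docs)
    df = Counter()                     # document frequency per term
    for doc in docs:
        df.update(set(doc))
    weights = []
    for doc in docs:
        tf = Counter(doc)
        total = len(doc)
        weights.append({t: (c / total) * math.log(n / df[t])
                        for t, c in tf.items()})
    return weights

tweets = [
    "i love this phone".split(),
    "i hate this delay".split(),
]
w = tfidf(tweets)
# terms shared by both tweets ("i", "this") get weight 0;
# tweet-specific terms ("love", "hate") get positive weight
```

Terms that appear in every document receive an IDF of log(1) = 0, which is exactly the "global" discounting the abstract refers to.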
Emotion detection in social media is an effective way to measure people's mood about a specific topic, news item, or product. It has a wide range of applications, including identifying psychological conditions such as anxiety or depression in users. However, it is challenging to distinguish useful emotion features in a large text corpus because emotions are subjective, with fuzzy, limited boundaries that may be expressed in different terminologies and perceptions. To tackle this issue, this paper presents a hybrid deep learning approach based on TensorFlow with Keras for emotion detection on a large, imbalanced tweet dataset. First, preprocessing steps extract useful features from raw tweets and remove noisy data. Second, the entropy weighting method computes the importance of each feature. Third, a class balancer is applied to balance the classes. Fourth, Principal Component Analysis (PCA) transforms highly correlated features into normalized form. Finally, a TensorFlow-based deep learning model with Keras is proposed to learn high-quality features for emotion classification. The proposed methodology is evaluated on a dataset of 1,600,000 tweets obtained from Kaggle and compared with other state-of-the-art techniques at different training ratios. The results show that the proposed approach outperforms the other techniques.
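The PCA step, which maps highly correlated features onto uncorrelated axes, can be sketched as follows. This is a minimal NumPy illustration on synthetic data; the `pca_transform` helper and the two-feature toy matrix are illustrative assumptions, not the paper's code:

```python
import numpy as np

def pca_transform(X, k):
    """Project features onto the top-k principal components.
    Centering removes the mean; the eigenvectors of the covariance
    matrix give uncorrelated axes, ordered by explained variance."""
    Xc = X - X.mean(axis=0)
    cov = np.cov(Xc, rowvar=False)
    vals, vecs = np.linalg.eigh(cov)     # eigenvalues in ascending order
    order = np.argsort(vals)[::-1][:k]   # take the top-k components
    return Xc @ vecs[:, order]

rng = np.random.default_rng(0)
a = rng.normal(size=100)
# two almost perfectly correlated features, as PCA is meant to handle
X = np.column_stack([a, 2 * a + 0.01 * rng.normal(size=100)])
Z = pca_transform(X, 2)
# Z's columns are uncorrelated; nearly all variance sits in column 0
```

After the transform, the covariance matrix of the projected data is diagonal, which is what makes the components suitable inputs for the downstream classifier.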
Bugs in a program increase the time and money required to finish a software project and distribute the final product. Software processes such as defect monitoring and repair can be both costly and time-consuming. Because it is difficult to locate and correct every defect in a product, minimising the negative effect of those defects is essential to delivering a result of better overall quality. The process of identifying problematic sections of software code is known as software defect prediction. This paper presents an optimized machine learning-enabled model for software fault prediction to improve software quality. The PC1 dataset is fed as input to this model. Important features are selected with the ant colony optimization (ACO) technique, and the selected features are fed into a support vector machine (SVM). Training and testing of the SVM are performed on the PC1 dataset. The performance of the ACO-SVM (Ant Colony Optimization Support Vector Machine) is compared with SVM, Naive Bayes, and K-Nearest Neighbour classifiers, and the ACO-based SVM performs better for software fault classification and prediction.
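The ACO feature-selection loop can be sketched in highly simplified form. The `aco_select` helper, its parameters, and the toy fitness below are hypothetical illustrations; in the full method the fitness would presumably come from the SVM's performance on the candidate feature subset:

```python
import random

def aco_select(n_features, evaluate, ants=10, iters=20, rho=0.1, seed=0):
    """Simplified ant colony optimization for feature selection.
    Each ant samples a feature subset with probability proportional
    to pheromone; pheromone evaporates at rate rho and is reinforced
    on the features of the best subset found so far."""
    rng = random.Random(seed)
    pher = [1.0] * n_features
    best, best_fit = None, float("-inf")
    for _ in range(iters):
        for _ in range(ants):
            total = sum(pher)
            subset = [i for i in range(n_features)
                      if rng.random() < 0.5 * n_features * pher[i] / total]
            if not subset:
                continue
            fit = evaluate(subset)
            if fit > best_fit:
                best, best_fit = subset, fit
        pher = [p * (1 - rho) for p in pher]   # evaporation
        if best:
            for i in best:
                pher[i] += rho                 # reinforce best subset
    return best

# toy fitness: features 0 and 2 are "informative", extras are penalized
score = lambda s: sum(1 for i in s if i in (0, 2)) - 0.1 * len(s)
sel = aco_select(5, score)
```

The evaporation/reinforcement cycle concentrates pheromone on features that repeatedly appear in high-fitness subsets, which is the mechanism the abstract relies on for selecting important features.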
According to the World Health Organization, heart disease is the leading cause of death worldwide. The overall death rate could be reduced if cardiovascular disease were detected at an earlier stage, when there is a greater possibility of successful treatment and management under the guidance of a physician. Recent advances in areas such as the Internet of Things, cloud storage, and machine learning have given rise to renewed optimism about the capacity of technology to bring about a paradigm change on a global scale. The use of bedside sensors to capture vital signs has become increasingly commonplace in recent years, yet patients are still monitored manually through a bedside monitor, with no automatic data processing taking place. The findings of an investigation of cardiovascular disease carried out across a large number of hospitals have been used to develop a protocol for the early, automated, and intelligent identification of heart disorders. The PASCAL dataset was prepared by collecting data from different hospitals using a digital stethoscope; it is publicly available and is used by many researchers around the world in experimental work. The proposed research strategy includes three steps. In the first, the data collection phase, data are collected with biosensors and IoT devices through wireless sensor networks. In the second, all healthcare information is uploaded to the cloud for analysis. The final step trains the model on data taken from existing medical records. Deep learning strategies are used to classify the sound produced by the heart; a deep CNN algorithm performs sound feature extraction and classification.
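The core operation a deep CNN stacks to turn a raw heart-sound waveform into features is the 1-D convolution. The following is a minimal NumPy sketch of one such layer; the `conv1d` helper, the kernel, and the toy signal are illustrative, not the paper's architecture:

```python
import numpy as np

def conv1d(signal, kernel, stride=1):
    """Valid 1-D convolution followed by ReLU, the basic building
    block a deep CNN stacks for audio feature extraction."""
    out_len = (len(signal) - len(kernel)) // stride + 1
    out = np.array([signal[i * stride : i * stride + len(kernel)] @ kernel
                    for i in range(out_len)])
    return np.maximum(out, 0.0)  # ReLU non-linearity

# an edge-detecting kernel responds to sharp onsets in the waveform,
# analogous to the S1/S2 transients of a heartbeat
sig = np.array([0.0, 0.0, 1.0, 1.0, 0.0, 0.0])
feat = conv1d(sig, np.array([-1.0, 1.0]))
# feat -> [0., 1., 0., 0., 0.]: only the rising edge survives the ReLU
```

A trained network learns many such kernels per layer and stacks layers with pooling, but each feature map is produced by exactly this sliding dot product plus non-linearity.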
Experiments in this environment on the PASCAL dataset show that the deep CNN model achieves the highest accuracy.