In this study, heartbeat time series are classified using support vector machines (SVMs). Statistical methods and signal analysis techniques are used to extract features from the signals. Using leave-one-out cross validation, the SVM classifier is shown to compare favorably with other neural network-based classification approaches. The performance of the SVM relative to other state-of-the-art classifiers is also confirmed on signals with a very low signal-to-noise ratio. Finally, the influence of the number of features on the classification rate is investigated for two real datasets. The first consists of long-term ECG recordings of young and elderly healthy subjects; the second consists of long-term ECG recordings of normal subjects and subjects suffering from coronary artery disease.
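A minimal sketch of the evaluation protocol described above, assuming a precomputed feature matrix (one row of statistical/signal features per recording) and binary class labels; the feature extraction step, the file names, and the SVM kernel choice are illustrative assumptions, not the authors' implementation.

```python
# Hypothetical sketch: SVM classification with leave-one-out cross validation,
# assuming features have already been extracted from the heartbeat time series.
import numpy as np
from sklearn.svm import SVC
from sklearn.preprocessing import StandardScaler
from sklearn.pipeline import make_pipeline
from sklearn.model_selection import LeaveOneOut, cross_val_score

X = np.loadtxt("features.csv", delimiter=",")   # assumed file: one feature row per subject
y = np.loadtxt("labels.csv", delimiter=",")     # assumed file: 0 = young/normal, 1 = elderly/CAD

clf = make_pipeline(StandardScaler(), SVC(kernel="rbf"))   # kernel choice is an assumption
scores = cross_val_score(clf, X, y, cv=LeaveOneOut())       # one held-out recording per fold
print("leave-one-out accuracy:", scores.mean())
```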
Sample Entropy is the most popular definition of entropy and is widely used as a measure of the regularity/complexity of a time series. On the other hand, it is a computationally expensive method which may require a large amount of time when applied to long time series or to a large number of signals. The computationally intensive part is the similarity check between points in m-dimensional space. In this paper, we propose new algorithms, or extend already proposed ones, aiming to compute Sample Entropy quickly. All algorithms return exactly the same value for Sample Entropy, and no approximation techniques are used. We compare and evaluate them using cardiac inter-beat (RR) time series. We investigate three algorithms. The first is an extension of the kd-trees algorithm, customized for Sample Entropy. The second is an extension of an algorithm initially proposed for Approximate Entropy, again customized for Sample Entropy, but also improved to deliver even faster results. The last is a completely new algorithm, presenting the fastest execution times for specific values of m, r, time series length, and signal characteristics. These algorithms are compared with the straightforward implementation that follows directly from the definition of Sample Entropy, in order to give a clear picture of the speedups achieved. All algorithms follow the classical approach to the metric, in which the maximum norm is used. The key idea of the last two algorithms is to avoid unnecessary comparisons by detecting them early. We use the term unnecessary for those comparisons which we know a priori will fail the similarity check. The number of avoided comparisons is shown to be very large, resulting in a correspondingly large reduction in execution time and making them the fastest algorithms available today for the computation of Sample Entropy.
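A minimal, definition-based sketch of the straightforward computation referred to above (not the accelerated algorithms of the paper): template matches of length m and m+1 are counted under the maximum norm with tolerance r. The O(N²) pairwise similarity check is exactly what the proposed algorithms speed up by abandoning comparisons early. Function and file names are illustrative assumptions.

```python
# Illustrative, definition-based Sample Entropy under the maximum norm.
import numpy as np

def sample_entropy(x, m, r):
    x = np.asarray(x, dtype=float)
    n = len(x)
    count_m, count_m1 = 0, 0
    for i in range(n - m):
        for j in range(i + 1, n - m):
            # maximum-norm similarity check for the m-dimensional vectors;
            # the faster algorithms abandon this check as soon as one
            # component difference exceeds r (an "unnecessary" comparison)
            if max(abs(x[i + k] - x[j + k]) for k in range(m)) <= r:
                count_m += 1
                if abs(x[i + m] - x[j + m]) <= r:   # extend the match to length m+1
                    count_m1 += 1
    if count_m == 0 or count_m1 == 0:
        return float("inf")   # Sample Entropy is undefined when no matches are found
    return -np.log(count_m1 / count_m)

# typical usage on an RR interval series, with the commonly used m = 2, r = 0.2 * std
rr = np.loadtxt("rr_series.txt")   # assumed file of inter-beat intervals
print(sample_entropy(rr, m=2, r=0.2 * rr.std()))
```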
A critical point in any definition of entropy is the selection of the parameters employed to obtain an estimate in practice. We propose a new definition of entropy aiming to reduce the significance of this selection. We call the new definition bubble entropy. Bubble entropy is based on permutation entropy, where the vectors in the embedding space are ranked. We use the bubble sort algorithm for the ordering procedure and count, instead, the number of swaps performed for each vector. Doing so, we create a more coarse-grained distribution and then compute the entropy of this distribution. Experimental results with both real and synthetic HRV signals showed that bubble entropy presents remarkable stability and exhibits increased descriptive and discriminating power compared to all other definitions, including the most popular ones. The proposed definition is almost free of parameters. The most common ones are the scale factor and the embedding dimension. In our definition, the scale factor is totally eliminated and the importance of the embedding dimension is significantly reduced. After the extensive use of some entropy measures in physiological signals, typical values for their parameters have been suggested, or at least widely used. However, the parameters are still there, application and dataset dependent, influencing the computed value and affecting the descriptive power. Reducing their significance or eliminating them alleviates the problem, decoupling the method from the data and the application, and eliminating subjective factors.
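A minimal sketch of the counting step described above: each embedded vector is assigned the number of swaps bubble sort needs to order it, and the entropy of the resulting coarse-grained distribution is computed. This is illustrative only; the complete bubble entropy definition combines the entropies obtained at consecutive embedding dimensions, and the file name and choice of m below are assumptions.

```python
# Illustrative sketch of the swap-counting idea behind bubble entropy.
import numpy as np

def swap_count(v):
    """Number of swaps bubble sort performs to order the vector v."""
    v = list(v)
    swaps = 0
    for i in range(len(v)):
        for j in range(len(v) - 1 - i):
            if v[j] > v[j + 1]:
                v[j], v[j + 1] = v[j + 1], v[j]
                swaps += 1
    return swaps

def swap_distribution_entropy(x, m):
    """Shannon entropy of the distribution of swap counts over all m-dimensional
    embedded vectors of x (the coarse-grained distribution described in the text)."""
    x = np.asarray(x, dtype=float)
    counts = np.array([swap_count(x[i:i + m]) for i in range(len(x) - m + 1)])
    hist = np.bincount(counts, minlength=m * (m - 1) // 2 + 1).astype(float)  # swaps range 0..m(m-1)/2
    p = hist[hist > 0] / hist.sum()
    return -np.sum(p * np.log(p))

rr = np.loadtxt("rr_series.txt")   # assumed HRV (RR interval) series
print(swap_distribution_entropy(rr, m=10))
```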
The aim of this work is to present an automated method that assists in the diagnosis of Alzheimer's disease and also supports the monitoring of the progression of the disease. The method is based on features extracted from data acquired during an fMRI experiment. It consists of six stages: (a) preprocessing of fMRI data, (b) modeling of fMRI voxel time series using a Generalized Linear Model, (c) feature extraction from the fMRI data, (d) feature selection, (e) classification using classical and improved variations of the Random Forests algorithm and Support Vector Machines, and (f) conversion of the trees of the Random Forest into rules that have physical meaning. The method is evaluated using a dataset of 41 subjects. The results indicate the validity of the method in the diagnosis of Alzheimer's disease (accuracy 94%) and in the monitoring of its progression (accuracy 97% and 99%).
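A minimal sketch of the idea behind stage (f), turning the decision paths of a fitted Random Forest into if-then rules. The synthetic data, feature names, and thresholds are placeholders standing in for the fMRI-derived features; the original method's rule extraction and post-processing are not reproduced here.

```python
# Illustrative sketch: extracting if-then rules from one tree of a Random Forest.
from sklearn.ensemble import RandomForestClassifier
from sklearn.datasets import make_classification

X, y = make_classification(n_samples=41, n_features=10, random_state=0)  # stand-in for fMRI features
forest = RandomForestClassifier(n_estimators=10, random_state=0).fit(X, y)

def tree_to_rules(tree, feature_names):
    t = tree.tree_
    rules = []

    def walk(node, conditions):
        if t.children_left[node] == -1:                       # leaf node: emit one rule
            label = int(t.value[node][0].argmax())
            rules.append(" AND ".join(conditions) + f" => class {label}")
            return
        name, thr = feature_names[t.feature[node]], t.threshold[node]
        walk(t.children_left[node], conditions + [f"{name} <= {thr:.3f}"])
        walk(t.children_right[node], conditions + [f"{name} > {thr:.3f}"])

    walk(0, [])
    return rules

feature_names = [f"feature_{i}" for i in range(X.shape[1])]   # placeholder names
for rule in tree_to_rules(forest.estimators_[0], feature_names)[:5]:
    print(rule)
```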