Background and Purpose— Limited evidence from current clinical practice exists on the effectiveness and safety of warfarin and all 4 available non-vitamin K antagonist oral anticoagulants (NOACs) in the Asian population with nonvalvular atrial fibrillation. We aimed to evaluate the comparative effectiveness and safety of warfarin and the 4 NOACs. Methods— We studied a retrospective nonrandomized observational cohort of oral anticoagulant-naïve patients with nonvalvular atrial fibrillation treated with warfarin or a NOAC (rivaroxaban, dabigatran, apixaban, or edoxaban) from January 2015 to December 2017, based on the Korean Health Insurance Review and Assessment database. For the comparisons, the warfarin-versus-NOAC and NOAC-versus-NOAC comparison cohorts were balanced using inverse probability of treatment weighting. Ischemic stroke, intracranial hemorrhage, gastrointestinal bleeding, major bleeding, and a composite clinical outcome were evaluated. Results— A total of 116 804 patients were included (25 420 on warfarin, 35 965 on rivaroxaban, 17 745 on dabigatran, 22 177 on apixaban, and 15 496 on edoxaban). Compared with warfarin, all NOACs were associated with lower risks of ischemic stroke, intracranial hemorrhage, gastrointestinal bleeding, major bleeding, and the composite outcome. Apixaban and edoxaban showed a lower rate of ischemic stroke than rivaroxaban and dabigatran. Apixaban, dabigatran, and edoxaban had lower rates of gastrointestinal bleeding and major bleeding than rivaroxaban. The composite clinical outcome did not differ significantly between apixaban and edoxaban. Conclusions— In this large contemporary nonrandomized Asian cohort, all 4 NOACs were associated with lower rates of ischemic stroke and major bleeding compared with warfarin. Differences in clinical outcomes between NOACs may help physicians choose the drug best suited to an individual patient's clinical profile.
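For readers unfamiliar with the balancing step, the sketch below illustrates inverse probability of treatment weighting for multiple treatment groups; the dataframe, column names, and covariates are hypothetical placeholders, and this is only a generic illustration of the technique, not the study's actual analysis code.

```python
# Minimal sketch of inverse probability of treatment weighting (IPTW) for
# multiple treatment groups (here, warfarin plus the four NOACs). The
# dataframe, column names, and covariates are hypothetical placeholders.
import numpy as np
import pandas as pd
from sklearn.linear_model import LogisticRegression

def iptw_weights(df, treatment_col, covariate_cols):
    """Stabilized IPTW weights from a multinomial propensity-score model."""
    X = df[covariate_cols].to_numpy()
    t = df[treatment_col].to_numpy()
    ps_model = LogisticRegression(max_iter=1000)          # multinomial for >2 groups
    ps_model.fit(X, t)
    probs = ps_model.predict_proba(X)                     # P(T = k | X) for each group k
    class_index = {c: i for i, c in enumerate(ps_model.classes_)}
    idx = np.array([class_index[v] for v in t])
    p_received = probs[np.arange(len(t)), idx]            # P(T = observed treatment | X)
    marginal = pd.Series(t).value_counts(normalize=True)  # P(T = observed treatment)
    return marginal.loc[t].to_numpy() / p_received        # stabilized weights

# Hypothetical usage: one row per patient, with treatment and baseline covariates.
# df["iptw"] = iptw_weights(df, "treatment", ["age", "sex", "chads_vasc", "has_bled"])
```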
Cardiac disorders are critical and must be diagnosed at an early stage, and routine auscultation allows this to be done with high precision. Cardiac auscultation is a technique for listening to and analyzing heart sounds using an electronic stethoscope, a device that provides a digital recording of the heart sound called a phonocardiogram (PCG). The PCG signal carries useful information about the functionality and status of the heart, so signal processing and machine learning techniques can be applied to study and diagnose heart disorders. Based on the PCG signal, heart sounds can be classified into two main categories, normal and abnormal. We created a database of five categories of heart sound signals (PCG signals) from various sources, containing one normal and four abnormal categories. This study proposes an improved automatic classification algorithm for cardiac disorders based on heart sound signals. We extract features from the phonocardiogram signal and then process those features with machine learning techniques for classification. For feature extraction, we use Mel-frequency cepstral coefficients (MFCCs) and discrete wavelet transform (DWT) features of the heart sound signal, and for learning and classification we use a support vector machine (SVM), a deep neural network (DNN), and a centroid displacement-based k-nearest neighbor (KNN) classifier. To improve classification accuracy, we combine the MFCC and DWT features for training and classification. Our experiments show that results improve substantially when the MFCC and DWT features are fused and used for classification with the SVM, DNN, and KNN classifiers. The methodology discussed in this paper can diagnose heart disorders in patients with up to 97% accuracy. The code and dataset can be accessed at https://github.com/yaseen21khan/Classification-of-Heart-Sound-Signal-Using-Multiple-Features-/blob/master/README.md.
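As a rough illustration of the feature-fusion pipeline described above, the sketch below extracts MFCC and DWT features from a PCG recording and trains an RBF-kernel SVM. It assumes librosa, PyWavelets, NumPy, and scikit-learn; the file names, labels, and parameter choices (13 MFCCs, a 'db4' wavelet, 5 decomposition levels, a 2 kHz sampling rate) are illustrative placeholders rather than the paper's exact configuration.

```python
# Minimal sketch of MFCC + DWT feature fusion with an SVM classifier.
# Parameter choices below are illustrative, not the paper's exact setup.
import numpy as np
import librosa
import pywt
from sklearn.svm import SVC
from sklearn.model_selection import train_test_split
from sklearn.metrics import accuracy_score

def extract_features(wav_path, sr=2000, n_mfcc=13, wavelet="db4", level=5):
    """Return a fused MFCC + DWT feature vector for one PCG recording."""
    signal, sr = librosa.load(wav_path, sr=sr)            # load and resample heart sound
    mfcc = librosa.feature.mfcc(y=signal, sr=sr, n_mfcc=n_mfcc)
    mfcc_feat = mfcc.mean(axis=1)                         # average MFCCs over time frames
    coeffs = pywt.wavedec(signal, wavelet, level=level)   # DWT decomposition
    dwt_feat = np.array([np.mean(np.abs(c)) for c in coeffs] +
                        [np.std(c) for c in coeffs])      # simple sub-band statistics
    return np.concatenate([mfcc_feat, dwt_feat])          # fused feature vector

# Hypothetical usage with a labeled PCG dataset (paths and labels are placeholders):
# X = np.vstack([extract_features(f) for f in wav_files])
# y = np.array(labels)
# X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.2, random_state=0)
# clf = SVC(kernel="rbf", C=10.0, gamma="scale").fit(X_tr, y_tr)
# print("accuracy:", accuracy_score(y_te, clf.predict(X_te)))
```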
Speech is the most significant mode of communication among human beings and a potential method for human-computer interaction (HCI) using a microphone sensor. Quantifiable emotion recognition from speech signals captured by such sensors is an emerging area of research in HCI, with applications such as human-robot interaction, virtual reality, behavior assessment, healthcare, and emergency call centers, where the speaker's emotional state must be determined from an individual's speech. In this paper, we present two major contributions: (i) increasing the accuracy of speech emotion recognition (SER) compared with the state of the art and (ii) reducing the computational complexity of the presented SER model. We propose an artificial intelligence-assisted deep stride convolutional neural network (DSCNN) architecture using the plain-nets strategy to learn salient and discriminative features from spectrograms of speech signals that are enhanced in prior steps. Local hidden patterns are learned in convolutional layers with special strides that down-sample the feature maps in place of pooling layers, and global discriminative features are learned in fully connected layers. A softmax classifier is used to classify the emotions in speech. The proposed technique is evaluated on the Interactive Emotional Dyadic Motion Capture (IEMOCAP) and Ryerson Audio-Visual Database of Emotional Speech and Song (RAVDESS) datasets, improving accuracy by 7.85% and 4.5%, respectively, while reducing the model size by 34.5 MB. This demonstrates the effectiveness and significance of the proposed SER technique and its applicability in real-world applications.
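The sketch below shows the core idea of a stride-based "plain net" in PyTorch: strided convolutions down-sample the spectrogram feature maps in place of pooling layers, and fully connected layers produce the emotion scores. Layer sizes, the 128x128 input, and the four-class output are illustrative assumptions, not the paper's exact DSCNN architecture.

```python
# Minimal PyTorch sketch of a stride-based CNN for spectrogram emotion
# classification; layer sizes are illustrative, not the paper's exact model.
import torch
import torch.nn as nn

class StrideCNN(nn.Module):
    def __init__(self, num_emotions=4):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(1, 16, kernel_size=3, stride=2, padding=1),   # 128 -> 64
            nn.ReLU(inplace=True),
            nn.Conv2d(16, 32, kernel_size=3, stride=2, padding=1),  # 64 -> 32
            nn.ReLU(inplace=True),
            nn.Conv2d(32, 64, kernel_size=3, stride=2, padding=1),  # 32 -> 16
            nn.ReLU(inplace=True),
        )
        self.classifier = nn.Sequential(
            nn.Flatten(),
            nn.Linear(64 * 16 * 16, 128),
            nn.ReLU(inplace=True),
            nn.Linear(128, num_emotions),        # logits; softmax applied below
        )

    def forward(self, x):                        # x: (batch, 1, 128, 128) spectrograms
        return self.classifier(self.features(x))

model = StrideCNN(num_emotions=4)
batch = torch.randn(8, 1, 128, 128)              # dummy batch of log-spectrograms
probs = torch.softmax(model(batch), dim=1)       # per-class emotion probabilities
print(probs.shape)                               # torch.Size([8, 4])
```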
Isotopic investigations of the Hercynian-age fold belt between the Kazakhstan and Siberian cratons in the West Junggar region determine the timing of tectonic evolution and the petrogenesis of the region's granitic rocks. Sphene from the leucogabbro phase of the Tangbale ophiolite mélange, the oldest member of the ophiolite sequences in the fold belt, yields an isotopic Pb-Pb age of 523.2±7.2 Ma. Zircon from a postcollision alkali granite yields slightly discordant isotopic U-Pb ages that indicate magma crystallization at 321.4±6.7 Ma, dating it to the Lower Carboniferous period. Radiometric dating thus documents a time span of circa 200 Ma for igneous activity in the area. Petrogenetic studies were made to test whether Precambrian crustal rocks might underlie the Junggar sedimentary basin. Initial lead isotope ratios determined from potassium feldspars from five alkali granites show a clear affinity with ratios from mid-ocean ridge basalts in Pb isotope correlation diagrams. Sm-Nd data from sphene and apatite from one of the granites yield an initial εNd(T)=+6.1. The granite sources are depleted mantle rocks of oceanic affinity that show no involvement of recycled older granitic crust.
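For context, εNd(T) expresses the deviation of a sample's initial ¹⁴³Nd/¹⁴⁴Nd ratio at time T from the chondritic uniform reservoir (CHUR), in parts per 10⁴. The standard definition (not spelled out in the abstract) is:

```latex
\varepsilon_{\mathrm{Nd}}(T) =
\left[
\frac{\left(^{143}\mathrm{Nd}/^{144}\mathrm{Nd}\right)_{\mathrm{sample}}(T)}
     {\left(^{143}\mathrm{Nd}/^{144}\mathrm{Nd}\right)_{\mathrm{CHUR}}(T)} - 1
\right] \times 10^{4}
```

A strongly positive value such as +6.1 indicates a time-integrated depleted (high Sm/Nd) source, consistent with the depleted-mantle, oceanic-affinity interpretation above.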
Recognizing the emotional state of a speaker is a difficult task for machine learning algorithms and plays an important role in the field of speech emotion recognition (SER). SER is significant in many real-time applications, such as human behavior assessment, human-robot interaction, virtual reality, and emergency centers, where the emotional state of speakers must be analyzed. Previous research in this field has mostly focused on handcrafted features and traditional convolutional neural network (CNN) models used to extract high-level features from speech spectrograms, which increases recognition accuracy but also the overall cost and complexity of the model. In contrast, we introduce a novel framework for SER using key sequence segment selection based on radial basis function network (RBFN) similarity measurement in clusters. The selected sequence is converted into a spectrogram using the STFT algorithm and passed to a CNN model to extract discriminative and salient features from the speech spectrogram. Furthermore, we normalize the CNN features to ensure precise recognition performance and feed them to a deep bidirectional long short-term memory (BiLSTM) network to learn the temporal information for recognizing the final emotional state. The proposed technique processes key segments instead of the whole utterance to reduce the computational complexity of the overall model and normalizes the CNN features before further processing so that the spatio-temporal information can be recognized easily. The proposed system is evaluated on standard datasets, including IEMOCAP, EMO-DB, and RAVDESS, to improve recognition accuracy and reduce the processing time of the model. The robustness and effectiveness of the suggested SER model are demonstrated experimentally in comparison with state-of-the-art SER methods, achieving up to 72.25%, 85.57%, and 77.02% accuracy on the IEMOCAP, EMO-DB, and RAVDESS datasets, respectively. INDEX TERMS Speech emotion recognition, deep bidirectional long short-term memory, key segment sequence selection, normalization of CNN features, radial basis function network (RBFN).
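As a rough illustration of the CNN-feature normalization and BiLSTM stages described above, the PyTorch sketch below extracts frame-level CNN features from a key-segment spectrogram, normalizes them, and feeds them to a bidirectional LSTM. Layer sizes, tensor shapes, and the L2 normalization choice are illustrative assumptions, not the paper's exact model, and the RBFN-based segment selection step is omitted.

```python
# Minimal PyTorch sketch of the CNN -> feature normalization -> BiLSTM flow;
# shapes, layer sizes, and the normalization are illustrative assumptions.
import torch
import torch.nn as nn
import torch.nn.functional as F

class CNNBiLSTM(nn.Module):
    def __init__(self, num_emotions=4, cnn_dim=64, lstm_hidden=128):
        super().__init__()
        self.cnn = nn.Sequential(                        # frame-level feature extractor
            nn.Conv2d(1, 32, kernel_size=3, stride=2, padding=1), nn.ReLU(),
            nn.Conv2d(32, cnn_dim, kernel_size=3, stride=2, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d((1, None)),             # collapse frequency axis, keep time
        )
        self.bilstm = nn.LSTM(cnn_dim, lstm_hidden, batch_first=True,
                              bidirectional=True)        # temporal modelling
        self.fc = nn.Linear(2 * lstm_hidden, num_emotions)

    def forward(self, spec):                             # spec: (batch, 1, freq, time)
        feats = self.cnn(spec)                           # (batch, cnn_dim, 1, time')
        feats = feats.squeeze(2).transpose(1, 2)         # (batch, time', cnn_dim)
        feats = F.normalize(feats, dim=-1)               # normalize CNN features
        out, _ = self.bilstm(feats)                      # (batch, time', 2*hidden)
        return self.fc(out[:, -1, :])                    # emotion logits from last step

model = CNNBiLSTM(num_emotions=4)
segment_spec = torch.randn(8, 1, 128, 64)                # spectrograms of key segments
print(model(segment_spec).shape)                         # torch.Size([8, 4])
```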