Glioblastoma (also known as glioblastoma multiforme) is one of the most aggressive brain malignancies, accounting for 48% of all primary brain tumors. Overall survival prediction therefore plays a vital role in diagnosis and treatment planning for glioblastoma patients. The main goal of our research is to demonstrate the effectiveness of features extracted from the combination of the whole tumor and the enhancing tumor for overall survival prediction. The proposed method uses two kinds of features for this task: shape radiomics and deep features. Firstly, optimal shape radiomics features, consisting of sphericity, maximum 3D diameter, and surface area, are selected using the Cox proportional hazards model. Secondly, deep features are extracted by ResNet18 directly from magnetic resonance images. Finally, the combination of the selected shape features, deep features, and clinical information is fitted to a regression model for overall survival prediction. The proposed method achieves promising results, obtaining 57.1% accuracy and a mean squared error of 97,531.8. Furthermore, with the selected features, the result on the mean squared error metric is slightly better than that of competing methods. The experiments are conducted on the Brain Tumor Segmentation Challenge (BraTS) 2018 validation dataset.
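The fusion step described in the abstract can be sketched as follows. This is a minimal illustrative sketch, not the authors' code: the function names, feature dimensions, and toy values are assumptions, and the stand-in linear model is a placeholder for whatever regression model the paper actually fits over the selected shape radiomics (sphericity, maximum 3D diameter, surface area), the ResNet18 deep features, and the clinical information.

```python
def fuse_features(shape_radiomics, deep_features, clinical):
    """Concatenate the three feature groups into one vector."""
    return list(shape_radiomics) + list(deep_features) + list(clinical)

def predict_survival_days(features, weights, bias):
    """Stand-in linear regression: bias plus weighted sum of fused features."""
    return bias + sum(w * x for w, x in zip(weights, features))

# Toy example with made-up values (3 shape + 4 deep + 1 clinical feature).
shape = [0.71, 58.3, 4120.0]    # sphericity, max 3D diameter (mm), surface area (mm^2)
deep = [0.2, -0.1, 0.05, 0.3]   # truncated stand-in for a ResNet18 embedding
clinical = [62.0]               # e.g. patient age in years

fused = fuse_features(shape, deep, clinical)
prediction = predict_survival_days(fused, [0.0] * len(fused), 300.0)
```

In a real implementation the shape features would come from a radiomics toolkit, the deep features from a pretrained ResNet18, and the regression weights from fitting on the BraTS training set; here the zero weights and fixed bias simply make the toy prediction deterministic.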
This technical report presents our emotion recognition pipeline for the high-dimensional emotion task (A-VB High) in the ACII Affective Vocal Bursts (A-VB) 2022 Workshop & Competition. Our proposed method consists of three stages. Firstly, we extract latent features from the raw audio signal and its Mel-spectrogram using self-supervised learning methods. Then, the features from the raw signal are fed to the self-relation attention and temporal awareness (SA-TA) module to learn the valuable information among these latent features. Finally, we concatenate all the features and use a fully-connected layer to predict each emotion's score. In empirical experiments, our proposed method achieves a mean concordance correlation coefficient (CCC) of 0.7295 on the test set, compared to 0.5686 for the baseline model. The code for our method is available at https://github.com/linhtd812/A-VB2022.
Speech emotion recognition (SER) is one of the most exciting topics that many researchers have recently been involved in. Although much research has been conducted on this topic, emotion recognition from non-verbal speech (known as vocal bursts) remains sparse. Vocal bursts are brief and carry no semantic content, which makes them harder to deal with than verbal speech. Therefore, in this paper, we propose a self-relation attention and temporal awareness (SRA-TA) module to tackle this problem: it captures long-term dependencies and focuses on the salient parts of the audio signal. Our proposed method consists of three main stages. Firstly, latent features are extracted from the raw audio signal and its Mel-spectrogram using a self-supervised learning model. After the SRA-TA module captures the valuable information from these latent features, all features are concatenated and fed into ten individual fully-connected layers to predict the scores of 10 emotions. Our proposed method achieves a mean concordance correlation coefficient (CCC) of 0.7295 on the test set, which ranked first in the high-dimensional emotion task of the 2022 ACII Affective Vocal Bursts Workshop & Challenge.
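The self-relation attention idea in the abstract can be illustrated with a toy sketch: each frame of the latent-feature sequence attends to every other frame via scaled dot products, the attended frames are pooled, and one linear head per emotion produces a score. This is an assumed simplification for illustration only; the actual SRA-TA module, its temporal-awareness component, and the trained head weights are not reproduced here.

```python
import math

def softmax(xs):
    """Numerically stable softmax over a list of scores."""
    m = max(xs)
    e = [math.exp(x - m) for x in xs]
    s = sum(e)
    return [v / s for v in e]

def self_attention(frames):
    """Toy self-relation attention: each output frame is an
    attention-weighted mixture of all input frames."""
    out = []
    for q in frames:
        scores = [sum(a * b for a, b in zip(q, k)) / math.sqrt(len(q))
                  for k in frames]
        w = softmax(scores)
        mixed = [sum(wi * f[d] for wi, f in zip(w, frames))
                 for d in range(len(q))]
        out.append(mixed)
    return out

def pool_and_score(frames, heads):
    """Mean-pool the attended frames, then apply one linear head
    per emotion (ten heads in the paper; any number here)."""
    dim = len(frames[0])
    pooled = [sum(f[d] for f in frames) / len(frames) for d in range(dim)]
    return [sum(w * x for w, x in zip(head, pooled)) for head in heads]
```

A real system would run this over self-supervised embeddings of the raw waveform and Mel-spectrogram and train the heads against the ten emotion targets with a CCC-based loss; the sketch only shows the attend-pool-score data flow.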