Depression is a major cause of disability worldwide. The present paper reports the results of our participation in the depression sub-challenge of the sixth Audio/Visual Emotion Challenge (AVEC 2016), which was designed to compare feature modalities (audio, visual, and interview transcript-based) in gender-based and gender-independent modes using a variety of classification algorithms. In our approach, both high- and low-level features were assessed in each modality. Audio features were extracted from the low-level descriptors provided by the challenge organizers. Several visual features were extracted and assessed, including dynamic characteristics of facial elements (using Landmark Motion History Histograms and Landmark Motion Magnitude), global head motion, and eye blinks. These features were combined with statistical features derived from pre-extracted features (emotions, action units, gaze, and pose). Both speech rate and word-level semantic content were also evaluated. Classification results are reported for four different classification schemes: i) gender-based models for each individual modality, ii) a feature-fusion model, iii) a decision-fusion model, and iv) a posterior-probability classification model. Approaches that outperformed the reference classification accuracy include the one using statistical descriptors of low-level audio features. This approach achieved F1-scores of 0.59 for identifying depressed and 0.87 for identifying not-depressed individuals on the development set, and 0.52/0.81, respectively, on the test set.
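To make the best-performing scheme concrete, frame-level low-level descriptors can be pooled into one fixed-length vector per recording via statistical functionals and fed to a linear classifier scored with per-class F1. The sketch below is a minimal, hypothetical reconstruction: the descriptor dimensionality, the choice of functionals, and the `LinearSVC` classifier are illustrative assumptions, not the challenge pipeline itself.

```python
import numpy as np
from sklearn.svm import LinearSVC
from sklearn.metrics import f1_score

def statistical_descriptors(lld_frames):
    """Collapse a (frames x descriptors) matrix of frame-level low-level
    descriptors into one fixed-length vector of statistical functionals."""
    funcs = [np.mean, np.std, np.min, np.max, np.median]
    return np.concatenate([f(lld_frames, axis=0) for f in funcs])

# Toy stand-in data: 20 recordings with varying numbers of frames,
# 8 low-level descriptors per frame (both numbers are assumptions).
rng = np.random.default_rng(0)
X = np.array([
    statistical_descriptors(rng.normal(size=(rng.integers(50, 100), 8)))
    for _ in range(20)
])
y = rng.integers(0, 2, size=20)  # 1 = depressed, 0 = not depressed

clf = LinearSVC(dual=False).fit(X, y)
pred = clf.predict(X)
score = f1_score(y, pred, pos_label=1)  # F1 for the "depressed" class
print(score)
```

Per-class F1 (as reported in the abstract for depressed vs. not-depressed) is obtained by switching `pos_label` between 1 and 0.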
Modelling cancer presents a major opportunity to reduce mortality from malignant neoplasms, improve treatments, and meet the demands associated with the individualization of care. This is the central motivation behind the ContraCancrum project. By developing integrated multi-scale cancer models, ContraCancrum is expected to advance in silico oncology by optimizing cancer treatment in a patient-individualized context through simulation of the response to various therapeutic regimens. The aim of the present paper is to describe a novel paradigm for clinically driven multi-scale cancer modelling that brings together basic-science and information-technology modules. In addition, the integration of the multi-scale tumour-modelling components has led to novel concepts of personalized clinical decision support in the context of predictive oncology, as is also discussed in the paper. Since clinical adaptation is an indispensable prerequisite, a long-term clinical adaptation procedure of the models has been initiated for two tumour types, namely non-small cell lung cancer and glioblastoma multiforme; its current status is briefly summarized.
Glioma, especially glioblastoma, is a leading cause of brain-cancer mortality, involving highly invasive and neoplastic growth. Diffusive models of glioma growth use variations of the diffusion-reaction equation to simulate the invasive patterns of glioma cells by approximating the spatiotemporal change in glioma cell concentration. The most advanced diffusive models account for the heterogeneous velocity of glioma in gray and white matter by using two different discrete diffusion coefficients in these areas. Moreover, using diffusion tensor imaging (DTI), they simulate the anisotropic migration of glioma cells, which is facilitated along white-matter fibers, by assuming diffusion tensors with different diffusion coefficients along each candidate direction of growth. Our study extends this concept by fully exploiting the proportions of white and gray matter extracted from normal brain atlases, rather than discretizing the diffusion coefficients. Moreover, both the white- and gray-matter proportions and the diffusion tensors are extracted from the respective atlases, so no DTI processing is needed. Finally, we applied this novel glioma growth model to real data, and the results indicate that prognostication rates can be improved.
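The atlas-weighted diffusion idea can be sketched with a 1-D explicit finite-difference step of a diffusion-reaction (Fisher–Kolmogorov-type) equation, where the local diffusion coefficient is a continuous blend of white- and gray-matter values weighted by the atlas-derived white-matter proportion, rather than a binary choice. All parameter values below (`D_white`, `D_gray`, `rho`, grid and step sizes) are illustrative assumptions, not the paper's calibrated settings.

```python
import numpy as np

def glioma_step(c, p_white, dt=0.1, dx=1.0,
                D_white=0.5, D_gray=0.05, rho=0.02):
    """One explicit finite-difference step of the 1-D model
        dc/dt = d/dx( D(x) dc/dx ) + rho * c * (1 - c),
    where D(x) blends white- and gray-matter diffusion coefficients
    by the atlas-derived white-matter proportion p_white(x)."""
    D = p_white * D_white + (1.0 - p_white) * D_gray
    # Flux at cell interfaces, with D averaged between neighbours.
    D_half = 0.5 * (D[1:] + D[:-1])
    flux = D_half * (c[1:] - c[:-1]) / dx
    dcdt = np.zeros_like(c)
    dcdt[1:-1] = (flux[1:] - flux[:-1]) / dx
    dcdt += rho * c * (1.0 - c)          # logistic proliferation term
    c_new = c + dt * dcdt
    c_new[0] = c_new[1]                  # no-flux boundary conditions
    c_new[-1] = c_new[-2]
    return c_new

# Toy tumour seed on a domain whose white-matter proportion rises
# smoothly from 0 to 1 (standing in for an atlas profile).
n = 100
c = np.zeros(n)
c[n // 2] = 1.0
p_white = np.linspace(0.0, 1.0, n)
for _ in range(200):
    c = glioma_step(c, p_white)
```

Because `p_white` enters the coefficient continuously, cells spread faster toward the white-matter end of the domain, which is the qualitative behaviour the atlas-based weighting is meant to capture.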