COVID-19 heavily affects breathing and voice, producing symptoms that make patients’ voices distinctive and creating recognizable audio signatures. Initial studies have already suggested the potential of voice as a screening tool. In this article we present a dataset of voice, cough and breathing audio recordings collected via a large-scale crowdsourcing campaign from individuals infected with the SARS-CoV-2 virus as well as from non-infected subjects. We describe preliminary results for the detection of COVID-19 from cough patterns using standard acoustic feature sets, wavelet scattering features and deep audio embeddings extracted from low-level feature representations (VGGish and OpenL3). Our models achieve an accuracy of 88.52%, a sensitivity of 88.75% and a specificity of 90.87%, confirming the applicability of audio signatures for identifying COVID-19 symptoms. We furthermore provide an in-depth analysis of the most informative acoustic features and attempt to elucidate the mechanisms that alter the acoustic characteristics of coughs in people with COVID-19.
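As an aside, the "standard acoustic feature sets" mentioned above are typically built from frame-level descriptors of the waveform. The sketch below is purely illustrative and is not the paper's actual pipeline (which relies on richer features such as wavelet scattering and VGGish/OpenL3 embeddings): it computes two classic frame-level features, zero-crossing rate and RMS energy, on a synthetic tone standing in for a cough recording, using only the standard library.

```python
import math

def frame_signal(signal, frame_len, hop):
    """Split a signal into overlapping fixed-length frames."""
    return [signal[i:i + frame_len]
            for i in range(0, len(signal) - frame_len + 1, hop)]

def zero_crossing_rate(frame):
    """Fraction of consecutive sample pairs whose sign changes."""
    crossings = sum(1 for a, b in zip(frame, frame[1:]) if a * b < 0)
    return crossings / (len(frame) - 1)

def rms_energy(frame):
    """Root-mean-square energy of a frame."""
    return math.sqrt(sum(x * x for x in frame) / len(frame))

# Synthetic 100 Hz tone sampled at 8 kHz, standing in for real audio.
sr = 8000
signal = [math.sin(2 * math.pi * 100 * t / sr) for t in range(sr)]

# 50 ms frames with 50% overlap, one (ZCR, RMS) pair per frame.
features = [(zero_crossing_rate(f), rms_energy(f))
            for f in frame_signal(signal, frame_len=400, hop=200)]
print(len(features))  # → 39 analysis frames
```

In a real system, each frame-level feature sequence would be summarized (e.g. by mean and variance) and fed to a classifier alongside the learned embeddings.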
How to visualize hierarchically structured datasets is a basic issue in information visualization. Compared with common diagrams based on the node-link paradigm (e.g. trees), enclosure-based methods have shown high potential to represent simultaneously the structure of the hierarchy and the weight of the nodes. In addition, these methods often scale up to sizes at which trees become very difficult to understand. Several approaches belong to this class of visualization methods, such as treemaps, ellimaps, circular treemaps and Voronoi treemaps. This paper focuses on the specific case of ellimaps, in which the nodes are represented by ellipses nested within one another. A controlled experiment previously showed that the initial version of ellimaps effectively supported the perception of the dataset structure and was reasonably acceptable for the perception of node weights. However, it suffered from a major drawback in terms of display-space occupation. We have tackled this issue, and the paper proposes a new algorithm to draw ellimaps based on successive distortions and relocations of the ellipses so that they occupy a larger proportion of the display space than with the initial algorithm. A Monte-Carlo simulation was used to evaluate the filling ratio of the display space achieved by this new approach, and the results show a significant improvement in this factor.
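The Monte-Carlo evaluation of the filling ratio can be sketched as follows. This is a minimal illustration, not the authors' implementation: it samples uniform random points over the display and reports the fraction falling inside the union of a set of axis-aligned ellipses (the ellipse layout here is a hypothetical example).

```python
import random

def inside_ellipse(x, y, cx, cy, rx, ry):
    """Point-in-ellipse test for an axis-aligned ellipse
    with center (cx, cy) and semi-axes rx, ry."""
    return ((x - cx) / rx) ** 2 + ((y - cy) / ry) ** 2 <= 1.0

def filling_ratio(ellipses, width, height, samples=100_000, seed=0):
    """Estimate the fraction of a width x height display covered
    by the union of the given ellipses, by Monte-Carlo sampling."""
    rng = random.Random(seed)
    hits = 0
    for _ in range(samples):
        x, y = rng.uniform(0, width), rng.uniform(0, height)
        if any(inside_ellipse(x, y, *e) for e in ellipses):
            hits += 1
    return hits / samples

# Hypothetical layout: a single root ellipse inscribed in a 2 x 1 display.
ratio = filling_ratio([(1.0, 0.5, 1.0, 0.5)], width=2.0, height=1.0)
print(ratio)  # close to pi/4 ≈ 0.785, the exact coverage of an inscribed ellipse
```

The estimator's standard error shrinks as 1/sqrt(samples), so 100,000 points pin the ratio down to within roughly a tenth of a percentage point, which is ample for comparing two layout algorithms.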
This paper describes a system to support the visual exploration of Open Data. During an interactive session with the graphics, the user can easily store the complete current state of the visualization application (called a viewpoint). The user can then compose sequences of these viewpoints (called scenarios) that can easily be reloaded. This feature makes it possible to keep a trace of a former exploration process, which is useful both in single-user settings (to support investigations carried out over multiple sessions) and in collaborative settings (to share points of interest identified in the dataset).
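One plausible way to model viewpoints and scenarios is as serializable state snapshots plus ordered sequences of snapshot names. The sketch below is a hypothetical reading of that design (all names and fields are invented for illustration), not the system's actual code:

```python
import json

viewpoints = {}  # name -> saved application state
scenarios = {}   # name -> ordered list of viewpoint names

def save_viewpoint(name, state):
    """Snapshot the complete visualization state under a name.
    The JSON round-trip makes a deep copy and guarantees the
    state is serializable, so it can persist across sessions."""
    viewpoints[name] = json.loads(json.dumps(state))

def save_scenario(name, viewpoint_names):
    """Record an ordered sequence of viewpoints as a scenario."""
    scenarios[name] = list(viewpoint_names)

def replay(name):
    """Reload each viewpoint of a scenario, in order."""
    return [viewpoints[v] for v in scenarios[name]]

save_viewpoint("overview", {"chart": "bar", "filter": None, "zoom": 1.0})
save_viewpoint("detail", {"chart": "bar", "filter": "2020", "zoom": 3.0})
save_scenario("budget-tour", ["overview", "detail"])
print([vp["zoom"] for vp in replay("budget-tour")])  # → [1.0, 3.0]
```

Storing snapshots in a portable format such as JSON is what would let a scenario be shared with collaborators or reloaded in a later session.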
scite is a Brooklyn-based organization that helps researchers better discover and understand research articles through Smart Citations: citations that display the context of the citation and describe whether the article provides supporting or contrasting evidence. scite is used by students and researchers around the world and is funded in part by the National Science Foundation and the National Institute on Drug Abuse of the National Institutes of Health.