The outbreak of Coronavirus Disease 2019 (COVID-19) has spread rapidly across the world. Given the large number of infected patients and the heavy workload placed on doctors, computer-aided diagnosis with machine learning algorithms is urgently needed and could substantially reduce clinicians' effort and accelerate the diagnostic process. Chest computed tomography (CT) has been recognized as an informative tool for diagnosing the disease. In this study, we propose to diagnose COVID-19 using a set of features extracted from CT images. To fully exploit multiple features that describe CT images from different views, we learn a unified latent representation that completely encodes information from the different feature types and is endowed with a class structure that promotes separability. Specifically, completeness is guaranteed by a group of backward neural networks (one for each type of feature), while class labels are used to enforce compactness within each class (COVID-19 or community-acquired pneumonia, CAP) and to guarantee a large margin between the two types of pneumonia. In this way, our model largely avoids the overfitting that arises when high-dimensional features are projected directly onto class labels. Extensive experimental results show that the proposed method outperforms all comparison methods, and its performance remains stable as the amount of training data varies.
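As a rough illustration of the kind of architecture described above, the sketch below (not the authors' implementation) decodes a shared latent code back to every feature type through per-view "backward" networks and adds a simple margin-based class loss; the layer sizes, margin value, and loss weighting are illustrative assumptions.

import torch
import torch.nn as nn
import torch.nn.functional as F

class MultiViewLatentModel(nn.Module):
    def __init__(self, view_dims, latent_dim=64, num_classes=2):
        super().__init__()
        # Map a concatenation of all views to the latent space (a simplifying assumption).
        self.encoder = nn.Linear(sum(view_dims), latent_dim)
        # One backward (decoder) network per view enforces completeness:
        # the latent code must be able to reconstruct every feature type.
        self.decoders = nn.ModuleList(
            [nn.Sequential(nn.Linear(latent_dim, 128), nn.ReLU(),
                           nn.Linear(128, d)) for d in view_dims])
        self.classifier = nn.Linear(latent_dim, num_classes)

    def forward(self, views):
        z = self.encoder(torch.cat(views, dim=1))
        recons = [dec(z) for dec in self.decoders]
        logits = self.classifier(z)
        return z, recons, logits

def loss_fn(views, recons, logits, labels, margin=1.0, alpha=0.1):
    # Completeness: every view must be reconstructable from the latent code.
    rec = sum(F.mse_loss(r, v) for r, v in zip(recons, views))
    # Class structure: a multi-class hinge loss stands in for the paper's
    # within-class compactness and between-class margin criterion.
    cls = F.multi_margin_loss(logits, labels, margin=margin)
    return rec + alpha * cls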
In real-world applications, clustering or classification can usually be improved by fusing information from different views. Unsupervised representation learning on multi-view data has therefore become a compelling topic in machine learning. In this paper, we propose a novel and flexible unsupervised multi-view representation learning model termed Collaborative Multi-View Information Bottleneck Networks (CMIB-Nets), which comprehensively explores the common latent structure and the view-specific intrinsic information, and discards superfluous information in the data, significantly improving the generalization capability of the model. Specifically, the proposed model relies on the information bottleneck principle to integrate the representation shared among different views with the view-specific representation of each view, promoting a complete multi-view representation and flexibly balancing complementarity and consistency among multiple views. We conduct extensive experiments (including clustering analysis, robustness experiments, and an ablation study) on real-world datasets, which empirically show promising generalization ability and robustness compared to state-of-the-art methods.
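For concreteness, the following sketch shows a standard variational information-bottleneck term of the kind such models build on, not the CMIB-Nets architecture itself: an encoder outputs a Gaussian posterior over a latent code, and a KL penalty toward a standard-normal prior compresses away superfluous information while a task loss retains what is relevant. The dimensions and the trade-off weight beta are assumptions.

import torch
import torch.nn as nn

class VIBEncoder(nn.Module):
    def __init__(self, in_dim, z_dim=32):
        super().__init__()
        self.net = nn.Sequential(nn.Linear(in_dim, 128), nn.ReLU())
        self.mu = nn.Linear(128, z_dim)
        self.logvar = nn.Linear(128, z_dim)

    def forward(self, x):
        h = self.net(x)
        mu, logvar = self.mu(h), self.logvar(h)
        # Reparameterization trick: a differentiable sample from q(z|x).
        z = mu + torch.randn_like(mu) * torch.exp(0.5 * logvar)
        return z, mu, logvar

def ib_loss(task_loss, mu, logvar, beta=1e-3):
    # KL(q(z|x) || N(0, I)) is the compression (bottleneck) term that discards
    # information not needed for the downstream task.
    kl = -0.5 * torch.sum(1 + logvar - mu.pow(2) - logvar.exp(), dim=1).mean()
    return task_loss + beta * kl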
Unsupervised representation learning on multi-view data (multiple types of features or modalities) has become a compelling topic in machine learning. Most existing methods focus on directly projecting different views into a common space to explore consistency across views. Although simple, this approach does not guarantee that the underlying relationships among views are preserved during learning. In this paper, we propose a novel unsupervised multi-view representation learning model termed Cross-View Equivariant Auto-Encoder (CVE-AE), which jointly performs data reconstruction with view-specific autoencoders, to preserve the information within each view, and transformation reconstruction with a transformation decoder, to preserve the correlations across views. Accordingly, the generalization ability of the model is promoted by the preserved intra-view intrinsic information and underlying inter-view relationships. We conduct extensive experiments on real-world datasets, and the proposed model achieves superior performance over state-of-the-art unsupervised representation learning methods.
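One plausible reading of this design, sketched below purely for illustration (not the released CVE-AE code): each view gets its own autoencoder for intra-view reconstruction, and a small transformation decoder is trained to recover the mapping between the two latent codes so that cross-view relationships are preserved. The layer sizes and the specific cross-view loss are assumptions.

import torch
import torch.nn as nn
import torch.nn.functional as F

class ViewAE(nn.Module):
    def __init__(self, in_dim, z_dim=64):
        super().__init__()
        self.enc = nn.Sequential(nn.Linear(in_dim, 128), nn.ReLU(), nn.Linear(128, z_dim))
        self.dec = nn.Sequential(nn.Linear(z_dim, 128), nn.ReLU(), nn.Linear(128, in_dim))

    def forward(self, x):
        z = self.enc(x)
        return z, self.dec(z)

class CrossViewModel(nn.Module):
    def __init__(self, dim1, dim2, z_dim=64):
        super().__init__()
        self.ae1, self.ae2 = ViewAE(dim1, z_dim), ViewAE(dim2, z_dim)
        # Transformation decoder: predicts the view-2 code from the view-1 code,
        # standing in for the paper's transformation-reconstruction branch.
        self.trans = nn.Sequential(nn.Linear(z_dim, z_dim), nn.ReLU(), nn.Linear(z_dim, z_dim))

    def loss(self, x1, x2):
        z1, r1 = self.ae1(x1)
        z2, r2 = self.ae2(x2)
        rec = F.mse_loss(r1, x1) + F.mse_loss(r2, x2)   # intra-view information preservation
        cross = F.mse_loss(self.trans(z1), z2)          # inter-view relationship preservation
        return rec + cross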
Identifying the hygrothermal transfer patterns of exterior walls is a crucial issue in the design, assessment, and construction of buildings. Temperature and relative humidity sensor data were collected over a full year, from the outside of the wall to the interior bamboo and wood composite sheathing, in the Huangshan Mountain District, Anhui Province, China. Combining the machine learning method of reservoir computing (RC) with agglomerative hierarchical clustering (AHC), a novel clustering framework was built to better extract the characteristics of hygrothermal transfer from the time-series data. The experimental results confirmed the hypothesis that changes in the temperature and relative humidity on the outside of the wall (RHT12) dominated those of the interior sheathing (RHT11). The delay time between two adjacent peaks from the outside of the wall to the interior bamboo and wood composite sheathing was 1 to 2 h for temperature and 1 to 4 h for relative humidity. There was no significant difference in the temperature peak delay time between April and July, which ranged from 50 to 120 min. However, the relative humidity peak delay time was 100 to 240 min in April, whereas it was 20 to 120 min in July. An approximately linear relationship was observed between outdoor temperature and the relative humidity peak delay time. The hygrothermal transfer patterns were characterized effectively by these peak delays. Discovering the hygrothermal transfer patterns of bamboo and wood composite walls with machine learning methods will facilitate the development of energy-efficient and durable bamboo and wood composite wall materials and structures.
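The RC-plus-AHC pipeline could be prototyped roughly as follows (a minimal sketch, not the study's code): a fixed random reservoir in the echo-state-network style turns each temperature or humidity series into a feature vector, and agglomerative hierarchical clustering groups the resulting features. The reservoir size, spectral radius, leak rate, and the toy data are assumptions.

import numpy as np
from sklearn.cluster import AgglomerativeClustering

def reservoir_features(series, n_res=200, rho=0.9, leak=0.3, seed=0):
    """series: array of shape (T,) -> reservoir feature vector of shape (n_res,)."""
    rng = np.random.default_rng(seed)
    W_in = rng.uniform(-0.5, 0.5, size=(n_res, 1))
    W = rng.uniform(-0.5, 0.5, size=(n_res, n_res))
    W *= rho / np.max(np.abs(np.linalg.eigvals(W)))   # scale to spectral radius rho
    x = np.zeros(n_res)
    states = []
    for u in series:
        # Leaky-integrator echo state update driven by the scalar input u.
        x = (1 - leak) * x + leak * np.tanh(W_in[:, 0] * u + W @ x)
        states.append(x.copy())
    return np.mean(states, axis=0)                     # time-averaged reservoir state

# Toy usage: cluster a batch of synthetic sensor-like series into two groups.
series_batch = [np.sin(np.linspace(0, 10, 500) + p) for p in np.linspace(0, 3, 12)]
feats = np.stack([reservoir_features(s) for s in series_batch])
labels = AgglomerativeClustering(n_clusters=2).fit_predict(feats)
print(labels)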