Objective: To identify osteoarthritis (OA) subtypes based on gene expression of peripheral blood mononuclear cells. Methods: Gene expression data (GSE48556) from the Genetics osteoARthritis and Progression (GARP) study were downloaded from the Gene Expression Omnibus. Principal component analysis and unsupervised clustering were performed to identify OA subtypes, and major KEGG pathways and cell type enrichment were compared between subtypes using GSEA and xCell. Classification of the subtypes was explored using a support vector machine.
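The methods described above (dimensionality reduction, unsupervised clustering to propose subtypes, then a classifier to test subtype separability) can be sketched as follows. This is a minimal illustration with synthetic data standing in for the GSE48556 expression matrix; the sample counts, gene counts, and cluster number are assumptions for demonstration, not the study's actual parameters.

```python
# Hypothetical sketch of the analysis pipeline: PCA, unsupervised
# clustering to propose subtypes, and an SVM to test classifiability.
import numpy as np
from sklearn.decomposition import PCA
from sklearn.cluster import KMeans
from sklearn.svm import SVC
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(0)
# Synthetic stand-in for an expression matrix: 100 samples x 500 genes
# with two latent groups (real input would be normalized GSE48556 data).
expr = np.vstack([
    rng.normal(0.0, 1.0, (50, 500)),
    rng.normal(1.0, 1.0, (50, 500)),
])

# Reduce dimensionality before clustering
pcs = PCA(n_components=10, random_state=0).fit_transform(expr)

# Unsupervised clustering to propose candidate subtypes
labels = KMeans(n_clusters=2, n_init=10, random_state=0).fit_predict(pcs)

# SVM with cross-validation to check how separable the subtypes are
scores = cross_val_score(SVC(kernel="linear"), pcs, labels, cv=5)
print(f"mean CV accuracy: {scores.mean():.2f}")
```

In practice the number of clusters would be chosen by criteria such as silhouette score or consensus clustering rather than fixed in advance.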
Biometric signal-based human-computer interfaces (HCIs) have attracted increasing attention due to their wide application in healthcare, entertainment, neurocomputing, and beyond. In recent years, deep learning-based approaches have made great progress in biometric signal processing. However, state-of-the-art (SOTA) approaches still suffer from model degradation across subjects or sessions. In this work, we propose a novel unsupervised domain adaptation approach for biometric signal-based HCIs via causal representation learning. Specifically, one of three kinds of interventions on biometric signals (i.e., subjects, sessions, and trials) can be selected to generalize deep models across that intervention. In the proposed approach, a generative model is trained to produce intervened features, which are subsequently used to learn transferable and causal relations under the three modes. Experiments on EEG-based emotion recognition and sEMG-based gesture recognition confirm the superiority of our approach. On inter-subject EEG-based emotion recognition, our approach achieves an improvement of +0.21%. On inter-session sEMG-based gesture recognition, it achieves improvements of +1.47%, +3.36%, +1.71%, and +1.01% on the CSL-HDEMG, CapgMyo DB-b, 3DC, and Ninapro DB6 datasets, respectively. The proposed approach also works on inter-trial sEMG-based gesture recognition, achieving an average improvement of +0.66% across the Ninapro databases. These experimental results demonstrate the superiority of the proposed approach over SOTA unsupervised domain adaptation methods for biometric signal-based HCIs.
Reconstructing three-dimensional (3D) objects from images has attracted increasing attention due to its wide applications in computer vision and robotics. Despite the promising progress of recent deep learning-based approaches, which directly reconstruct the full 3D shape without considering conceptual knowledge of the object categories, existing models have limited applicability and often produce unrealistic shapes. 3D objects admit multiple forms of representation, such as 3D volumes and conceptual knowledge. In this work, we show that conceptual knowledge for a category of objects, which represents objects as prototype volumes structured as a graph, can enhance the 3D reconstruction pipeline. We propose a novel multimodal framework that explicitly combines graph-based conceptual knowledge with deep neural networks for 3D shape reconstruction from a single RGB image. Our approach represents the conceptual knowledge of a specific category as a structure-based knowledge graph. Specifically, conceptual knowledge provides visual priors and spatial relationships that assist the 3D reconstruction framework in creating realistic 3D shapes with enhanced detail. Our framework takes an image as input: it first predicts the conceptual knowledge of the object in the image, then generates a 3D object based on the input image and the predicted conceptual knowledge. The generated 3D object satisfies two requirements: (1) it is conceptually consistent with the predicted graph, and (2) geometrically consistent with the input image. Extensive experiments on public datasets (i.e., ShapeNet, Pix3D, and Pascal3D+) covering 13 object categories show that (1) our method outperforms state-of-the-art methods, (2) our prototype volume-based representation of conceptual knowledge is more effective, and (3) our pipeline-agnostic approach can enhance the reconstruction quality of various 3D shape reconstruction pipelines.