2020
DOI: 10.1186/s13040-020-00222-x
Deep learning-based ovarian cancer subtypes identification using multi-omics data

Abstract: Background: Identifying molecular subtypes of ovarian cancer is important. Compared with identifying subtypes from single-omics data, multi-omics analysis can exploit more information. Autoencoders have been widely used to construct lower-dimensional representations for multi-omics feature integration. However, training deep autoencoder architectures to achieve satisfactory generalization is difficult. To address this problem, we propose a novel deep learning-based framework to robustly …
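As a rough illustration of the multi-omics integration the abstract describes: per-patient feature vectors from each omics layer are concatenated, then compressed to a low-dimensional representation. The matrix sizes here are hypothetical, and a truncated SVD (the optimal linear compressor under squared error) stands in for the paper's nonlinear deep autoencoder.

```python
import numpy as np

rng = np.random.default_rng(42)
n_patients = 100                            # hypothetical cohort size
expr  = rng.normal(size=(n_patients, 200))  # mRNA expression features
meth  = rng.normal(size=(n_patients, 300))  # DNA methylation features
mirna = rng.normal(size=(n_patients, 50))   # miRNA features

# Early fusion: concatenate each patient's omics profiles into one vector.
fused = np.concatenate([expr, meth, mirna], axis=1)   # shape (100, 550)

# Truncated SVD as a linear stand-in for the autoencoder bottleneck:
# keep the top-k components of the centered fused matrix.
k = 10
centered = fused - fused.mean(axis=0)
U, S, Vt = np.linalg.svd(centered, full_matrices=False)
z = U[:, :k] * S[:k]                        # low-dimensional representation
```

The representation `z` would then be fed to a clustering or classification step to identify subtypes, which is where a nonlinear autoencoder can outperform this linear sketch.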

Cited by 50 publications (32 citation statements)
References 27 publications
“…For the same clinical task, a similar workflow has been adapted by researchers, but applying denoising AEs (DAEs) [91] instead [35, 36]. By adding noise to the input $x$, but not to $x$ in the reconstruction loss (Equation 1), the DAE has to learn a reconstruction and also remove noise to approximate the uncorrupted vector $x$.…”
Section: Early Fusion
confidence: 99%
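A minimal sketch of the asymmetry this statement describes: noise corrupts the encoder's input, but the reconstruction loss still targets the clean vector x. The tiny linear encoder/decoder here is illustrative only, not the cited papers' architecture.

```python
import numpy as np

rng = np.random.default_rng(0)

def dae_reconstruction_loss(x, encode, decode, noise_std=0.1):
    """Denoising-autoencoder loss: corrupt the input, but compare the
    reconstruction against the ORIGINAL (uncorrupted) x."""
    x_noisy = x + rng.normal(0.0, noise_std, size=x.shape)
    x_hat = decode(encode(x_noisy))
    return float(np.mean((x_hat - x) ** 2))

# Illustrative linear encoder/decoder with shared random weights (untrained).
d, k = 8, 3
W = rng.normal(size=(d, k)) / np.sqrt(d)
encode = lambda v: v @ W
decode = lambda h: h @ W.T

x = rng.normal(size=(5, d))   # 5 samples, d features
loss = dae_reconstruction_loss(x, encode, decode)
```

Because the target is the clean x, minimizing this loss forces the model both to reconstruct and to denoise, which is the regularizing effect the citing review highlights.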
“…The applications of early nonlinear fusion methods reviewed here have shown that these methods can outperform shallow methods on prediction tasks (e.g. [35, 36]). This demonstrates that DL methods are viable alternatives to traditional methods even when sample sizes are comparatively low: the applications reviewed above included cohorts of as few as 96 patients [28].…”
Section: Early Fusion
confidence: 99%
“…Most of the studies utilized DL architectures to analyze imaging and genomic data for risk prediction and stratification. Indicatively, in [64], [65], [66], [67], [68], [69] DL models were trained to classify and detect disease subtypes based on images and genetic data. These data-driven approaches demonstrated the advantage of ML-based frameworks in exploiting heterogeneous datasets for improved diagnosis and treatment.…”
Section: Literature Review
confidence: 99%
“…Another study by Elias and colleagues carried out neural network analysis on ncRNA data from EOCs to produce an algorithm to diagnose EOC [92]. A deep learning-based approach has also been used to analyze multi-omics data from ovarian cancers and identify subtypes [93].…”
Section: Figure
confidence: 99%