2018
DOI: 10.1016/j.cels.2018.05.017
Generalizable and Scalable Visualization of Single-Cell Data Using Neural Networks

Abstract: Visualization algorithms are fundamental tools for interpreting single-cell data. However, standard methods, such as t-stochastic neighbor embedding (t-SNE), are not scalable to datasets with millions of cells and the resulting visualizations cannot be generalized to analyze new datasets. Here we introduce net-SNE, a generalizable visualization approach that trains a neural network to learn a mapping function from high-dimensional single-cell gene-expression profiles to a low-dimensional visualization. We benc…
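The core idea described in the abstract can be illustrated with a minimal sketch: a feed-forward network maps each high-dimensional expression profile to a 2-D coordinate, so new cells can be projected through the learned function without re-running the embedding optimization. This is not the authors' implementation — the network below uses randomly initialized weights and illustrative sizes, and omits the t-SNE KL-divergence training objective that net-SNE actually optimizes.

```python
import numpy as np

rng = np.random.default_rng(0)

# Illustrative sizes (assumptions, not from the paper):
G, H = 500, 50  # number of genes, hidden units

# Randomly initialized weights; in net-SNE these would be trained
# to minimize a t-SNE-style KL-divergence objective.
W1 = rng.normal(scale=0.1, size=(G, H))
b1 = np.zeros(H)
W2 = rng.normal(scale=0.1, size=(H, 2))
b2 = np.zeros(2)

def embed(X):
    """Map an (n_cells, G) expression matrix to (n_cells, 2) coordinates."""
    h = np.maximum(X @ W1 + b1, 0.0)  # ReLU hidden layer
    return h @ W2 + b2                # linear 2-D output

# Because the map is an explicit function, unseen cells are embedded
# by a single forward pass -- the "generalizable" property.
X_new = rng.poisson(1.0, size=(100, G)).astype(float)  # mock count data
Y = embed(X_new)
print(Y.shape)  # (100, 2)
```

The scalability claim follows from the same structure: training touches data in mini-batches, and projecting a new dataset costs only matrix multiplications.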


Cited by 50 publications (39 citation statements)
References 42 publications
“…One potential advantage of this approach is that the 'most appropriate' perplexity does not need to grow with the sample size, as long as the mini-batch size remains constant. Parametric t-SNE has been recently applied to transcriptomic data under the names net-SNE (Cho et al, 2018) and scvis (Ding et al, 2018). The latter method combined parametric t-SNE with a variational autoencoder, and was claimed to yield more interpretable visualisations than standard t-SNE due to better preserving the global structure.…”
Section: Comparison To Related Work
confidence: 99%
“…Neural networks provide a popular framework for machine learning algorithms which can be used to interpret complex datasets. As a result, neural networks have been widely used in many fields, including for the analysis of scRNA-seq data [2-5] . Since the output data from scRNA-seq is feature-enriched and well-structured, it is well suited as an input for neural networks.…”
Section: Introduction
confidence: 99%
“…Several attempts to successfully apply t-SNE-like methods to massive datasets have been recently reported, including the aforementioned HSNE 7,33,34 , LargeVis 10 and net-SNE 35 . However, these improved methods, when applied to large datasets, often require or benefit from considerable computational resources; for instance, the LargeVis study was performed on a 512 GB RAM, 32-core workstation.…”
Section: Discussion
confidence: 99%