CLMB: Deep Contrastive Learning for Robust Metagenomic Binning (2022)
DOI: 10.1007/978-3-031-04749-7_23

Cited by 8 publications (11 citation statements)
References 38 publications
“…Based on the VAMB architecture, various methods have been developed to extend it or to use other sources of information. First, the authors of CLMB [114] took noise into account, which is rarely considered in metagenomic analysis. To do so, they simulated different types of noise, augmenting contig data with noised sequences.…”
Section: Results (mentioning)
Confidence: 99%
“…CL has been effectively applied in various domains such as self-supervised learning [Chen et al., 2020; Chuang et al., 2020], image classification [Khosla et al., 2020], gaze direction regression [Wang et al., 2022], and metagenomics [Zhang et al., 2022]. In this work, we adapt CL to compare different RNA-seq samples.…”
Section: Related Work (mentioning)
Confidence: 99%
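The contrastive learning (CL) referenced above is typically trained with a SimCLR-style NT-Xent objective over pairs of augmented views. A minimal PyTorch sketch follows; the cited works may use variants of this loss, and the batch size, embedding dimension, and temperature here are arbitrary.

```python
import torch
import torch.nn.functional as F

def nt_xent(z1, z2, temperature=0.5):
    """NT-Xent loss: (z1[i], z2[i]) are positive pairs, all other samples are negatives."""
    z = F.normalize(torch.cat([z1, z2], dim=0), dim=1)      # 2N x d, unit-length embeddings
    sim = z @ z.t() / temperature                            # scaled cosine similarities
    n = z1.size(0)
    mask = torch.eye(2 * n, dtype=torch.bool, device=z.device)
    sim = sim.masked_fill(mask, float("-inf"))               # exclude self-similarity
    targets = torch.cat([torch.arange(n, 2 * n), torch.arange(0, n)]).to(z.device)
    return F.cross_entropy(sim, targets)                     # positives are the paired views

# Example: embeddings of two augmented views of the same 8 samples.
loss = nt_xent(torch.randn(8, 32), torch.randn(8, 32))
```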
“…The proposed model employs convolutional layers with rectified linear unit (ReLU) activation functions, in addition to incorporating max-pooling layers inside the encoder portion. This choice of a convolutional autoencoder is made to reduce computational complexity and improve overall performance [37]. The encoder component of the model applies a non-linear transformation to convert the input vector into a lower-dimensional hidden representation.…”
Section: Stacked Convolutional Autoencoder (SCAE) (mentioning)
Confidence: 99%
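The quoted description amounts to a convolutional encoder of conv + ReLU + max-pooling blocks ending in a low-dimensional code. The sketch below shows one such encoder in PyTorch; the layer counts, channel widths, and 1-D input shape are assumptions for illustration, not the cited paper's exact architecture.

```python
import torch
import torch.nn as nn

class ConvEncoder(nn.Module):
    """Encoder half of a convolutional autoencoder: conv + ReLU + max-pooling, then a linear code."""
    def __init__(self, in_channels=1, input_len=128, latent_dim=16):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv1d(in_channels, 16, kernel_size=3, padding=1),
            nn.ReLU(),
            nn.MaxPool1d(2),                       # halves the sequence length
            nn.Conv1d(16, 32, kernel_size=3, padding=1),
            nn.ReLU(),
            nn.MaxPool1d(2),
        )
        self.to_latent = nn.Linear(32 * (input_len // 4), latent_dim)

    def forward(self, x):                          # x: (batch, channels, length)
        h = self.features(x)
        return self.to_latent(h.flatten(1))        # lower-dimensional hidden representation

z = ConvEncoder()(torch.randn(4, 1, 128))          # -> shape (4, 16)
```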