2016
DOI: 10.1007/s11634-016-0277-3
Rank-based classifiers for extremely high-dimensional gene expression data

Cited by 7 publications (9 citation statements)
References 20 publications
“…Strict protocols and specifications are needed to limit their influence a priori. Examples for existing data science methods to counteract noise effects are global normalization techniques [48], robustness procedures [8,42] or invariant models that are insensitive to data transformations [45]. Methods for aggregated data modalities Besides improving the analysis of single data modalities via additional samples and external domain knowledge, big data also brings up the challenge and promise of combining multiple data modalities (Fig.…”
Section: Data Science (mentioning)
confidence: 99%
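The "invariant models" this citing passage attributes to [45] are, in context, rank-based classifiers like those of the indexed paper: within-sample ranks are unchanged by any strictly increasing transformation of the raw intensities. A minimal sketch of that invariance, assuming only NumPy and SciPy (the data and names are illustrative, not from the cited works):

```python
import numpy as np
from scipy.stats import rankdata

rng = np.random.default_rng(0)
x = rng.lognormal(size=20)  # one simulated expression profile (positive values)

# Within-sample ranks are invariant under any strictly increasing
# transformation of the raw intensities, e.g. log or affine rescaling.
r_raw = rankdata(x)
r_log = rankdata(np.log(x))
r_scaled = rankdata(3.0 * x + 1.0)

assert np.array_equal(r_raw, r_log)
assert np.array_equal(r_raw, r_scaled)
```

A classifier trained on such ranks therefore gives identical predictions before and after any monotone normalization of the expression values.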
“…The efficacy of combining a large number of individual classifiers, also called base learners, has been well studied [10]-[16]. The main advantage of combining the results of many variants of the same classifier is that it leads to a reduction in the generalization error of the resultant ensemble classifier [11]-[13], [17], [18].…”
Section: Introduction (mentioning)
confidence: 99%
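The generalization-error claim in this passage is the classic bagging argument: a majority vote over many bootstrap-trained variants of one base classifier reduces variance. A minimal sketch, assuming scikit-learn and a synthetic dataset (all names, sizes, and hyperparameters are illustrative, not taken from the cited works):

```python
import numpy as np
from sklearn.datasets import make_classification
from sklearn.model_selection import train_test_split
from sklearn.tree import DecisionTreeClassifier
from sklearn.metrics import accuracy_score

X, y = make_classification(n_samples=400, n_features=50, random_state=0)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)

rng = np.random.default_rng(0)
votes = []
for _ in range(25):
    # Each base learner sees a bootstrap resample of the training data.
    idx = rng.integers(0, len(X_tr), len(X_tr))
    tree = DecisionTreeClassifier(max_depth=3).fit(X_tr[idx], y_tr[idx])
    votes.append(tree.predict(X_te))

# Majority vote across the variants of the same base classifier.
ensemble_pred = (np.mean(votes, axis=0) > 0.5).astype(int)

single = DecisionTreeClassifier(max_depth=3).fit(X_tr, y_tr)
print("single tree:", accuracy_score(y_te, single.predict(X_te)))
print("ensemble   :", accuracy_score(y_te, ensemble_pred))
```

On most random seeds the voted ensemble matches or beats the single tree, which is the reduction in generalization error the citing authors describe.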
“…Various authors have suggested that combining weak models leads to efficient ensembles (Schapire 1990; Domingos 1996; Quinlan 1996; Maclin and Opitz 2011; Hothorn and Lausen 2003; Janitza et al. 2015; Gul et al. 2016b; Lausser et al. 2016; Bolón-Canedo et al. 2012; Bhardwaj et al. 2016; Liberati et al. 2017). Combining the outputs of multiple classifiers also reduces generalization error (Domingos 1996; Quinlan 1996; Bauer and Kohavi 1999; Maclin and Opitz 2011; Tzirakis and Tjortjis 2017).…”
Section: Introduction (mentioning)
confidence: 99%
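The weak-model line of work traced here to Schapire (1990) is boosting: many learners, each only slightly better than chance, can be combined into a strong ensemble. A hedged sketch using scikit-learn's AdaBoostClassifier, whose default base learner is a depth-1 decision stump; the dataset and settings are arbitrary:

```python
from sklearn.datasets import make_classification
from sklearn.ensemble import AdaBoostClassifier
from sklearn.model_selection import cross_val_score
from sklearn.tree import DecisionTreeClassifier

X, y = make_classification(n_samples=500, n_features=20, random_state=1)

# A single depth-1 stump is a deliberately weak model.
stump = DecisionTreeClassifier(max_depth=1)
# AdaBoost combines 100 such stumps (its default base learner is a depth-1 tree).
boosted = AdaBoostClassifier(n_estimators=100, random_state=1)

print("weak stump:", cross_val_score(stump, X, y, cv=5).mean())
print("boosted   :", cross_val_score(boosted, X, y, cv=5).mean())
```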