2014
DOI: 10.1201/b17895
Introduction to High-Dimensional Statistics

Cited by 169 publications (169 citation statements) · References 0 publications
“…MDL and STACT have lower model fidelity and confidence when compared to AGSS and Heuristic, indicating they penalize model complexity too severely and terminate the splitting too early. This is in agreement with the fact that MDL and STACT assume a data size much larger than the number of model parameters [15] [24],…”
Section: Empirical Experimental Results (supporting)
confidence: 86%
“…Estimating the parameters of Gaussian models (or GMM) in such high-dimensional spaces is complex. When p is large, patches seen as points in R^p are essentially isolated, and the Euclidean distance and the notion of nearest neighbor become much less reliable than in low-dimensional spaces [9]. These phenomena, known as the curse of dimensionality, make it difficult to decide which patches should be grouped together in a common Gaussian model.…”
Section: Inference in High Dimension (mentioning)
confidence: 99%
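The distance-concentration effect described in the excerpt above can be checked with a short simulation (a minimal NumPy sketch; the point count, the uniform distribution, and the dimensions tried are illustrative assumptions, not taken from the book). As the dimension p grows, the relative gap between a point's nearest and farthest neighbor shrinks, which is why nearest-neighbor distances lose their discriminating power.

```python
import numpy as np

rng = np.random.default_rng(0)

def distance_contrast(p, n=500):
    """Relative contrast (d_max - d_min) / d_min of the Euclidean distances
    from one reference point to n - 1 others, drawn uniformly in [0, 1]^p."""
    X = rng.random((n, p))
    d = np.linalg.norm(X[1:] - X[0], axis=1)  # distances to the first point
    return (d.max() - d.min()) / d.min()

for p in (2, 10, 100, 1000):
    print(f"p = {p:4d}  contrast = {distance_contrast(p):.2f}")
```

For small p the nearest neighbor is far closer than the farthest point (large contrast); by p = 1000 almost all pairwise distances are nearly equal, so "nearest" carries little information.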
“…Such data have very many dimensions and are hence called high-dimensional data. Unfortunately, in such high-dimensional data the distances and locations of the data points become dispersed and scattered, so the data take on a sparse shape [3][4][5]. Therefore, clustering is the only alternative technique to solve such a classification problem.…”
Section: Introduction (mentioning)
confidence: 99%
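The excerpt above proposes clustering as the way to group scattered high-dimensional points. A minimal sketch of Lloyd's k-means in NumPy (the synthetic data, the cluster means, and the dimension p = 200 are illustrative assumptions, not from the book) shows that when clusters are well separated relative to their spread, plain k-means still recovers them even in high dimension:

```python
import numpy as np

rng = np.random.default_rng(1)

def kmeans(X, k, iters=50):
    """Plain Lloyd's algorithm: assign each point to its nearest centroid,
    then recompute each centroid as the mean of its assigned points."""
    centroids = X[rng.choice(len(X), size=k, replace=False)]
    for _ in range(iters):
        dists = np.linalg.norm(X[:, None, :] - centroids[None, :, :], axis=2)
        labels = dists.argmin(axis=1)
        # keep the old centroid if a cluster happens to go empty
        centroids = np.array([
            X[labels == j].mean(axis=0) if np.any(labels == j) else centroids[j]
            for j in range(k)
        ])
    return labels

# two well-separated Gaussian clusters in p = 200 dimensions (illustrative)
p = 200
X = np.vstack([
    rng.normal(0.0, 1.0, size=(100, p)),  # cluster A around the origin
    rng.normal(4.0, 1.0, size=(100, p)),  # cluster B, mean shifted in every coordinate
])
labels = kmeans(X, k=2)
```

The separation here (distance ≈ 4√p between cluster means, within-cluster spread ≈ √p) is generous by design; with overlapping or low-separation clusters, high-dimensional k-means degrades for exactly the distance-concentration reasons the excerpts discuss.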