2017
DOI: 10.1080/02664763.2017.1396296

Modeling with a large class of unimodal multivariate distributions

Abstract: In this paper we introduce a new class of multivariate unimodal distributions, motivated by Khintchine's representation. We start by proposing a univariate model whose support covers all the unimodal distributions on the real line. The proposed class of unimodal distributions extends naturally to higher dimensions via the multivariate Gaussian copula. Under both univariate and multivariate settings, we provide MCMC algorithms to perform inference about the model parameters and predictive densities…

Cited by 4 publications (4 citation statements)
References 29 publications
“…The third choice consists of noise modeled as a unimodal distribution based on the Rayleigh distribution. More precisely, by employing the unimodal distributions representation derived by Khintchine (1938), and following the work of Paez and Walker (2018), we find that any unimodal density f Y (y) can be expressed as…”
Section: Error Distribution For Sampling Model
confidence: 82%
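The excerpt above is cut off mid-sentence; for reference, the result it invokes is Khintchine's (1938) unimodality theorem, which (in its standard statement, notation mine) can be written as:

```latex
% Khintchine (1938): Y is unimodal about 0 if and only if
% Y is distributed as the product of an independent uniform and some Z:
Y \stackrel{d}{=} U\,Z, \qquad U \sim \mathrm{Uniform}(0,1), \qquad U \perp Z .
% Equivalently, for y > 0 the density of Y satisfies
f_Y(y) \;=\; \int_{y}^{\infty} z^{-1}\, dF_Z(z),
% with the mirror-image expression for y < 0.
```

This is the representation that Paez and Walker (2018) exploit: modeling the mixing distribution $F_Z$ flexibly yields an arbitrary unimodal density $f_Y$.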
“…Most common techniques include: i) the Silverman bandwidth test (Silverman 1981), which uses a normal kernel density estimate with increasing bandwidth; ii) the excess mass test proposed by Müller and Sawitzki (1991); and iii) Hartigan's Dip Test (Hartigan and Hartigan 1985), which has been widely adopted due to its low computational complexity, its high statistical power, and its absence of tuning parameters. The extension of these tests to multivariate distributions is not straightforward, and several definitions of unimodality have been suggested for the multidimensional setting (Paez and Walker 2018; Kouvaras and Kokolakis 2007). Hartigan's SPAN and RUNT statistics (Hartigan and Mohanty 1992) and the MAP test (Rozál and Hartigan 1994) are often cited as multivariate alternatives, but these procedures are usually far more complex than univariate ones, both conceptually and computationally, relying for instance on the construction of several spanning trees (Siffer et al 2018).…”
Section: Refine: Improving The Location By Outliers Filtering
confidence: 99%
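To make the first of the tests listed above concrete, here is a minimal sketch of Silverman's critical-bandwidth idea using SciPy's `gaussian_kde`. The function names (`count_modes`, `critical_bandwidth`) and the bandwidth search grid are my own illustrative choices, not part of any of the cited works; a full Silverman test would add a bootstrap calibration step on top of this.

```python
import numpy as np
from scipy.stats import gaussian_kde

def count_modes(x, bandwidth, grid_size=512):
    """Count local maxima of a Gaussian KDE of x at a fixed kernel bandwidth."""
    # gaussian_kde scales a scalar bw_method by the sample std, so divide it out
    kde = gaussian_kde(x, bw_method=bandwidth / x.std(ddof=1))
    grid = np.linspace(x.min() - 3 * bandwidth, x.max() + 3 * bandwidth, grid_size)
    dens = kde(grid)
    # interior grid points strictly higher than both neighbours
    return int(np.sum((dens[1:-1] > dens[:-2]) & (dens[1:-1] > dens[2:])))

def critical_bandwidth(x, k=1, bw_grid=None):
    """Smallest bandwidth on a grid at which the KDE has at most k modes.

    Silverman's (1981) observation: this critical bandwidth is large when
    the data are genuinely multimodal, small when they are unimodal.
    """
    if bw_grid is None:
        bw_grid = np.linspace(0.05, 5.0, 200) * x.std(ddof=1)
    for h in bw_grid:  # ascending search: modes only merge as h grows
        if count_modes(x, h) <= k:
            return h
    return bw_grid[-1]
```

A clearly bimodal sample needs a much larger bandwidth to smooth down to one mode than a unimodal one does, which is what the (bootstrap-calibrated) test exploits.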
“…To cluster the data, a parametric model, such as the multivariate normal [Fraley & Raftery [2002]], uniform normal [Banfield & Raftery [1993]], multivariate t [Peel & McLachlan [2000]], or a skew variant of one of these distributions [Lee & McLachlan [2013]], is typically assumed to describe the component distributions F m . Non-parametric approaches are also possible: Li et al [2007] suppose the mixture is made up of non-parametric components and estimate them with Gaussian kernels; Rodríguez & Walker [2014] and Paez & Walker [2017] take a Bayesian approach to estimating mixture models based on unimodal distributions; Kosmidis & Karlis [2016] take a copula-based approach that influenced the model formulation used in this work. The EM algorithm of Dempster et al [1977] (or one of its extensions) is then used to find local maximizers of the likelihood for a fixed number of clusters g. The number of components g * can be determined in terms of the model by picking the g * that optimizes some criterion (such as BIC) over a user-specified range of g, and subsequently refined by merging clusters with, for example, the method of Baudry et al [2010] or Tantrum et al [2003].…”
Section: Introduction
confidence: 99%
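The fit-by-EM-then-pick-g*-by-BIC workflow described in the excerpt above can be sketched with scikit-learn's `GaussianMixture` (my choice of library here; the cited works use their own implementations and models, and the helper `select_components` is hypothetical):

```python
import numpy as np
from sklearn.mixture import GaussianMixture

def select_components(X, g_range=range(1, 7), seed=0):
    """Fit a Gaussian mixture by EM for each g and return the g minimizing BIC."""
    bics = {}
    for g in g_range:
        # n_init restarts guard against EM's local maxima
        gm = GaussianMixture(n_components=g, n_init=3, random_state=seed).fit(X)
        bics[g] = gm.bic(X)
    best_g = min(bics, key=bics.get)  # BIC is minimized, not maximized
    return best_g, bics
```

Cluster-merging refinements such as Baudry et al [2010] would then operate on the fitted components for the selected g*.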
“…SCAMP takes a perspective similar to Paez & Walker [2017]: clusters are defined in terms of unimodality. In many scientific contexts, unimodality of observations along a measurement coordinate of the data matrix X reflects physical homogeneity of interest.…”
Section: Introduction
confidence: 99%