2019
DOI: 10.3150/18-bej1037

A one-sample test for normality with kernel methods

Abstract: We propose a new one-sample test for normality in a Reproducing Kernel Hilbert Space (RKHS). Namely, we test the null hypothesis of belonging to a given family of Gaussian distributions. Hence our procedure may be applied either to test data for normality or to test parameters (mean and covariance) if the data are assumed Gaussian. Our test is based on the same principle as the MMD (Maximum Mean Discrepancy), which is usually used for two-sample problems such as homogeneity or independence testing. Our method makes us…
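Since the abstract is only a summary, the sketch below illustrates the general MMD principle it refers to, not the authors' exact procedure. For a univariate sample and a Gaussian kernel, the squared MMD between the empirical distribution and a fitted Gaussian has a closed form, and a parametric bootstrap can calibrate the composite null. The kernel width gamma, the bootstrap size, and the names mmd2_to_gaussian / normality_test are illustrative assumptions, not taken from the paper.

```python
# Minimal MMD-style one-sample normality test in 1D (illustrative sketch only).
# Kernel: k(x, y) = exp(-(x - y)^2 / (2 * gamma^2)).
# Closed forms used below (Y, Y' ~ N(mu, sigma^2), independent):
#   E[k(x, Y)]  = gamma / sqrt(gamma^2 + sigma^2) * exp(-(x - mu)^2 / (2 (gamma^2 + sigma^2)))
#   E[k(Y, Y')] = gamma / sqrt(gamma^2 + 2 sigma^2)
import numpy as np

def mmd2_to_gaussian(x, mu, sigma, gamma=1.0):
    """Biased (V-statistic) estimate of MMD^2 between the empirical law of x and N(mu, sigma^2)."""
    d = x[:, None] - x[None, :]
    term_xx = np.exp(-d**2 / (2 * gamma**2)).mean()                  # E_n E_n k(X, X')
    term_xg = (gamma / np.sqrt(gamma**2 + sigma**2)
               * np.exp(-(x - mu)**2 / (2 * (gamma**2 + sigma**2)))).mean()  # E_n E_P k(X, Y)
    term_gg = gamma / np.sqrt(gamma**2 + 2 * sigma**2)               # E_P E_P k(Y, Y')
    return term_xx - 2 * term_xg + term_gg

def normality_test(x, n_boot=500, gamma=1.0, seed=0):
    """Composite test of H0: 'x is Gaussian', with mean and variance estimated from the data."""
    rng = np.random.default_rng(seed)
    mu_hat, sigma_hat = x.mean(), x.std(ddof=1)
    stat = mmd2_to_gaussian(x, mu_hat, sigma_hat, gamma)
    boot = np.empty(n_boot)
    for b in range(n_boot):
        # Parametric bootstrap: resample under the fitted Gaussian and re-estimate the parameters.
        xb = rng.normal(mu_hat, sigma_hat, size=len(x))
        boot[b] = mmd2_to_gaussian(xb, xb.mean(), xb.std(ddof=1), gamma)
    p_value = (1 + np.sum(boot >= stat)) / (n_boot + 1)
    return stat, p_value

if __name__ == "__main__":
    rng = np.random.default_rng(1)
    print(normality_test(rng.normal(size=200)))       # Gaussian data: large p-value expected
    print(normality_test(rng.exponential(size=200)))  # skewed data: small p-value expected
```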

Cited by 18 publications (20 citation statements)
References 28 publications
“…Our paper fills an important gap in the literature by proposing the first set of kernel-based composite hypothesis tests applicable to a wide range of parametric models. These are in contrast to previously introduced composite tests, which are limited to very specific parametric families (Kellner et al. 2019; Fernandez et al. 2020). To devise these new tests, we make use of recently developed minimum distance estimators based on the KSD (Barp et al. 2019).…”
Section: Introduction (mentioning)
confidence: 87%
“…Here we assume μ₁ = 1, σ₁ = 1 and σ₂ = 1, and analyze the power of the test by changing the μ₂ parameter. As explained in the Appendix, there is a one-to-one correspondence between the μ₂ parameter and the excess kurtosis of the mixed Gaussian distribution (see formula (16)). Therefore, in Fig 8 we present the power of the test with respect to the excess kurtosis and compare it with the power of the common tests for normality.…”
Section: PLOS ONE (mentioning)
confidence: 94%
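As a worked illustration of the one-to-one correspondence this statement refers to, the sketch below computes the excess kurtosis of a two-component Gaussian mixture from its raw moments. The equal mixture weight w = 0.5 and the name mixture_excess_kurtosis are assumptions made here for illustration; the exact mixture weighting and formula (16) are in the cited paper.

```python
# Excess kurtosis of the mixture w * N(mu1, sigma1^2) + (1 - w) * N(mu2, sigma2^2),
# computed from the first four raw moments of each component.
import numpy as np

def mixture_excess_kurtosis(mu1, sigma1, mu2, sigma2, w=0.5):
    """Excess kurtosis of a two-component Gaussian mixture (equal weights assumed by default)."""
    def raw_moments(mu, s):
        # First four non-central moments of N(mu, s^2).
        return np.array([mu,
                         mu**2 + s**2,
                         mu**3 + 3 * mu * s**2,
                         mu**4 + 6 * mu**2 * s**2 + 3 * s**4])
    m = w * raw_moments(mu1, sigma1) + (1 - w) * raw_moments(mu2, sigma2)
    mean = m[0]
    var = m[1] - mean**2
    mu4 = m[3] - 4 * mean * m[2] + 6 * mean**2 * m[1] - 3 * mean**4  # central 4th moment
    return mu4 / var**2 - 3.0

# With mu1 = sigma1 = sigma2 = 1 as in the quoted experiment, the excess kurtosis varies
# monotonically with the separation |mu2 - mu1| under these equal weights, giving a
# one-to-one map; the cited experiment's exact weighting may differ.
for mu2 in (1.0, 2.0, 3.0, 4.0):
    print(mu2, mixture_excess_kurtosis(1.0, 1.0, mu2, 1.0))
```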
“…There have also been attempts in the literature to provide a one-sample statistical test of normality for data in a broader setting, such as a general Hilbert space [16]. Despite this, there have been continuing efforts to develop tests for the departure of a random sample from normality that could be considered omnibus [17,18,19,20,21,22], that is, able to reject the null hypothesis of normality with high power for a wide range of alternatives.…”
Section: Introduction (mentioning)
confidence: 99%
“…For a survey of classical methods see del Barrio et al. (2000), Section 3, and Henze (1994), and for comparative simulation studies, see Baringhaus et al. (1989); Farrell and Rogers-Stewart (2006); Landry and Lepage (1992); Pearson et al. (1977); Romão et al. (2010); Shapiro et al. (1968); Yap and Sim (2011). For a survey on tests of multivariate normality see Henze (2002), for recent multivariate tests see , and for new developments on normality tests for Hilbert space valued random elements, see Henze and Jiménez-Gamero (2021); Kellner and Celisse (2019).…”
Section: Introduction (mentioning)
confidence: 99%