1978
DOI: 10.2307/2347163
Approximations for the Percentage Points of the Chi-Squared Distribution

Abstract: Sixteen formulae for approximating percentage points of χ2 were examined at each of 15 significance levels. The formula of Wilson and Hilferty is fairly good at most significance levels, but the best approximations without resorting to computer capability are obtained using the Cornish-Fisher expansion or an empirically derived improvement of the Severo-Zelen modification of the Wilson-Hilferty formula.
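The Wilson-Hilferty formula named in the abstract approximates a χ2 percentage point by a cube-root normal transform: χ2_p(ν) ≈ ν(1 − 2/(9ν) + z_p·√(2/(9ν)))³, where z_p is the standard normal quantile. A minimal stdlib sketch of that formula (an illustration, not code from the paper):

```python
from statistics import NormalDist

def wilson_hilferty(p: float, df: float) -> float:
    """Approximate the p-th quantile of the chi-squared distribution
    with df degrees of freedom via the Wilson-Hilferty cube-root
    normal approximation."""
    z = NormalDist().inv_cdf(p)      # standard normal quantile z_p
    c = 2.0 / (9.0 * df)
    return df * (1.0 - c + z * c ** 0.5) ** 3

# Example: the exact 95th percentile for 10 df is about 18.307;
# the approximation here lands within about 0.02 of it.
print(wilson_hilferty(0.95, 10))
```

This cheapness (one normal quantile, a few arithmetic operations) is why the paper treats it as the baseline against which fancier approximations are judged.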

Cited by 36 publications (10 citation statements)
References 14 publications
“…Note that, as in Examples 1 and 2, we do not use Theorem 1.1(b) here because, for n > 2/a, the bound in (1.14) will be larger than the one in (1.10). On the other hand, if a < 1, then the cases 2 < n < 2/a can be covered only by (1.14) and (1.20) by choosing […]. It is worth mentioning here that Zar [21] carried out numerical comparisons of sixteen approximations to Fn(x) in the present case for various combinations of n and x. His conclusions indicate that, to the order of […], […] seems to be the most accurate approximation.…”
Section: Some Examples
confidence: 82%
“…[…] depicts a degree of influence, which can simply be set to one. The inverse of the chi-square cumulative distribution function, at a desired confidence level […], can be estimated from any close approximation of the quantile function of the distribution, such as the Wilson-Hilferty method [48]. Finally, Condition A is defined as…”
Section: Online Parameters Self-adaptation For Gt2fgg
confidence: 99%
“…However, as specified in the inequalities in (48) and (47), we leave out the queried feature so that we can assess the impact of its removal on the F-ratio. Thus, the smaller the resulting F-ratio, the higher the relevance of the feature.…”
Section: F Online Feature Ranking and Rule Re-scaling
confidence: 99%
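The leave-one-feature-out relevance idea in this excerpt can be sketched with a toy pooled between-class/within-class variance ratio (the data, the function name, and this simple definition of the F-ratio are all assumptions for illustration, not the cited paper's fuzzy-system formulation):

```python
import statistics

def f_ratio(X, y, exclude=None):
    """Pooled between-class / within-class variance ratio over all
    features, optionally leaving one feature out (toy definition)."""
    feats = [j for j in range(len(X[0])) if j != exclude]
    classes = sorted(set(y))
    between = within = 0.0
    for j in feats:
        col = [row[j] for row in X]
        grand = statistics.fmean(col)
        for c in classes:
            grp = [row[j] for row, lab in zip(X, y) if lab == c]
            m = statistics.fmean(grp)
            between += len(grp) * (m - grand) ** 2
            within += sum((v - m) ** 2 for v in grp)
    return between / within

# Toy data: feature 0 separates the two classes, feature 1 is noise.
X = [[0.0, 1], [0.1, 0], [0.2, 1], [1.0, 0], [1.1, 1], [1.2, 0]]
y = [0, 0, 0, 1, 1, 1]

# Dropping the informative feature collapses the F-ratio, so feature 0
# ranks as the more relevant one, matching the rule in the excerpt.
print(f_ratio(X, y, exclude=0), f_ratio(X, y, exclude=1))
```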
“…(28) Take a set of n independent samples, t1, t2, ..., tn, drawn uniformly from the space T. The Monte Carlo estimate of I(h) is then Î(h) = (1/n) Σi h(ti). If a different region of integration is needed, an appropriate integral transformation can be applied.…”
Section: Monte Carlo-based Integration
confidence: 99%
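The uniform-sampling estimator described in that excerpt can be sketched as follows (assuming the region T is the unit interval; the integrand is an arbitrary stand-in):

```python
import random

def mc_integrate(h, n=100_000, seed=0):
    """Monte Carlo estimate of the integral of h over [0, 1]:
    draw n uniform samples t_i and average h(t_i)."""
    rng = random.Random(seed)
    return sum(h(rng.random()) for _ in range(n)) / n

# Integral of t^2 over [0, 1] is exactly 1/3; with n = 100,000 uniform
# samples the estimate is typically within about 0.003 of that.
print(mc_integrate(lambda t: t * t))
```

A transformation to another region T, as the excerpt notes, only requires mapping the uniform samples through the change of variables and scaling by its Jacobian.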
“…We use the Cornish-Fisher approximation [28] to the chi-square cumulative distribution function to obtain a k1 at the 99.9th percentile for some n. The left side of the equation below represents the set of all parameter values that yield a sum of squares less than k1. We can take some maximum value, k1 >> nσ2, such that sample points yielding a sum-of-squares value greater than k1 contribute relatively little to the integration, since the density asymptotically approaches zero.…”
Section: Importance Sampling For Peaked Integrands
confidence: 99%
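The high-percentile cutoff k1 in that excerpt can be sketched with one common truncation of the Cornish-Fisher expansion for χ2 quantiles (terms through 1/df; whether this matches the exact truncation used in the cited work's reference [28] is an assumption):

```python
from statistics import NormalDist

def cornish_fisher_chi2(p: float, df: float) -> float:
    """Approximate chi-squared quantile via a Cornish-Fisher
    expansion in the standard normal quantile z, truncated
    after the 1/df term."""
    z = NormalDist().inv_cdf(p)
    s = (2.0 * df) ** 0.5
    return (df + z * s
            + (2.0 / 3.0) * (z * z - 1.0)
            + (z ** 3 - 7.0 * z) / (9.0 * s)
            - (6.0 * z ** 4 + 14.0 * z * z - 32.0) / (405.0 * df))

# A 99.9th-percentile cutoff k1 for df = 20: the exact quantile is
# about 45.315, and this truncation lands within a few hundredths.
print(cornish_fisher_chi2(0.999, 20))
```

Each extra term buys roughly another order of accuracy, which is consistent with the original article's finding that the Cornish-Fisher expansion beats the plain Wilson-Hilferty formula.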