Handbook of Computational Econometrics 2009
DOI: 10.1002/9780470748916.ch6

Bootstrap Hypothesis Testing

Abstract: This paper surveys bootstrap and Monte Carlo methods for testing hypotheses in econometrics. Several different ways of computing bootstrap P values are discussed, including the double bootstrap and the fast double bootstrap. It is emphasized that there are many different procedures for generating bootstrap samples for regression models and other types of model. As an illustration, a simulation experiment examines the performance of several methods of bootstrapping the supF test for structural change with an un…

Cited by 122 publications (82 citation statements)
References 105 publications (102 reference statements)
“…The relevant test statistics under the null hypothesis are derived and the process is repeated 10,000 times to generate a bootstrap distribution for each test statistic. For hypothesis testing, we obtain a bootstrap p-value as the proportion of the test statistics from the null hypothesis distribution that are more extreme than the observed test statistic (Davidson and MacKinnon, 2006; MacKinnon, 2009). Results in Table 6 provide further support for our previous Table 4 results, confirming the significant difference in the coefficients of RESP between the 1993-2001 and the … (Table 7 caption: Regressions of ARET on RESP and tests of RESP coefficients between subsamples, two-dimension clustering procedures.)…”
Section: Tests of RESP Coefficients Between Subsamples: The Fama-MacBeth…
supporting, confidence: 68%
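The p-value rule quoted in this excerpt — the share of bootstrap statistics generated under the null that are at least as extreme as the observed statistic — is simple to compute. Below is a minimal sketch in Python; the function name and the symmetric two-sided variant are illustrative assumptions, not details from the cited papers.

```python
import numpy as np

def bootstrap_p_value(t_obs, t_boot, two_sided=True):
    """Bootstrap p-value: proportion of bootstrap statistics (generated
    under the null hypothesis) at least as extreme as the observed one."""
    t_boot = np.asarray(t_boot, dtype=float)
    if two_sided:
        # Symmetric two-sided test: compare absolute values.
        return np.mean(np.abs(t_boot) >= np.abs(t_obs))
    # One-sided test: reject for large values of the statistic.
    return np.mean(t_boot >= t_obs)

# Example: with B = 10,000 bootstrap statistics, as in the excerpt,
# p = bootstrap_p_value(t_obs, t_boot) lies on a grid of width 1/10,000.
```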
“…(13). For the jth repetition, we use the empirical distribution of the predicted errors (Fox, 2008; MacKinnon et al., 2009):…”
Section: Appendix A: Estimation Methods
mentioning, confidence: 99%
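The excerpt draws, on each repetition, from the empirical distribution of the predicted errors. A minimal residual-bootstrap sketch for a linear regression is shown below; the OLS setup, the variable names, and the use of unrestricted residuals are assumptions made for illustration rather than details taken from the cited appendix.

```python
import numpy as np

def residual_bootstrap(y, X, n_boot=10_000, seed=None):
    """Residual bootstrap for y = X beta + u: resample the OLS residuals
    (the empirical distribution of the predicted errors) with replacement
    and rebuild y* = X beta_hat + u* on each repetition."""
    rng = np.random.default_rng(seed)
    beta_hat, *_ = np.linalg.lstsq(X, y, rcond=None)
    resid = y - X @ beta_hat
    n, k = X.shape
    betas = np.empty((n_boot, k))
    for b in range(n_boot):
        # Draw errors from their empirical distribution.
        u_star = rng.choice(resid, size=n, replace=True)
        y_star = X @ beta_hat + u_star
        betas[b], *_ = np.linalg.lstsq(X, y_star, rcond=None)
    return betas
```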
“…Among the most widely known and applied bootstrap data-generating processes (DGPs) are the residual bootstrap, the pairs bootstrap and the wild bootstrap. As noted by MacKinnon (2009), in the case of heteroskedastic errors the residual bootstrap cannot be used, as this method assumes independent and identically distributed errors. As we assume heteroskedastic errors and high-leverage observations (see Sections … and …), in the following we concentrate exclusively on the pairs and wild bootstrap.…”
Section: Spatial Bootstrap Methods
mentioning, confidence: 99%
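To make the contrast in this excerpt concrete, the sketch below implements the two bootstrap DGPs the authors keep: the pairs bootstrap, which resamples whole (y_i, x_i) rows, and the wild bootstrap, which keeps each observation's own residual but multiplies it by a random sign, so heteroskedasticity is preserved. The function names, the Rademacher weights, and the plain OLS setup are illustrative assumptions, not details from the cited paper.

```python
import numpy as np

def pairs_bootstrap(y, X, n_boot=999, seed=None):
    """Pairs bootstrap: resample (y_i, x_i) rows jointly, imposing no
    assumption on the error distribution (robust to heteroskedasticity)."""
    rng = np.random.default_rng(seed)
    n, k = X.shape
    betas = np.empty((n_boot, k))
    for b in range(n_boot):
        idx = rng.integers(0, n, size=n)  # sample rows with replacement
        betas[b], *_ = np.linalg.lstsq(X[idx], y[idx], rcond=None)
    return betas

def wild_bootstrap(y, X, n_boot=999, seed=None):
    """Wild bootstrap: keep each observation's own residual and flip its
    sign at random (Rademacher weights), preserving heteroskedasticity."""
    rng = np.random.default_rng(seed)
    beta_hat, *_ = np.linalg.lstsq(X, y, rcond=None)
    resid = y - X @ beta_hat
    n, k = X.shape
    betas = np.empty((n_boot, k))
    for b in range(n_boot):
        v = rng.choice([-1.0, 1.0], size=n)  # Rademacher weights
        y_star = X @ beta_hat + resid * v
        betas[b], *_ = np.linalg.lstsq(X, y_star, rcond=None)
    return betas
```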