2017
DOI: 10.1177/0149206317710723

Advancing Theory by Assessing Boundary Conditions With Metaregression: A Critical Review and Best-Practice Recommendations

Abstract: Understanding boundary conditions, or situations when relations between variables change depending on values of other variables, is critical for theory advancement and for providing guidance for practice. Metaregression is ideally suited to investigate boundary conditions because it provides information on the presence and strength of such conditions. In spite of its potential, results of our review of 63 metaregression articles published in

Cited by 117 publications (132 citation statements)
References 101 publications
“…In line with current conventions (Eisend, Evanschitzky, and Gilliland, ), this research transformed these corrected effect sizes into Fisher's z‐coefficients, adjusted the effect size of each study using the inverse variance weight to account for varying sample sizes across studies, and then reconverted them into correlation coefficients (Hedges and Olkin, ). Following recent meta‐analyses (Eisend, Evanschitzky, and Gilliland, 2016; Kraft and Bausch, ) and recommendations (Gonzalez‐Mulé and Aguinis, ), the random effects approach was chosen in this study to synthesize effect sizes because it is more conservative (e.g., larger confidence intervals around the mean effect sizes) than the fixed effects model and is not subject to Type I bias in significance tests (Geyskens et al, ; Lipsey and Wilson, ). While the fixed effects approach assumes that variability stems only from within‐study variance, the random effects model attributes variability to both within‐study variance and between‐study variance (Lipsey and Wilson, ).…”
Section: Methods
confidence: 99%
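The pooling procedure described in the excerpt above (Fisher's z transformation, inverse-variance weighting, random-effects synthesis, back-transformation) can be sketched in Python. This is a minimal illustration, not the cited authors' implementation: the function names are hypothetical, and the DerSimonian-Laird estimator for the between-study variance tau² is one common choice of random-effects estimator, assumed here for concreteness.

```python
import math

def fisher_z(r):
    """Transform a correlation r into Fisher's z."""
    return 0.5 * math.log((1 + r) / (1 - r))

def inv_fisher_z(z):
    """Back-transform Fisher's z into a correlation."""
    return math.tanh(z)

def random_effects_pool(rs, ns):
    """Pool correlations under a random-effects model.

    Steps: transform each r to z, weight by inverse variance
    (var of Fisher's z is 1 / (n - 3)), estimate between-study
    variance tau^2 (DerSimonian-Laird), re-weight so each study's
    variance includes tau^2, pool, and back-transform.
    """
    zs = [fisher_z(r) for r in rs]
    w = [n - 3 for n in ns]                 # fixed-effect weights: 1 / var(z)
    z_fixed = sum(wi * zi for wi, zi in zip(w, zs)) / sum(w)
    # Cochran's Q and the DerSimonian-Laird tau^2 estimate
    q = sum(wi * (zi - z_fixed) ** 2 for wi, zi in zip(w, zs))
    c = sum(w) - sum(wi ** 2 for wi in w) / sum(w)
    tau2 = max(0.0, (q - (len(rs) - 1)) / c)
    # random-effects weights add tau^2 to each study's variance,
    # which widens confidence intervals relative to the fixed-effects model
    w_re = [1.0 / (1.0 / wi + tau2) for wi in w]
    z_re = sum(wi * zi for wi, zi in zip(w_re, zs)) / sum(w_re)
    return inv_fisher_z(z_re)
```

Because the random-effects weights fold tau² into every study's variance, heterogeneous collections of studies receive more equal weights and a wider confidence interval, which is the conservatism the excerpt refers to.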
“…Theorizing alternative or shifting (extending or contracting) boundary conditions of extant theorizing is a third distinctive avenue through which a review may shed new light on a phenomenon. While scholars might agree on the importance of understanding alternative or shifting boundaries, and thus on the consequent need to establish boundary conditions (Gonzalez‐Mulé and Aguinis, ), the definitions of how boundary conditions might be understood are imprecise (Busse et al, ), potentially polarizing a research area. Boundary conditions are articulated by Whetten (, p. 492) as ‘plac[ing] limitations on the propositions generated from a theoretical model’.…”
Section: Some Avenues For Advancing Theory With Reviews
confidence: 99%
“…In some cases, the identification of contingent factors may elucidate puzzling null results (e.g., Post and Byron, ). Further, a theoretically contributive literature review may introduce and empirically test moderators that cannot be ascertained in primary studies (Gonzalez‐Mulé and Aguinis, ). Data for these moderators may come from the studies under review (e.g., study design, operationalization of constructs, sample characteristics) or may be collected from external sources (e.g., country‐level data that correspond to the primary study’s national research setting).…”
Section: Some Avenues For Advancing Theory With Reviews
confidence: 99%
“…These were later adopted by criminal justice researchers Mark Lipsey and David Wilson in 2001, and the approach has become increasingly popular. Indeed, over half of the MARAs in Gonzalez‐Mulé and Aguinis’s () methodological review were published after 2010. Unlike personnel selection researchers (e.g., Hunter et al, ) who were mostly concerned with effect size (e.g., How well does a job selection test predict job performance?…”
Section: Meta‐analytic Regression Analysis (MARA)
confidence: 99%
“…Accordingly, the risk of Type I error due to sampling error is very high in MARA, and this risk grows when researchers use MARA to test many potential moderators simultaneously and without strong theoretical justification (Schmidt, ). Testing moderators backed by strong theory is one way to minimize capitalizing on chance (Schmidt, ), and Gonzalez‐Mulé and Aguinis () recommend establishing (and reporting) adequate statistical power before using MARA.…”
Section: Meta‐analytic Regression Analysis (MARA)
confidence: 99%
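At its core, the moderator test the excerpts describe is a weighted least squares regression of effect sizes on a moderator, with each study weighted by its inverse variance. The sketch below illustrates that single-moderator case under a fixed-effect meta-regression assumption; the function name and the simple z-test on the slope are illustrative choices, not the reviewed articles' exact procedure.

```python
import math

def mara_moderator_test(zs, ws, xs):
    """Single-moderator meta-regression via weighted least squares.

    zs: study effect sizes (e.g., Fisher's z values)
    ws: inverse-variance weights, one per study
    xs: moderator values, one per study
    Returns (slope, standard error, z statistic). A large |z|
    suggests the moderator shifts the effect size, i.e., a
    boundary condition on the focal relation.
    """
    sw = sum(ws)
    xbar = sum(w * x for w, x in zip(ws, xs)) / sw
    zbar = sum(w * z for w, z in zip(ws, zs)) / sw
    sxx = sum(w * (x - xbar) ** 2 for w, x in zip(ws, xs))
    b = sum(w * (x - xbar) * (z - zbar)
            for w, x, z in zip(ws, xs, zs)) / sxx
    # When the weights are true inverse variances, Var(b) = 1 / Sxx
    se = math.sqrt(1.0 / sxx)
    return b, se, b / se
```

The power concern raised above is visible in the standard error: with few studies or little spread in the moderator, Sxx is small, the standard error is large, and the test has little chance of detecting a real boundary condition, which is why establishing power before running MARA is recommended.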