2017
DOI: 10.3758/s13423-017-1323-7
Bayesian inference for psychology. Part II: Example applications with JASP

Abstract: Bayesian hypothesis testing presents an attractive alternative to p value hypothesis testing. Part I of this series outlined several advantages of Bayesian hypothesis testing, including the ability to quantify evidence and the ability to monitor and update this evidence as data come in, without the need to know the intention with which the data were collected. Despite these and other practical advantages, Bayesian hypothesis tests are still reported relatively rarely. An important impediment to the widespread …

Cited by 1,358 publications (1,090 citation statements); references 61 publications.
“…The analyses were executed in JASP (jasp-stats.org; JASP Team, 2017; Wagenmakers et al., in press; 2017). On the OSF, JASP output can be viewed without having JASP installed.…”
Section: Results (mentioning; confidence: 99%)
“…Once the models were built, the researcher had only to "turn the crank" of probabilistic inference: posterior probabilities are obtained through standard mechanisms that rely on little more than the sum and product rules of probability. As this example illustrates, the practical computation of posterior probabilities will often rely on calculus or numerical integration methods; several papers in this special issue deal with the computational software that is available (39, 58, 64, 65).…”
Section: Discussion (mentioning; confidence: 99%)
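The "sum and product rules" crank-turning that this citation describes can be illustrated with a minimal grid approximation. The coin-flip setup and the specific numbers below are invented for illustration; they are not from the cited paper.

```python
# Minimal "turn the crank" posterior computation on a discrete grid,
# using only the product rule (prior x likelihood) and the sum rule
# (normalisation). Hypothetical data: 7 successes in 10 trials.
import numpy as np

theta = np.linspace(0.01, 0.99, 99)          # grid over a binomial rate
prior = np.ones_like(theta) / theta.size     # uniform prior over the grid
k, n = 7, 10                                 # invented observed data
likelihood = theta**k * (1 - theta)**(n - k)
posterior = prior * likelihood               # product rule
posterior /= posterior.sum()                 # sum rule (normalisation)
```

With a uniform prior the posterior mode coincides with the maximum-likelihood estimate k/n = 0.7; richer models replace the grid sum with the calculus or numerical integration the passage mentions.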
“…Recent computational developments have made the computation of Bayes factors more tractable, especially for common scenarios (64, 65). For uncommon or complex scenarios, one might resort to reporting a different model-comparison metric that does not rely on the marginal likelihood, such as the various information criteria (AIC, BIC, DIC, WAIC) or leave-one-out cross-validation (LOOCV; see 56, 59, 60).…”
Section: T E (mentioning; confidence: 99%)
“…Choosing a narrower distribution (i.e., a lower value of r) would result in a high level of similarity between H1 and H0, making the tests uninformative. Conversely, choosing an unreasonably wide distribution would result in the BF favouring the null too heavily by placing too much weight on extreme effect-size values (Etz et al., 2017; Rouder et al., 2009; Wagenmakers et al., 2017). In addition to calculating the BF, robustness checks and sequential analyses (combined with a robustness analysis) of the BF were carried out and visualized in plots.…”
Section: Statistical Procedures (mentioning; confidence: 99%)
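The robustness check described in this citation amounts to recomputing the Bayes factor over a grid of prior widths r. The sketch below does this for a JZS one-sample Bayes factor (Rouder et al., 2009); the t statistic, sample size, and grid of widths are invented for illustration.

```python
# Hypothetical robustness check: recompute a JZS Bayes factor across
# several Cauchy prior widths r, as described in the quoted passage.
import numpy as np
from scipy import integrate

def bf10(t, n, r):
    """JZS BF10 for a one-sample t-test with a Cauchy(0, r) prior."""
    nu = n - 1
    m0 = (1 + t**2 / nu) ** (-(nu + 1) / 2)

    def integrand(g):
        return ((1 + n * g) ** -0.5
                * (1 + t**2 / ((1 + n * g) * nu)) ** (-(nu + 1) / 2)
                * r / np.sqrt(2 * np.pi) * g**-1.5 * np.exp(-r**2 / (2 * g)))

    return integrate.quad(integrand, 0, np.inf)[0] / m0

# Invented data: t = 2.5 with n = 40. A very wide prior spends mass on
# implausibly large effects and pulls the BF back towards the null.
widths = [0.2, 0.707, 1.0, 1.414]
bfs = {r: bf10(2.5, 40, r) for r in widths}
```

Plotting BF10 against r gives the robustness curve JASP displays; stable conclusions across reasonable widths indicate the result is not an artifact of the prior scale.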