2016
DOI: 10.1177/2150137816660584
Calculating and Reporting Estimates of Effect Size in Counseling Outcome Research

Abstract: The reporting of effect sizes (ESs) and confidence intervals (CIs) for ESs has become recommended practice in the social sciences; however, these values are frequently omitted by authors in manuscripts submitted for publication. Consequently, the meaningfulness and clinical relevance of their findings go unaddressed. As a result, a growing number of scholarly journals now require researchers to incorporate findings of clinical significance in their reporting of results. In this article, we review the most common c…

Cited by 49 publications
(20 citation statements)
References 41 publications
“…A more conservative method of determining treatment effectiveness relies on using the standard error of the mean difference—a parameter estimate of the error associated with the overall mean difference score—as a threshold level for determining treatment effectiveness (Watson, Lenz, Schmit, & Schmit, ). For both treatment approaches, the standard error of the mean difference was 0.21.…”
Section: Results
confidence: 99%
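The excerpt above describes using the standard error of the mean difference as a conservative threshold for treatment effectiveness: a mean difference that does not exceed its own standard error is not taken as evidence of an effect. A minimal sketch of that idea, assuming two independent groups (the function names and the independent-groups formula are illustrative assumptions, not taken from the cited paper):

```python
import math

def se_mean_difference(sd1: float, n1: int, sd2: float, n2: int) -> float:
    """Standard error of the difference between two independent group means,
    sqrt(s1^2/n1 + s2^2/n2)."""
    return math.sqrt(sd1 ** 2 / n1 + sd2 ** 2 / n2)

def exceeds_threshold(mean_diff: float, se_diff: float) -> bool:
    """Conservative effectiveness check: the observed mean difference must
    be larger in magnitude than its own standard error."""
    return abs(mean_diff) > se_diff

# Example: two groups of 25 with SD = 1.0 each
se = se_mean_difference(1.0, 25, 1.0, 25)   # about 0.28
```

Under this rule, a mean difference of 0.1 with a standard error of 0.21 (the value reported in the excerpt) would not clear the threshold, while a difference of 0.5 would.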
“…For example, Tasca and Gallop (2009) reported that an advantage of multilevel modeling is that the assumption of sphericity is not required and data collection does not need to follow a rigid schedule, as is required with other analyses, such as repeated ANOVAs. Finally, in reporting results, the omission of effect sizes renders the results meaningless (Watson et al., 2016). Indeed, Watson and colleagues recognized the importance of framing statistically significant findings with effect sizes and confidence intervals.…”
Section: Methods and Results
confidence: 99%
“…Negative effect sizes represented greater efficacy of IPBH over other primary care treatments, and greater magnitudes represented larger effects in standard deviation units. Unique and mean effect sizes were interpreted using the process described by Watson, Lenz, Schmit, and Schmit (), wherein effect size values were (a) interpreted using data‐driven conventions as small (≥0.30), medium (≥0.50), and large (≥0.67); (b) conceptualized in units of standard deviation; (c) situated within clinical context; and (d) represented through visual depictions. Finally, we computed prediction intervals surrounding the mean effect size to describe the potential distribution of a new study's potential effect size based on our sample of studies.…”
Section: Methods
confidence: 99%
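The excerpt above applies the interpretation conventions attributed to Watson, Lenz, Schmit, and Schmit: effect sizes are classified as small (≥0.30), medium (≥0.50), or large (≥0.67) in standard deviation units. A minimal sketch of that classification step (the label for values below 0.30 is an assumption, since the excerpt does not name that band):

```python
def interpret_effect_size(d: float) -> str:
    """Classify an effect size by magnitude using the data-driven
    conventions cited in the excerpt: small >= 0.30, medium >= 0.50,
    large >= 0.67. The sign is ignored; in the cited meta-analysis a
    negative sign only encoded direction of the comparison."""
    magnitude = abs(d)
    if magnitude >= 0.67:
        return "large"
    if magnitude >= 0.50:
        return "medium"
    if magnitude >= 0.30:
        return "small"
    return "below small"
```

Classification is only step (a) of the four-step process described; steps (b) through (d) — expressing the value in standard deviation units, situating it in clinical context, and depicting it visually — are interpretive rather than computational.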