2019
DOI: 10.1016/j.anr.2019.09.003

An Item Response Theory Analysis of the Korean Version of the CRAFFT Scale for Alcohol Use Among Adolescents in Korea

Abstract: This study aimed to validate the psychometric properties of the CRAFFT (Car, Relax, Alone, Forget, Family/Friends, Trouble) scale by using item response theory (IRT) and to further examine gender differences in item-level responses. Methods: This study used data from the 13th (2017) Korea Youth Risk Behavior Survey, conducted by the Korea Centers for Disease Control and Prevention, and analyzed 8,568 students who reported drinking alcohol in the previous 30 days. IRT assumptions including unidimensionality, local independence …
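The CRAFFT items are scored dichotomously, so the IRT analysis described in the abstract models each item's endorsement probability as a function of the latent trait. As a minimal illustration only (not the authors' code, and assuming a two-parameter logistic model), an item characteristic curve can be computed as follows:

```python
import numpy as np

def icc_2pl(theta, a, b):
    """Two-parameter logistic item characteristic curve.

    theta : latent trait level (e.g., alcohol-problem severity)
    a     : item discrimination
    b     : item difficulty (location on the latent trait)
    Returns P(item endorsed = 1 | theta).
    """
    return 1.0 / (1.0 + np.exp(-a * (theta - b)))

# Illustrative (made-up) parameters for two dichotomous items
theta = np.linspace(-3, 3, 7)
print(icc_2pl(theta, a=1.5, b=0.5))   # steeper, harder item
print(icc_2pl(theta, a=0.8, b=-1.0))  # flatter, easier item
```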

Cited by 6 publications (9 citation statements) | References 29 publications

“…Evidence shows the validity of assessment results is affected by the content tested, quality of test items, qualification of item writers, number of test items, presence of item writing flaws, and psychometric characteristics of items [ 5 , 9 – 11 , 14 , 20 , 30 , 31 ]. Item difficulty, measured by the percentage of examinees that correctly answered the item, runs from 0 to 1; easy items have a higher difficulty index [ 32 ]. Most studies classify item difficulty as too easy (≥ 0.8), moderately easy (0.7–0.8), desirable (0.3–0.7), and difficult (< 0.3) [ 22 , 33 – 37 ].…”
Section: Introduction
confidence: 99%
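The classical difficulty index described in this excerpt is simply the proportion of examinees answering (or endorsing) an item correctly, with the quoted cut-offs used to label items. A minimal sketch of that computation, using a made-up 0/1 response matrix:

```python
import numpy as np

def difficulty_index(responses):
    """Classical item difficulty: proportion of examinees answering correctly.

    responses : (n_examinees, n_items) array of 0/1 scores.
    Returns an array of p-values in [0, 1]; higher = easier item.
    """
    return np.asarray(responses).mean(axis=0)

def classify(p):
    """Bands quoted above: >= 0.8 too easy, 0.7-0.8 moderately easy,
    0.3-0.7 desirable, < 0.3 difficult."""
    if p >= 0.8:
        return "too easy"
    if p >= 0.7:
        return "moderately easy"
    if p >= 0.3:
        return "desirable"
    return "difficult"

# Made-up 0/1 response matrix: 5 examinees x 3 items
X = np.array([[1, 1, 0],
              [1, 0, 0],
              [1, 1, 1],
              [1, 0, 0],
              [1, 1, 0]])
for j, p in enumerate(difficulty_index(X)):
    print(f"item {j + 1}: p = {p:.2f} -> {classify(p)}")
```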
“…Three studies applied IRT/Rasch analysis to assess structural validity. However, one study [27] reported no values for the IRT assumption tests. Another study [29] reported model fit values (infit and outfit mean squares) for the unidimensionality assumption, while the third study [31] reported no IRT-related values.…”
Section: Structural Validity
confidence: 99%
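For the dichotomous Rasch case mentioned in this excerpt, infit and outfit mean squares are conventionally computed from standardized residuals between observed and model-expected responses. A rough sketch under that convention, with simulated person measures and item difficulties (not values from any of the cited studies):

```python
import numpy as np

def rasch_prob(theta, b):
    """Rasch model probability of an endorsed/correct response."""
    return 1.0 / (1.0 + np.exp(-(theta[:, None] - b[None, :])))

def infit_outfit(X, theta, b):
    """Infit and outfit mean-square statistics per item (dichotomous Rasch).

    X     : (n_persons, n_items) 0/1 responses
    theta : (n_persons,) person measures
    b     : (n_items,) item difficulties
    Outfit = mean of squared standardized residuals;
    infit  = information-weighted version. Values near 1 indicate good fit.
    """
    E = rasch_prob(theta, b)          # expected response
    W = E * (1.0 - E)                 # model variance per person-item
    Z2 = (X - E) ** 2 / W             # squared standardized residuals
    outfit = Z2.mean(axis=0)
    infit = ((X - E) ** 2).sum(axis=0) / W.sum(axis=0)
    return infit, outfit

# Made-up data: 6 persons x 3 items
rng = np.random.default_rng(0)
theta = rng.normal(size=6)
b = np.array([-1.0, 0.0, 1.0])
X = (rng.random((6, 3)) < rasch_prob(theta, b)).astype(float)
print(infit_outfit(X, theta, b))
```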
“…In this case, the application of multiple-group CFA is recommended for investigating structural invariance between the cultural groups rather than separately conducting CFA in each population. Another study [27] used IRT-based differential item functioning (DIF) analysis to investigate item invariance by gender.…”
Section: Internal Consistency
confidence: 99%
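The DIF check referred to here compared item functioning by gender within an IRT framework. As a loose illustration only (a logistic-regression DIF screen rather than the IRT-based procedure the study used), a uniform-DIF likelihood-ratio test on one item might look like this, with simulated data:

```python
import numpy as np
from scipy.stats import chi2
from sklearn.linear_model import LogisticRegression

def log_likelihood(model, X, y):
    """Binomial log-likelihood of a fitted logistic regression."""
    p = model.predict_proba(X)[:, 1]
    return np.sum(y * np.log(p) + (1 - y) * np.log(1 - p))

def uniform_dif_test(item, matching, group):
    """Likelihood-ratio screen for uniform DIF on one dichotomous item.

    Reduced model: item ~ matching score (e.g., total score)
    Full model   : item ~ matching score + group membership (e.g., gender)
    Returns the LR chi-square statistic (df = 1) and its p-value.
    C is set very large so the fit is effectively unpenalized.
    """
    X_reduced = matching[:, None]
    X_full = np.column_stack([matching, group])
    reduced = LogisticRegression(C=1e9).fit(X_reduced, item)
    full = LogisticRegression(C=1e9).fit(X_full, item)
    lr = 2 * (log_likelihood(full, X_full, item)
              - log_likelihood(reduced, X_reduced, item))
    return lr, chi2.sf(lr, df=1)

# Simulated example: the item is harder for group 1 at the same trait level
rng = np.random.default_rng(1)
n = 2000
group = rng.integers(0, 2, n)
trait = rng.normal(size=n)
p = 1 / (1 + np.exp(-(trait - 0.8 * group)))
item = (rng.random(n) < p).astype(int)
print(uniform_dif_test(item, trait, group))  # expect a small p-value
```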
“…The responses may, therefore, be influenced by factors other than what the instrument was designed to measure. Given an individual's score on the latent trait, the observed items should be independent of each other (Debelak & Koller, 2020; Song et al., 2019). Independent here means statistically independent.…”
Section: Local Item Independence
confidence: 99%
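One common way to screen for violations of local independence is Yen's Q3: the correlation matrix of item residuals after the latent trait's contribution has been removed. A minimal sketch assuming a 2PL model, with made-up parameters and simulated responses:

```python
import numpy as np

def q3_matrix(X, theta, a, b):
    """Yen's Q3 statistic for local item dependence under a 2PL model.

    X     : (n_persons, n_items) 0/1 responses
    theta : (n_persons,) estimated latent trait scores
    a, b  : (n_items,) discrimination and difficulty parameters
    Residuals are observed minus model-expected responses; Q3 is the
    correlation matrix of those residuals. Large off-diagonal values
    (e.g., |Q3| > 0.2) flag possible local dependence.
    """
    P = 1.0 / (1.0 + np.exp(-a[None, :] * (theta[:, None] - b[None, :])))
    residuals = X - P
    return np.corrcoef(residuals, rowvar=False)

# Made-up example: 500 simulated persons, 4 items
rng = np.random.default_rng(2)
theta = rng.normal(size=500)
a = np.array([1.2, 0.9, 1.5, 1.1])
b = np.array([-0.5, 0.0, 0.5, 1.0])
P = 1.0 / (1.0 + np.exp(-a[None, :] * (theta[:, None] - b[None, :])))
X = (rng.random((500, 4)) < P).astype(float)
print(np.round(q3_matrix(X, theta, a, b), 2))
```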
“…A test information function (TIF) may be used to balance multiple alternate test forms for the same exam. TIF values should be the same across all alternate forms (Song et al., 2019).…”
Section: Item and Test Information Function
confidence: 99%
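Under a 2PL model, item information is a_i^2 P_i(θ)(1 − P_i(θ)) and the test information function is its sum over items, which is the quantity that would be matched across alternate forms. A small sketch with hypothetical item parameters for two forms:

```python
import numpy as np

def test_information(theta, a, b):
    """Test information function for a 2PL model.

    Item information: I_i(theta) = a_i^2 * P_i(theta) * (1 - P_i(theta));
    the TIF is the sum over items. Used here to compare two alternate forms.
    """
    P = 1.0 / (1.0 + np.exp(-a[None, :] * (theta[:, None] - b[None, :])))
    return (a[None, :] ** 2 * P * (1.0 - P)).sum(axis=1)

theta = np.linspace(-3, 3, 7)

# Hypothetical item parameters (a, b) for two alternate forms
form_a = (np.array([1.4, 1.1, 0.9]), np.array([-1.0, 0.0, 1.0]))
form_b = (np.array([1.3, 1.2, 1.0]), np.array([-0.8, 0.1, 0.9]))

print("Form A TIF:", np.round(test_information(theta, *form_a), 2))
print("Form B TIF:", np.round(test_information(theta, *form_b), 2))
```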