1996
DOI: 10.1037/1040-3590.8.4.341
The new rules of measurement.

Cited by 382 publications (309 citation statements); references 19 publications.
“…Price & Dalgleish, 2010), we developed the CICS. Regarding the hypothesis about college students' accuracy in reporting their level of involvement in acts of cyberbullying, we feel that the IRT analysis enabled us to interpret the results considering both person and item aspects accurately (DeMars, 2010; Embretson, 1996). Thus, results revealed from the reliability values that the item scores were good.…”
Section: Discussion
confidence: 86%
“…Therefore, unlike most studies involving issues regarding cyberbullying, we used the IRT approach, which enabled us to calibrate our participants and items on a common scale (DeMars, 2010; Embretson, 1996). This type of assessment allowed us to analyze the interactions between our participants and items, which in turn, helped us interpret the variables we wanted to measure.…”
Section: Discussion
confidence: 99%
“…These parameters, estimated based on the IRT model, convey the strength of each item's relationship to the measured construct and indicate the range along the construct score continuum where an item provides the most reliable responses. Because the item parameters are estimated with respect to a clearly defined scale for the underlying latent variable that is being measured, IRT is said to have a "built-in" linking mechanism (Embretson, 1996; Linn, 1992; Mislevy, 1992). This important feature facilitates comparability of scores that represent a common construct but are derived from different sets of items.…”
Section: IRT and IRT-Based Item Banking for Smoking Assessment
confidence: 99%
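The item parameters described in the statement above can be made concrete with a minimal two-parameter logistic (2PL) sketch. This is an illustrative implementation, not taken from the cited papers; the parameter values are hypothetical. It shows why an item "provides the most reliable responses" in a specific range: Fisher information peaks where the latent trait level equals the item's difficulty.

```python
import math

def p_2pl(theta, a, b):
    """2PL item response function: probability of endorsing an item
    for a person at latent trait level theta, given item
    discrimination a and difficulty b."""
    return 1.0 / (1.0 + math.exp(-a * (theta - b)))

def item_information(theta, a, b):
    """Fisher information of a 2PL item at theta. It is maximal at
    theta == b, which is why each item measures most precisely in a
    particular region of the trait continuum."""
    p = p_2pl(theta, a, b)
    return a * a * p * (1.0 - p)

# Hypothetical item: discrimination a = 1.5, difficulty b = 0.0.
# Information at theta = 0 exceeds information at theta = 1 or -1,
# so this item is most useful near the middle of the scale.
```

Because persons and items sit on the same theta scale, any two forms calibrated to that scale yield comparable scores, which is the "built-in" linking the excerpt refers to.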
“…The fact that the item banks have known characteristics allows developers to evaluate and exclude items that show unacceptable levels of DIF, and enables linking of scores from different forms and tailoring of tests for specific purposes while maintaining a pre-specified degree of measurement precision. This measurement flexibility also extends to a wide array of administration options and platforms-such as computer-based assessment, use of handheld devices such as smartphones and notepads, CAT, and tailored paper and pencil short forms-all of which minimize respondent burden without sacrificing reliability and precision (Embretson, 1996; Hambleton & Swaminathan, 1985; Lord, 1980; Wainer, 2000; Wainer & Mislevy, 2000). This flexibility has the potential to directly impact smoking research, particularly in situations where there is a need to be economical regarding item count and respondent burden.…”
Section: IRT and IRT-Based Item Banking for Smoking Assessment
confidence: 99%
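The CAT idea in the excerpt above — drawing items from a calibrated bank to minimize respondent burden — can be sketched with a maximum-information selection rule. This is a simplified illustration with hypothetical item parameters, not the procedure of any cited study: at each step the unadministered item with the greatest Fisher information at the current trait estimate is chosen.

```python
import math

def p_2pl(theta, a, b):
    """2PL response probability for trait level theta,
    discrimination a, difficulty b."""
    return 1.0 / (1.0 + math.exp(-a * (theta - b)))

def information(theta, a, b):
    """Fisher information of a 2PL item at theta."""
    p = p_2pl(theta, a, b)
    return a * a * p * (1.0 - p)

def next_item(theta_hat, bank, administered):
    """Select the unadministered bank item with maximum information
    at the current trait estimate theta_hat."""
    candidates = [(i, information(theta_hat, a, b))
                  for i, (a, b) in enumerate(bank)
                  if i not in administered]
    return max(candidates, key=lambda t: t[1])[0]

# Hypothetical calibrated bank: (discrimination, difficulty) pairs.
bank = [(1.2, -1.0), (1.5, 0.0), (0.8, 0.5), (1.4, 1.2)]
first = next_item(0.0, bank, administered=set())
```

Selecting items this way keeps tests short because each administered item is, at that moment, the most informative one available, which is how CAT maintains precision while reducing item count.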