Psychometric Testing 2017
DOI: 10.1002/9781119183020.ch16
Testing Across Cultures: Translation, Adaptation and Indigenous Test Development

Cited by 5 publications (6 citation statements)
References 23 publications
“…In line with the WWC model, this study shows the effect of macro-level factors on the student scores in the context of high-stakes writing assessments. The inclusion of tasks with differential construct manifestation and underrepresentation might introduce possible sources of bias into test scores against certain groups of students (Daouk-Öyry & Zeinoun, 2017). Taken together, our results show that writing scores might lead to unfair educational decisions not because of a particular approach to scoring written compositions (automated vs hand-rated) but rather because of the inclusion of other writing components in the test that are only partially associated with the ability to write a cohesive and well-organized text, namely, revising and editing skills.…”
Section: Discussion
confidence: 99%
“…Computer programs can generate writing quality scores with comparable technical adequacy to hand-calculated scores (Keller-Margulis et al, 2021). Yet, the validity of automated scores might vary among students from different linguistic or cultural backgrounds (Daouk-Öyry & Zeinoun, 2017). Given that automated scores are calculated through algorithms trained to predict human ratings, they might incorporate the contribution of sources extraneous to writing quality in the estimation of student performance.…”
Section: Types Of Bias For Automated Writing Scores
confidence: 99%
“…The findings of this study can broaden the discussion of cultural aspects related to the evaluation of the attachment construct in diverse Spanish-speaking populations, and also contribute to the existing instruments available in this context (e.g., Valor-Segura, Expósito, & Moya, 2009). Although the relevance of the construct in different cultures is recognized, it is necessary to take into account that its expression may vary in different cultures and groups (e.g., LGBT), and that some of its dimensions may be underrepresented (Daouk-Öyry & Zeinoun, 2017). Theoretical and empirical development is needed to better understand the cultural and contextual aspects of attachment, as these aspects are scarcely addressed in a systematic way (Keller, 2013; Vicedo, 2017).…”
Section: Discussion
confidence: 99%
“…Then help was sought from an Urdu language expert to refine the Urdu translation. The questions were later back-translated to the English language with the help of an English-language expert (Daouk-Öyry & Zeinoun, 2017). Pilot interviews were conducted with ten participants to determine whether the questions in the interview schedule were well understood.…”
Section: Methods
confidence: 99%