Past research on cultural correlates of biased items in multiple‐choice tests has almost always relied on post‐hoc explanations that have not been very convincing, and it has considered only differential subgroup responses to the correct or preferred answer. To determine the degree to which culture explains the difference between Chinese and American respondents’ answers to a situational judgement test (SJT), 135 options to 25 SJT items were rated on the degree to which they reflected seven dimensions of culture. Ratings of the cultural content of the options were significantly and moderately correlated with Chinese–American differences in Most Likely responses to the options. In addition, a loglinear analysis of differential distractor functioning (DDF) yielded results consistent with the hypothesis that culture was at least partly responsible for subgroup differences in option choices and the resultant findings of DDF. The results suggest that SJTs, and perhaps other non‐academic measures, must be used with caution across cultural groups.
Practitioner points
Consultation with experts and test takers in the cultures in which an instrument will be used should always precede test use. Review with them both the item stems and the options (if any) for items that may have a different meaning across cultural groups.
Measurement invariance of instruments used with different cultural groups should be assessed. When sample sizes do not permit sophisticated techniques such as confirmatory factor analysis or item response theory, examine and seek to understand any simple descriptive statistics that reveal differences across cultural groups.
Examine response differences to distractors in multiple‐choice items as well as differences in responses to correct options.
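The last point can be sketched with a simple option-level check. The snippet below is a minimal, illustrative screen, not the loglinear DDF analysis used in the study (which can also condition on ability or total score): it asks whether two groups distribute their choices differently across one item's options, using an ordinary chi-square test of independence. All counts and group labels are hypothetical.

```python
# Minimal sketch of a differential distractor functioning (DDF) screen.
# Counts are invented for illustration; a full DDF analysis would also
# condition on examinee ability, which this simple check omits.
from scipy.stats import chi2_contingency

# Rows: cultural groups; columns: counts choosing each of an item's options A-D.
counts = [
    [40, 25, 20, 15],  # hypothetical Group 1 (e.g., American sample)
    [20, 45, 20, 15],  # hypothetical Group 2 (e.g., Chinese sample)
]

chi2, p, dof, expected = chi2_contingency(counts)
print(f"chi2 = {chi2:.2f}, dof = {dof}, p = {p:.4f}")
if p < 0.05:
    print("Option-choice distributions differ; inspect distractors for cultural content.")
```

Because the test covers the whole group-by-option table rather than only the keyed answer, a significant result can flag culturally loaded distractors even when correct-answer rates look similar across groups.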