A study of the discourse structure of oral language proficiency interviews focused on (1) one principal discourse variable, topic, for analyzing contingency and goal orientation in dyadic interactions, and (2) contextual factors (interlocutor, theme, task, participant gender). Data came from 30 dyadic oral interviews in English as a Second Language recorded in Brazil and Italy. The interview was part of the First Certificate in English examination of the University of Cambridge (England) Local Examinations Syndicate. The portions of the interview analyzed comprised three tasks: discussion based on photographs; relation of a printed passage to the photographs; and expression of personal preferences about items in a list of activities related to the interview's theme. The study was principally exploratory and descriptive. Results are discussed in terms of the characteristics of native-speaker/non-native-speaker oral interaction. The two parties were found to make very different contributions to the discourse, with the examiner exerting a controlling influence and the examinee taking a more reactive role. Contextual factors found to affect only candidate discourse included individual differences among examiners, especially gender, and task. Contextual influences on the examiner's goal orientation appeared to include gender and interview theme. The major influence on the discourse as a whole was task.
Content considerations are widely regarded as essential in the design of language tests, and evidence of content relevance and coverage provides an important component in the validation of score interpretations. Content analysis can be viewed as the application of a model of test design to a particular measurement instrument, using judgements of trained analysts. Following Bachman (1990), a content analysis of test method characteristics and components of communicative language ability was performed by five raters on six forms of an EFL test from the University of Cambridge Local Examinations Syndicate. To investigate rater agreement, generalizability analysis and a new agreement statistic (the rater agreement proportion, or 'RAP') were used. Results indicate that the overall level of rater agreement was very high, and that raters were more consistent in rating method than ability. To examine interform comparability, method/ability content analysis characteristics (called 'facets') that differed by more than one standard deviation of either form were deemed salient. Results indicated that not all facets yielded substantive information about interform content comparability, although certain test characteristics could be targeted for further revision and development. The relationships between content analysis ratings and two-parameter IRT item parameter estimates (difficulty and discrimination) were also investigated. Neither test method nor ability ratings by themselves yielded consistent predictions of either item discrimination or difficulty across the six forms examined. Fairly high predictions were consistently obtained, however, when method and ability ratings were combined. The implications of these findings, as well as the utility of content analysis in operational test development, are discussed.
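To make the statistical machinery in the abstract above concrete, the sketch below shows (a) the standard two-parameter logistic IRT item response function, in which a is item discrimination and b is item difficulty, and (b) one possible reading of the salience criterion, flagging facets whose mean ratings differ across two forms by more than one standard deviation of either form's ratings. The function names, facet labels, and data are illustrative assumptions; this is not the authors' code, and it does not reproduce the RAP statistic, whose exact definition is given in the paper itself.

```python
import math
from statistics import mean, stdev

def irt_2pl(theta: float, a: float, b: float) -> float:
    """Two-parameter logistic IRT model: probability that an examinee
    of ability theta answers correctly an item with discrimination a
    and difficulty b."""
    return 1.0 / (1.0 + math.exp(-a * (theta - b)))

def salient_facets(form_a: dict[str, list[float]],
                   form_b: dict[str, list[float]]) -> list[str]:
    """Flag facets whose mean ratings on two forms differ by more than
    one standard deviation of either form's ratings -- an illustrative
    reading of the salience criterion described in the abstract."""
    flagged = []
    for facet in form_a:
        diff = abs(mean(form_a[facet]) - mean(form_b[facet]))
        if diff > stdev(form_a[facet]) or diff > stdev(form_b[facet]):
            flagged.append(facet)
    return flagged

# Illustrative usage with hypothetical facet names and made-up ratings
# from five raters per facet on each form.
form_a = {"input_length": [3, 3, 4, 3, 3], "vocabulary": [2, 3, 2, 2, 3]}
form_b = {"input_length": [4, 5, 4, 5, 4], "vocabulary": [2, 2, 3, 2, 2]}
print(salient_facets(form_a, form_b))    # ['input_length']
print(irt_2pl(theta=0.5, a=1.2, b=0.0))  # ~0.65
```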