Over the past three decades, the number of developing countries participating in International Large-Scale Assessments (ILSAs) has increased dramatically, yet the use of ILSA data to inform educational policy in these countries has not been fully realised. In this paper the authors argue that for ILSAs to be useful policy tools, their measurement quality mission must be aligned with their capacity development mission. They review the use of ILSAs as both 'whips' and 'thermometers' and discuss measurement issues related to the range of skills assessed, the assessments' capacity to measure change over time, and the inclusion of other indicators that capture home, classroom and school factors related to student achievement. They then discuss how ILSAs have built assessment capacity in developing countries and what further changes would be needed to improve their utility for policymakers.
This paper focuses on the preliminary reading literacy test scores for New Zealand students in the grade levels where most 9-year-olds (Standard 3) and 14-year-olds (Form 4) were to be found. In addition to some within-country analyses, New Zealand results are compared with the results for the probability samples of students in 32 systems of education which participated in the IEA Study of Reading Literacy. Within-country comparisons by gender and ethnicity reveal large differences in favour of Pakeha students at both levels, and gender differences in favour of girls, particularly at the Standard 3 level. Between-country comparisons of results suggest high levels of reading competence among the New Zealand students in the target age groups.

It is self-evident that the demands of schooling, work and citizenship in modern societies require some degree of literacy. By international standards the reading ability of New Zealand children has been considered to be high. Purves (1979) and Guthrie (1981) found that New Zealand 14- and 18-year-old students scored higher on tests of reading and literature than students in any other country who participated in the IEA studies in reading and literature conducted in 1970. When the factors which could account for these results were considered, Purves (1979) noted: 'All of these factors confirm that these students are members of a highly literate, or print oriented society (even though they watch a lot of television), and that reading is taken for granted, and not thought of as something out of the ordinary' (p. 14).

Some two decades later, at the end of 1990, New Zealand, along with 30 countries (or educational systems), participated in an international study of reading literacy which was once again co-ordinated by the International Association for the Evaluation of Educational Achievement (IEA). Nine- and 14-year-old students in these countries completed reading tests in two booklets and a background questionnaire. In addition, the classroom teachers of the selected classes and the principals of the selected schools also completed questionnaires.

Why Participate?

Questions such as why a country would participate in international comparative studies such as reading literacy are often asked. While many people would not doubt the value of studies which reflect purely national or regional concerns, the value of comparative research is often less readily accepted. The questions most often raised include those related to the ability, and indeed the appropriateness, of making cross-national comparisons which are fair and sensitive to differences in curricula, structures and a country's stage of educational development. Clearly such questions may be asked of any research or statistically based cross-national comparative study. In one of the first official publications of IEA, Foshay et al. (1962) captured some of the main arguments for participating in cross-national studies, difficulties notwithstanding: If custo...
Although international large-scale assessment of education is now a well-established science, non-practitioners and many users often substantially misunderstand how large-scale assessments are conducted, what questions and challenges they are designed to address, and how technologies have evolved to achieve their stated goals. This book focuses on the work of the International Association for the Evaluation of Educational Achievement (IEA), with a particular emphasis on the methodologies and technologies that IEA employs to address issues related to the validity and reliability (quality) of its data. The context in which large-scale assessments operate has changed significantly since the early 1960s, when IEA first developed its program of research. The last 60 years have seen an increase in the number of countries participating, with a concomitant expansion in the cultural, socioeconomic, and linguistic heterogeneity of participants. These quantitative and qualitative changes mean that the methodologies and assessment strategies have to evolve continuously to ensure the quality of data is not compromised. This chapter provides an introductory overview of the chronology and development of IEA's international large-scale assessments.