Portfolio assessment (PA) is an important and increasingly common means of recording and judging language learners’ development and achievement. This chapter begins with a brief history of PA, from its earlier identity as an “alternative assessment” to its significant role in current assessment regimes in both general and language education. This is followed by a review of research on the effectiveness of PA for improving language learning and on students’ and teachers’ views of portfolio use in language classrooms. A conceptual distinction is then made between the portfolio “product” on one hand and the portfolio implementation “process” on the other, with the latter being the determinant of usefulness. Central to this chapter is a discussion of how teachers’ personal teaching beliefs and approaches will set the parameters for that process. For example, a teacher's views about the roles and responsibilities of the teacher vis‐à‐vis students and about the importance of student self‐assessment will crucially shape how portfolios are used. The chapter then considers both macro‐ and micro‐level decisions involved in the implementation process. Macro‐level decisions include whether portfolios will be used for formative or summative purposes or for both, how self‐assessment will be incorporated, and who conducts grading. Micro‐level decisions include the language skills to be assessed via portfolio, the types of materials to be included in the portfolio, the portfolio medium (e.g., paper or online), the grading rubric, and the types of feedback given to the student. In the final section, the author points out how PA can align with current principles and practices of assessment for learning purposes.
Feedback to the test taker is a defining characteristic of diagnostic language testing (Alderson, 2005). This article reports on a study that investigated how useful, and in what ways, students at a Taiwan university perceived the feedback on an online multiple-choice diagnostic English grammar test to be, both overall and by higher- and lower-proficiency students. Stage 1 involved questionnaire data from 68 students who rated each item's feedback for usefulness, and Stage 2 involved interviews with five students as they read the feedback after taking the test. The data from these two stages showed students' overall positive attitude toward the feedback and their preferences for particular feedback characteristics. The study also found that although higher-proficiency test takers found the feedback more useful than lower-proficiency test takers did, views about the characteristics of good feedback were similar regardless of level. Recommendations for improving diagnostic language test construction and validation are discussed based upon the findings.
BACKGROUND
Diagnostic language testing, which aims to identify test takers' linguistic strengths and weaknesses so as to guide their learning, has received increasing attention after long neglect, as researchers have sought ways to make language assessments more oriented towards learning.