Aseptic Non Touch Technique (ANTT) version 2 is an updated theoretical and practice framework that expands on the foundations set by ANTT v1, which was first published almost a decade ago and has been adopted widely. ANTT v2 rationalizes an alternative, contemporary approach to aseptic practice in place of the historically hierarchical paradigm of sterile, aseptic and clean techniques. To reflect current practice and reduce unnecessary complication, v2 introduces the theory, and consolidates the practice, of using micro aseptic fields to protect key-parts. Micro aseptic fields are advocated as optimal and should be used whenever practically possible. Version 2 is intended as a principle-based approach to all aseptic practice, however simple or complicated the clinical procedure may be. In other words, the principles of ANTT are as applicable to the surgeon as they are to the nurse or phlebotomist.
All educational testing is intended to have consequences, which are assumed to be beneficial, but tests may also have unintended, negative consequences (Messick, 1989). The issue is particularly important in the case of large-scale standardised tests, such as Australia's National Assessment Program - Literacy and Numeracy (NAPLAN), whose intended benefits are increased accountability and improved educational outcomes. The purpose of NAPLAN is comparable to that of other state and national 'core skills' testing programs which evaluate cross-sections of populations in order to compare results between population sub-groupings. Such comparisons underpin 'accountability' in the era of population-level testing. This study investigates the impact of NAPLAN testing on one population grouping that is prominent in the NAPLAN results comparisons and public reporting: children in remote Indigenous communities. A series of interviews with principals and teachers documents informants' first-hand experiences of the use and effects of NAPLAN in schools. In the views of most participants, the language and content of the test instruments, the nature of the test engagement and the test washback have negative impacts on students and staff, with little benefit in terms of the usefulness of the test data. The primary issue is that meaningful participation in the tests depends critically on proficiency in Standard Australian English (SAE) as a first language. This study contributes to the broader discussion of how reform-targeted standardised testing for national populations affects subgroups who are not treated equitably by the test instrument or by reporting for accountability purposes. It highlights a conflict between consequential validity and the notion of accountability which drives reform-targeted testing.
Objects that sit between intersecting social worlds, such as Language for Specific Purposes (LSP) tests, are boundary objects: dynamic, historically derived mechanisms which maintain coherence between worlds (Star & Griesemer, 1989). They emerge initially from sociopolitical mandates, such as the need to ensure a safe and efficient workforce or to control immigration, and they develop into standards (i.e. stabilized classifying mechanisms). In this article, we explore the concept of the LSP test as boundary object through a qualitative case study of the Occupational English Test (OET), a test which assesses the English proficiency of healthcare professionals who wish to practise in English-speaking healthcare contexts. Stakeholders with different types of vested interest in the test were interviewed (practising doctors and nurses who have taken the test, management staff, professional board representatives) to capture multiple perspectives on both the test-taking experience and the relevance of the test to the workplace. The themes arising from the accumulated stakeholder perceptions depict a 'boundary object' that encompasses a work-readiness level of language proficiency on the one hand and aspects of communication skills for patient-centred care on the other. We argue that the boundary object metaphor is useful in that it represents a negotiation over the adequacy and effects of a test standard for all vested social worlds. Moreover, the test should benefit the worlds it interconnects, not just in terms of the learning opportunities it offers candidates, but also in terms of the impact such learning carries into key social sites, such as healthcare workplaces.
This study, which forms part of the TOEFL iBT® test validity argument for the writing section, has two main aims: to verify whether the discourse produced in response to the independent and integrated writing tasks differs and to identify features of written discourse that are typical of different scoring levels. The integrated writing task was added to the TOEFL iBT test to "improve the measurement of test-takers' writing abilities, create positive washback on teaching and learning as well as require test-takers to write in ways that are more authentic to academic study" (Cumming et al., 2006, p. 1). However, no research since the study by Cumming et al. (2006) on the prototype tasks has investigated whether the discourse produced in response to this new integrated reading/listening-to-write task is in fact different from that produced in response to the independent task. Finding such evidence in the discourse is important, as it adds to the validity argument of the TOEFL iBT writing test and is useful for verifying the rating scale descriptors used in operational rating. This study applied discourse-analytic measures to the writing of 480 test takers who each responded to the two writing tasks. The discourse analysis focused on measures of accuracy, fluency, complexity, coherence, cohesion, content, orientation to source evidence, and metadiscourse. A multivariate analysis of variance (MANOVA) using a two-by-five (task type by proficiency level) factorial design with random permutations showed that the discourse produced by the test takers varies significantly on most variables under investigation. The discourse produced at different score levels also generally differed significantly. The findings are discussed in terms of the TOEFL iBT test validity argument. Implications for rating scale validation and automated scoring are discussed.
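As a rough illustration of the kind of analysis described in this abstract, the sketch below fits a conventional two-by-five (task type by score level) factorial MANOVA in Python with statsmodels. The file name, column names, and the subset of discourse measures are hypothetical stand-ins, and the random-permutation component reported in the study is not reproduced here; this is a minimal sketch under those assumptions, not the authors' actual analysis code.

```python
# Minimal sketch: factorial MANOVA on discourse measures (hypothetical data).
import pandas as pd
from statsmodels.multivariate.manova import MANOVA

# Hypothetical dataset: one row per essay, with the task type (independent vs.
# integrated), the score level (1-5), and a few discourse measures as columns.
df = pd.read_csv("discourse_measures.csv")

# Model all dependent measures jointly against task type, score level,
# and their interaction (a two-by-five factorial design).
fit = MANOVA.from_formula(
    "accuracy + fluency + complexity + cohesion "
    "~ C(task_type) * C(score_level)",
    data=df,
)

# Prints multivariate test statistics (Wilks' lambda, Pillai's trace, etc.)
# for each term in the model.
print(fit.mv_test())
```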