Purpose – The purpose of this paper is to address the issue of failing to psychometrically test questionnaire instruments when measuring university students' attitudes towards plagiarism. These issues are highlighted by a psychometric evaluation of a commonly used (but previously untested) plagiarism attitudinal scale.
Design/methodology/approach – The importance of psychometric testing is shown through an analysis of a commonly used scale using modern techniques (e.g. Rasch analysis) on 131 undergraduate education students at an Australian university.
Findings – Psychometric analysis revealed the scale to be unreliable in its present form. However, when reduced to an eight-item subscale it became marginally reliable.
Research limitations/implications – The main implication of this paper is that questionnaire instruments cannot be assumed to function as intended without thorough psychometric testing.
Practical implications – The paper offers valuable insight into the psychometric properties of a previously untested but commonly used plagiarism attitudinal scale.
Originality/value – The paper offers a straightforward and easy-to-understand introduction for researchers in higher education who use questionnaires/surveys in their research but lack an understanding of why psychometric testing is so critical. While similar papers advocating psychometric approaches, such as Rasch analysis, have been written in other fields, this has not been the case in higher education research (or mainstream educational research, for that matter).
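The Rasch analysis mentioned above rests on a simple probabilistic model: the chance that a person endorses (or answers correctly) an item depends only on the difference between the person's ability and the item's difficulty, both expressed in logits. A minimal sketch of the dichotomous Rasch model follows; the function and variable names (`rasch_prob`, `theta`, `b`) are illustrative, not taken from the paper.

```python
import math

def rasch_prob(theta: float, b: float) -> float:
    """Probability of endorsement under the dichotomous Rasch model,
    given person ability `theta` and item difficulty `b`, both on the
    same logit scale: P = exp(theta - b) / (1 + exp(theta - b))."""
    return 1.0 / (1.0 + math.exp(-(theta - b)))

# When ability equals difficulty, the endorsement probability is 0.5;
# probability rises as ability exceeds difficulty.
print(rasch_prob(0.0, 0.0))   # → 0.5
print(rasch_prob(2.0, 0.0) > rasch_prob(0.5, 0.0))   # → True
```

In a Rasch evaluation such as the one reported here, item difficulties and person abilities are estimated jointly from the response matrix, and misfit between observed responses and these model probabilities is what flags unreliable items.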
Students from diverse backgrounds report that time pressures, financial responsibilities, caring commitments, and geographic location are barriers to their uptake of work integrated learning (WIL). Through interviews with 32 students and 15 educators who participated in online WIL, we investigated whether online WIL might be one way of overcoming these barriers. Benefits of online WIL for students included employability skills, meaningful work, affordability, and flexibility when coping with health issues. Challenges for students included missing out on workplace interactions, digital access, and finding a private space in which to work. Students from diverse backgrounds were viewed by educators as bringing positive contributions to the workplace. Educators found challenges in giving feedback and not being able to replicate some aspects of in-person workplaces. We conclude with recommendations on how online WIL might be enhanced to better meet the needs of students facing equity issues.
Implications for practice and policy:
- All participants in online WIL should be encouraged to intentionally view diversity as a strength.
- Educators need to create explicit opportunities for formal and informal interaction and network building during online WIL.
- Educators should provide engaging and purposeful work during online WIL.
- Students may need additional financial or material support to undertake online WIL, for example to enable digital access and access to a private workspace.
Applicants to universities present profiles of performance in a variety of relevant content areas as evidence for selection. Even though profiles of different applicants may involve different content areas, the applicants may be competing for places in the same university and even in the same program of study. In that case, and if there are more eligible applicants than there are places, universities must reconcile these different profiles in order to make comparisons among them. When there is a small number of applicants, these comparisons may be carried out qualitatively; when the number of applicants runs into the order of thousands and there is a short time in which to make the offers, some quantitative analysis is required. This quantitative analysis usually involves aggregating the components of each profile in order to form a single score from which comparisons among applicants can be made readily. Legitimate concerns can be raised regarding forming simple aggregates from diverse components of profiles, but despite these concerns, the practical problem of making relatively rapid decisions means that these concerns are generally not addressed. The premise of this article is that profiles will be more or less consistent among their components and that, although some profiles may not be, a great number of others may be summarized adequately by a single score. It is shown that by applying the principles of latent trait test theory at the level of tests, it is possible to rank order a set of profiles in terms of the adequacy with which they are summarized by a single score, and that as a result, only a subset of the original profiles may require a qualitative analysis. (Requests for reprints should be sent to Jim Tognolini, Educational Testing Centre, University of New South Wales, Kensington, New South Wales 2033, Australia.)
The application of the procedure is illustrated with a random sample of 577 profiles from a population of 12,314, which were presented for selection into universities in Western Australia in 1986.
Increasing school populations in the postcompulsory years of schooling in Australia, together with a restricted number of places in tertiary institutions, has heightened awareness about the processes involved in the selection and rejection of applicants. In this article, we restrict ourselves to tertiary institutions that are universities. Applicants to universities present profiles of performance in a variety of relevant content areas as evidence for selection. Even though profiles of different applicants may involve different content areas, the applicants may be competing for places in the same institution and even in the same program of study. Inevitably, when there are more applicants than there are positions, these profiles need to be compared. If, as in the applications for some positions in employment, there are relatively few applicants, then a qualitative analysis and comparisons among the profile...
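The core idea above — summarize each profile by a single aggregate, but flag profiles whose components disagree too much for qualitative review — can be sketched very simply. The sketch below uses the within-profile standard deviation of standardized component scores as a stand-in consistency measure; the actual paper derives its fit criterion from latent trait test theory, and the names (`flag_inconsistent`, `threshold`) and the threshold value are assumptions for illustration only.

```python
from statistics import mean, stdev

def flag_inconsistent(profiles, threshold=1.0):
    """For each applicant's profile (a list of standardized component
    scores), compute a single aggregate (the mean) and flag profiles
    whose within-profile spread exceeds `threshold`, marking them as
    needing a qualitative review rather than a purely numeric ranking."""
    summaries = []
    for scores in profiles:
        spread = stdev(scores)
        summaries.append({
            "aggregate": mean(scores),
            "spread": spread,
            "needs_review": spread > threshold,
        })
    return summaries

# A consistent profile and a discrepant one (invented scores):
profiles = [[0.2, 0.3, 0.1], [2.0, -1.5, 0.4]]
for s in flag_inconsistent(profiles):
    print(s)
```

Under this scheme, the bulk of applicants with internally consistent profiles can be rank ordered by the aggregate alone, and only the flagged subset needs individual attention — which is the practical payoff the abstract describes.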
This study aims to assess the validity of the Online Multiliteracy Assessment for students in Years 5 and 6. The Online Multiliteracy Assessment measures students' abilities in making and creating meaning, using a variety of different modes of communication, such as text, audio and video. The study involved selecting two groups of students: the first group (n=19) was used in two pilot studies of the items and the second (n=299) was used in a field trial validating the functioning of the items and assessing the quality of the scale. The results indicated that the Online Multiliteracy Assessment has acceptable test-retest reliability; however, the fit to the Rasch model was less than ideal. Further investigation identified two important areas for improvement. First, the items assessing the higher order skills of synthesising, communicating and creating need to be more cognitively demanding. Second, some items need to be modified in order to improve their functionality.