The Information Literacy Test (ILT) was developed to meet the need for a standardized instrument that measures student proficiency with respect to the ACRL Information Literacy Competency Standards for Higher Education. The Web-based, multiple-choice ILT measures both lower- and higher-order skills. Evidence is presented that ILT scores provide reliable and valid measures of information literacy. In addition, a rigorous standard-setting method was used to identify score values corresponding to various absolute levels of proficiency. The ILT can be used to help institutions measure student information literacy outcomes and determine the effectiveness of instruction programs.

Information literacy is a set of competencies that provides a foundation for academic coursework, effective job performance, active citizenship, and lifelong learning. The ALA Presidential Committee defined information literacy as the ability to "recognize when information is needed" and then "locate, evaluate and use effectively the needed information." [1] The sheer abundance of information available in the world today can be overwhelming, and not all of it is reliable. Individuals need to become proficient in the set of skills known as information literacy to be able to conduct an efficient search for information, think critically about the value of a particular piece of information, select sources that are high in quality, and then use the information to accomplish a purpose. This set of skills is important to general education, as well as to virtually every major offered in higher education. Information literacy competencies appropriate for higher education have been defined by ACRL in the form of five standards and twenty-two performance indicators. [2] Instruction programs at college and university libraries provide course-related instruction, tutorials, and other interventions to support student development of information literacy skills. Many programs encourage faculty/librarian collaboration with the goal of helping students develop these skills. In a growing number of institutions, information literacy is formally integrated into the curricula of general education and the majors. [3]
This inquiry had two components: the first was substantive and focused on the comparability of paper-based and computer-based test forms; the second was a within-study comparison in which a quasi-experimental method, propensity score matching, was compared with a credible benchmark method, a within-subjects design. The tests used in the comparison of online and paper-based tests were End-of-Course tests in Algebra and English in a statewide high school testing program; students were tested in Grades 8 and 9. In general, the substantive studies suggested that the online and paper tests measured the same underlying constructs with the same level of reliability. The within-study portion of the investigation indicated that propensity score matching yielded results virtually identical to those of the more conventional within-subjects experimental design. Both the methodological and substantive aspects of this investigation yielded outcomes that should be of interest to investigators in both areas.
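To make the matching method the abstract refers to concrete, the sketch below shows propensity score matching in Python. It is a minimal illustration under stated assumptions: the synthetic dataset, the column names (prior_score, grade, online, outcome), and the 1:1 nearest-neighbor matching scheme are all hypothetical, not the study's actual design or data.

```python
# Minimal propensity score matching sketch (illustrative, not the study's code).
import numpy as np
import pandas as pd
from sklearn.linear_model import LogisticRegression
from sklearn.neighbors import NearestNeighbors

# Hypothetical frame: one row per student, covariates plus test mode and score.
rng = np.random.default_rng(0)
n = 1000
df = pd.DataFrame({
    "prior_score": rng.normal(500, 100, n),
    "grade": rng.choice([8, 9], n),
    "online": rng.integers(0, 2, n),     # 1 = online form, 0 = paper form
    "outcome": rng.normal(500, 100, n),  # end-of-course test score
})

# 1. Estimate each student's propensity to take the online form.
X = df[["prior_score", "grade"]]
df["pscore"] = LogisticRegression().fit(X, df["online"]).predict_proba(X)[:, 1]

# 2. Match each online test-taker to the paper test-taker with the
#    nearest propensity score (1:1 nearest-neighbor matching).
treated = df[df["online"] == 1]
control = df[df["online"] == 0]
nn = NearestNeighbors(n_neighbors=1).fit(control[["pscore"]])
_, idx = nn.kneighbors(treated[["pscore"]])
matched_control = control.iloc[idx.ravel()]

# 3. Compare mean outcomes across the matched groups.
effect = treated["outcome"].mean() - matched_control["outcome"].mean()
print(f"Estimated mode effect on scores: {effect:.2f}")
```

The design choice worth noting: matching on the estimated propensity score balances observed covariates across non-equivalent groups, which is why agreement between this quasi-experimental estimate and the within-subjects benchmark is the interesting result.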
For some students, standardized tests serve as a conduit for disclosing sensitive issues of harm or distress that might otherwise go unreported. By detecting this writing, known as crisis papers, testing programs have a unique opportunity to help mitigate the risk of harm to these students. In the context of online tests and automated scoring, machine learning is needed to detect such writing automatically. For a detection system to be accurate, humans must first consistently label the data used to train the model. This paper argues that existing guidelines are not sufficient for this task and proposes a three-level rubric to guide the collection of training data. In showcasing the fundamental machine learning procedures for creating an automatic text classification system, the following evidence emerges in support of the operational use of this rubric. First, hand-scorers largely agree with one another in assigning labels to text according to the rubric. Additionally, when these labeled data are used to train a baseline classifier, the model exhibits promising performance. Recommendations are made for improving the hand-scoring training process, with the ultimate goal of quickly and accurately assisting students in crisis.
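The pipeline the abstract outlines can be sketched briefly in Python. This is a minimal illustration under assumptions: the tiny synthetic texts, the three-level labels, the TF-IDF plus logistic regression baseline, and the rater arrays are all hypothetical stand-ins, not the paper's actual data, rubric scores, or model.

```python
# Illustrative baseline: rubric-labeled texts -> text classifier,
# with a Cohen's kappa check of hand-scorer agreement beforehand.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import cohen_kappa_score
from sklearn.pipeline import make_pipeline

# Hypothetical three-level rubric labels (0 = no concern,
# 1 = possible concern, 2 = clear disclosure of harm/distress).
texts = [
    "I enjoy reading books after school.",
    "Sometimes I feel very alone and nobody notices.",
    "I do not want to be here anymore and I am scared.",
    "My favorite subject is math because of puzzles.",
    "Things at home have been hard and I feel unsafe.",
    "We went to the park last weekend with my family.",
]
labels = [0, 1, 2, 0, 2, 0]

# Check that two hand-scorers label consistently before trusting the data.
rater_a = [0, 1, 2, 0, 2, 0]
rater_b = [0, 1, 2, 0, 1, 0]
print("hand-scorer agreement (kappa):", cohen_kappa_score(rater_a, rater_b))

# Train a simple baseline classifier on the agreed-upon labels.
model = make_pipeline(
    TfidfVectorizer(ngram_range=(1, 2)),
    LogisticRegression(max_iter=1000),
)
model.fit(texts, labels)
print(model.predict(["I feel hopeless and unsafe at home."]))
```

In practice a baseline like this would be trained on far more labeled papers and evaluated on held-out data; the point of the sketch is only the ordering of steps: consistent human labels first, then model training.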