Limited Print and Electronic Distribution Rights

This document and trademark(s) contained herein are protected by law. This representation of RAND intellectual property is provided for noncommercial use only. Unauthorized posting of this publication online is prohibited. Permission is given to duplicate this document for personal use only, as long as it is unaltered and complete. Permission is required from RAND to reproduce, or reuse in another form, any of its research documents for commercial use. For information on reprint and linking permissions, please visit www.rand.org/pubs/permissions.

The RAND Corporation is a research organization that develops solutions to public policy challenges to help make communities throughout the world safer and more secure, healthier and more prosperous. RAND is nonprofit, nonpartisan, and committed to the public interest. RAND's publications do not necessarily reflect the opinions of its research clients and sponsors.
Disasters are typically unforeseen, so most social and behavioral studies of disasters are reactive. Occasionally, predisaster data are available, for example, when a disaster strikes while a study is already in progress or when data collected for other purposes already exist, but planned pre-post designs are all but nonexistent. This gap fundamentally limits our ability to quantify disasters' human toll. Anticipating, responding to, and managing public reactions require a means of tracking and understanding those reactions using rigorous scientific methods. Often, self-reports from the public are the best or only source of information on constructs such as perceived risk, behavioral intentions, and social learning. Significant advancement in disaster research, to best inform practice and policy, requires well-designed surveys with large probability-based samples and longitudinal assessment of individuals across the life cycle of a disaster and across multiple disasters.
Adoption of new instructional standards in science demands high-quality information about classroom practice. Teacher portfolios can be used to assess instructional practice and to support teacher self-reflection anchored in authentic evidence from classrooms. This study investigated a new type of electronic portfolio tool that allows efficient capture of classroom artifacts in multimedia formats using mobile devices. We assess the psychometric properties of measures of instructional quality in middle school science classrooms derived from the contents of portfolios collected using this novel tool, with instruction operationalized through dimensions aligned to the Next Generation Science Standards. Results reflect low rater error and adequate reliability for several dimensions, a dominant underlying factor, and significant relations to some relevant concurrent indicators. Although portfolio ratings were not related to student standardized test scores or course grades, they were related to students' science self-efficacy and enjoyment of science. We examine factors influencing measurement error and consider the broader implications of the results for the validity of portfolio score interpretations and for the feasibility and potential value of this type of tool for summative and formative uses in the context of large-scale instructional improvement efforts.