This project consisted of a meta-analysis of U.S. research published from 1980 to 2004 on the effect of specific science teaching strategies on student achievement. The six phases of the project included study acquisition, study coding, determination of intercoder objectivity, establishing criteria for inclusion of studies, computation of effect sizes for statistical analysis, and conducting the analyses. Studies were required to have been carried out in the United States, to have been experimental or quasi-experimental, and to have included effect size or the statistics necessary to calculate effect size. Sixty-one studies met the criteria for inclusion in the meta-analysis. The following eight categories of teaching strategies were revealed during analysis of the studies (effect sizes in parentheses): Questioning Strategies (0.74); Manipulation Strategies (0.57); Enhanced Material Strategies (0.29); Assessment Strategies (0.51); Inquiry Strategies (0.65); Enhanced Context Strategies (1.48); Instructional Technology (IT) Strategies (0.48); and Collaborative Learning Strategies (0.95). All these effect sizes were judged to be significant. Regression analysis revealed that internal validity was influenced by Publication Type, Type of Study, and Test Type. External validity was not influenced by Publication Year, Grade Level, Test Content, or Treatment Categories. The major implication of this research is that we have generated empirical evidence supporting the effectiveness of alternative teaching strategies in science.
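The abstract does not state which effect-size formula was used; in meta-analyses of this kind the pooled quantity is typically a standardized mean difference. A minimal sketch, assuming Cohen's d with a pooled standard deviation and entirely hypothetical study values:

```python
# A minimal sketch (not the authors' code) of a standardized mean difference:
# Cohen's d = (treatment mean - control mean) / pooled standard deviation.
# All variable names and sample values below are illustrative assumptions.
import math

def cohens_d(mean_t, mean_c, sd_t, sd_c, n_t, n_c):
    """Standardized mean difference between treatment and control groups."""
    pooled_sd = math.sqrt(((n_t - 1) * sd_t**2 + (n_c - 1) * sd_c**2) / (n_t + n_c - 2))
    return (mean_t - mean_c) / pooled_sd

# Example: a hypothetical study comparing an inquiry-strategy class to a control class.
d = cohens_d(mean_t=78.0, mean_c=72.0, sd_t=10.0, sd_c=11.0, n_t=30, n_c=32)
print(f"effect size d = {d:.2f}")
```

For studies that report only an independent-samples t statistic and group sizes, the standard conversion d = t * sqrt(1/n_t + 1/n_c) yields the same quantity.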
The purpose of this study is to apply the technology acceptance model to examine employees' attitudes toward and acceptance of electronic learning (e-learning) systems in organisations. This study examines four factors (organisational support, computer self-efficacy, prior experience and task equivocality) that are believed to influence employees' perceived usefulness, perceived ease of use, attitudes and intention to use e-learning systems. Participants were selected from Taiwanese companies that have already implemented e-learning systems. Three hundred and thirty-two valid questionnaires were collected, and structural equation modelling was conducted to test the research hypotheses. The findings provide practical implications for organisational trainers, educators and e-learning system developers.
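The abstract names the hypothesized constructs but not the model specification. A minimal sketch, assuming the open-source semopy package and hypothetical variable and file names, of how such TAM-style paths could be expressed and fit; the path structure shown is an assumption, not the authors' specification:

```python
# Illustrative structural equation model for TAM-style hypotheses (assumed paths).
# Column names in survey_responses.csv are hypothetical.
import pandas as pd
import semopy

model_desc = """
perceived_usefulness ~ org_support + self_efficacy + prior_experience + task_equivocality + perceived_ease_of_use
perceived_ease_of_use ~ org_support + self_efficacy + prior_experience
attitude ~ perceived_usefulness + perceived_ease_of_use
intention_to_use ~ attitude + perceived_usefulness
"""

data = pd.read_csv("survey_responses.csv")  # hypothetical questionnaire scores, one row per respondent
model = semopy.Model(model_desc)
model.fit(data)           # estimates the path coefficients
print(model.inspect())    # parameter estimates and standard errors
```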
Background: Large-scale survey assessments have been used for decades to monitor what students know and can do. Such assessments aim at providing group-level scores for various populations, with little or no consequence to individual students for their test performance. Students' test-taking behaviors in survey assessments, particularly the level of test-taking effort, and their effects on performance have been a long-standing question. This paper presents a procedure to examine test-taking behaviors using response time collected from a National Assessment of Educational Progress (NAEP) computer-based study, referred to as MCBS. Methods: A five-step procedure was proposed to identify rapid-guessing behavior in a more systematic manner. It involves a non-model-based approach that classifies student-item pairs as reflecting either solution behavior or rapid-guessing behavior. Three validity checks were incorporated in the validation step to ensure the reasonableness of the time boundaries before further investigation. Results of behavior classification were summarized by three measures to investigate whether and how students' test-taking behaviors related to student characteristics, item characteristics, or both. Results: In the MCBS, the validity checks offered compelling evidence that the recommended threshold-identification method was effective in separating rapid-guessing behavior from solution behavior. A very low percentage of rapid-guessing behavior was identified, as compared to existing results for different assessments. For this dataset, rapid-guessing behavior had minimal impact on parameter estimation in the IRT modeling. However, the students clearly exhibited different behaviors when they received items that did not match their performance level. We also found disagreement between students' response-time effort and self-reports, but based on the observed data, it is unclear whether the disagreement was related to how the students interpreted the background questions. Conclusions: The paper provides a way to address the issue of identifying rapid-guessing behavior, and sheds light on the extent of students' engagement in NAEP and its impact, without relying on students' self-evaluation or additional costs in test design. It reveals useful information about test-taking behaviors in a NAEP assessment setting that has not been available in the literature. The procedure is applicable to future standard NAEP assessments, as well as other tests, when timing data are available.
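The five-step procedure and its validity checks are detailed in the paper itself; the sketch below only illustrates the basic classification idea it builds on, labeling each student-item pair as rapid-guessing or solution behavior by comparing response time against a per-item time threshold. The column names, sample data, and fixed 5-second thresholds are hypothetical assumptions, not values from the MCBS.

```python
# A minimal sketch, not the NAEP/MCBS procedure: classify each student-item pair
# as rapid-guessing or solution behavior via a per-item response-time threshold.
import pandas as pd

def classify_behavior(responses: pd.DataFrame, thresholds: dict) -> pd.DataFrame:
    """Label each student-item pair given per-item response-time thresholds (seconds)."""
    out = responses.copy()
    out["threshold"] = out["item_id"].map(thresholds)
    out["behavior"] = (out["response_time"] < out["threshold"]).map(
        {True: "rapid_guessing", False: "solution"}
    )
    return out

# Hypothetical data: one row per student-item pair, response time in seconds.
responses = pd.DataFrame({
    "student_id": [1, 1, 2, 2],
    "item_id": ["A", "B", "A", "B"],
    "response_time": [2.1, 48.0, 35.5, 3.0],
})
thresholds = {"A": 5.0, "B": 5.0}  # e.g., chosen from a gap in each item's time distribution

labeled = classify_behavior(responses, thresholds)
# One possible summary measure: proportion of rapid-guessing responses per student.
print(labeled.groupby("student_id")["behavior"].apply(lambda s: (s == "rapid_guessing").mean()))
```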