In the present study, the comparability of scores from student evaluation of teaching forms was investigated. This is an important issue because scores given by students are used in decision making in higher education institutions. Three course-related variables (grade level, course type, and course credit) were used to define student subgroups. Multi-group confirmatory factor analysis was then used to assess the invariance of the factorial structure, factor loadings, and factor means across groups. Although a common factorial structure held across groups, fully invariant factor loadings were observed only across instructors teaching different course types; for the other groupings, only partial invariance of factor loadings was obtained. The analyses also revealed that none of the subgroups had invariant factor means, indicating possible bias. The results suggest that comparisons of instructors based on student ratings may not be as valid as is commonly assumed.
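As a point of reference, the standard multi-group CFA formulation underlying such invariance tests (a sketch in conventional notation, not necessarily the exact specification estimated in the study) can be written as

x_{ig} = \tau_g + \Lambda_g \eta_{ig} + \varepsilon_{ig}, \qquad \eta_{ig} \sim N(\kappa_g, \Phi_g),

where g indexes the subgroups. Configural invariance requires only that the same pattern of free and fixed loadings in \Lambda_g hold in every group; metric (loading) invariance adds the constraint \Lambda_1 = \Lambda_2 = \cdots = \Lambda_G; and comparing the latent means \kappa_g is defensible only after the loadings (and intercepts) are constrained equal, which is why non-invariant factor means signal a possible bias in comparisons across groups.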
The invariance of student ratings of instruction was studied across high- and low-achieving classrooms. Achievement levels were determined by two criteria: self-reported expected grades and end-of-semester grades. The data included 625 classrooms. The equality of (i) the factorial structure, (ii) factor loadings, (iii) item intercepts, and (iv) error variances of the 7-item rating scale was examined across these groups. With respect to self-reported expected grades, high- and low-achieving classes produced invariant scale characteristics at every level except strict invariance. With respect to end-of-semester grades, however, full equality of item intercepts and error variances was not achieved. Comparing rating results across classrooms and courses without regard to students' achievement levels may therefore be misleading, especially for high-stakes decisions, since the origin of the scale is not the same across high- and low-achieving groups.
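For concreteness, the four levels examined here correspond to the usual nested sequence of constraints (again sketched in standard notation, not necessarily the exact parameterization used in the study):

\text{configural: same pattern in } \Lambda_g; \quad \text{metric: } \Lambda_g = \Lambda; \quad \text{scalar: } \Lambda_g = \Lambda,\ \tau_g = \tau; \quad \text{strict: } \Lambda_g = \Lambda,\ \tau_g = \tau,\ \Theta_g = \Theta.

Reaching scalar but not strict invariance, as with the expected-grade grouping, still allows scores to be compared on a common origin; failing to equate the intercepts \tau_g, as with the end-of-semester grouping, means the origin of the scale differs between high- and low-achieving classes.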
The present study investigated differences between disadvantaged and resilient students in terms of sense of belonging, as measured in PISA 2012. To this end, a segmentation method was employed to define student segments that differed in their proportions of resilient students. Results indicated a relationship between academic resiliency and sense of belonging. While the relationships between resiliency and some of the predictors appeared to vary, other predictors showed direct relationships with academic resiliency.
Summary
This paper presents a computer program developed by the author. The software conducts post-hoc simulations for computerized adaptive testing based on examinees' real responses to paper-and-pencil tests, under different parameters that can be defined by the user. The paper first gives brief information about post-hoc simulations, then describes the working principle of the software and presents a sample simulation together with the required input files. Finally, the output files are described.
Özet (Abstract)
This study introduces a computer program developed by the author. For the computerized adaptive testing approach, the software runs post-hoc simulations using examinees' responses to paper-and-pencil tests, under different parameters that can be defined by the user. The study first gives brief information about post-hoc simulations, then demonstrates the working principle of the software and a sample simulation together with the required files. Finally, the output files are introduced.
Post-hoc Simulations in Computerized Adaptive Testing
When developing computerized adaptive test (CAT) versions of paper-and-pencil tests, it is essential to conduct preliminary analyses to see how much CAT reduces the number of items administered and how precise the resulting ability estimates are (i.e., the size of their standard errors). This is done through post-hoc simulations based on examinees' previous response patterns from the paper-and-pencil format of a test (Kalender, 2011a). Post-hoc simulations provide a hypothetical testing environment that simulates the test as if examinees were given a CAT-based version. Using real examinee responses provides a better description of examinees' psychometric characteristics (Wang, Bo-Pan, & Harris, 1999). Results of post-hoc simulations yield information about optimum CAT strategies (such as ability estimation methods, test termination rules, and item exposure rates) that can be implemented in real CAT testing conditions (Kalender, 2011b). A post-hoc simulation proceeds as follows: (i) when the simulation starts for an examinee, an item is selected from the item bank and the examinee's response to that item is looked up in his or her previous response set from the paper-and-pencil version of the test; (ii) based on that response, an ability estimate is computed for the examinee and the next item is selected according to that estimate; the cycle repeats until a termination rule is satisfied.
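To make the loop concrete, the following is a minimal, illustrative sketch of a post-hoc CAT run, not the author's software: it assumes a one-parameter (Rasch) model, maximum-information item selection, EAP ability estimation on a quadrature grid, and a fixed-length stopping rule; the item difficulties and the "paper-and-pencil" response vector are simulated stand-ins for real data.

# Minimal post-hoc CAT simulation sketch (illustrative assumptions, not the author's software)
import numpy as np

def prob_correct(theta, b):
    # Rasch probability of a correct response at ability theta, difficulty b
    return 1.0 / (1.0 + np.exp(-(theta - b)))

def eap_estimate(responses, b_admin, grid=np.linspace(-4, 4, 81)):
    # EAP ability estimate and posterior SD, given administered items and responses
    prior = np.exp(-0.5 * grid**2)                      # standard normal prior
    like = np.ones_like(grid)
    for u, b in zip(responses, b_admin):
        p = prob_correct(grid, b)
        like *= p**u * (1 - p)**(1 - u)
    post = prior * like
    post /= post.sum()
    theta = float(np.sum(grid * post))
    se = float(np.sqrt(np.sum((grid - theta)**2 * post)))
    return theta, se

def post_hoc_cat(full_responses, difficulties, max_items=10):
    # Re-administer one examinee's paper-and-pencil response vector adaptively
    administered, responses = [], []
    theta, se = 0.0, 1.0
    while len(administered) < max_items:
        # pick the unused item with maximum Fisher information at the current theta
        info = np.array([
            prob_correct(theta, b) * (1 - prob_correct(theta, b))
            if i not in administered else -np.inf
            for i, b in enumerate(difficulties)
        ])
        item = int(np.argmax(info))
        administered.append(item)
        responses.append(int(full_responses[item]))     # look up the real response
        theta, se = eap_estimate(responses, difficulties[administered])
    return theta, se, administered

# Illustrative run: simulate one examinee's full paper-and-pencil response vector
rng = np.random.default_rng(0)
bank_b = rng.normal(0, 1, size=30)                      # hypothetical item difficulties
full = (rng.random(30) < prob_correct(0.5, bank_b)).astype(int)
theta_hat, se_hat, items_used = post_hoc_cat(full, bank_b, max_items=10)
print(theta_hat, se_hat, items_used)

In a real post-hoc study the difficulty (and discrimination/guessing) parameters would come from the calibrated item bank and the response vector from the examinee's actual paper-and-pencil administration, so the simulation only reorders and truncates responses that were really given.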