The difficulty index, discrimination and distracter efficiency of a college-level exam paper were analyzed as an input for future test development. The exam papers of 176 first-year regular pre-service diploma students at Gondar CTE were analyzed using descriptive analysis. Difficulty indices and distracter efficiencies were calculated using Microsoft Excel 2007; other test statistics, such as the mean, bi-serial correlations and reliability coefficients, were computed using SPSS version 20. Results showed that the mean test score, out of 31, was 17.23 ± 3.85. The average difficulty and discrimination indices were 0.56 (SD 0.20) and 0.16 (SD 0.28), respectively, and the mean distracter efficiency was 92.1% (SD 17.2%). The reliability of the test was 0.58. Of the 31 items, 13 (41.9%) were either too easy or too difficult, and only two items showed good or excellent discrimination power. Inconsistent option formats and inappropriate stems were also observed in the exam paper. Based on these results, the college-level exam paper has an acceptable difficulty index and distracter efficiency. However, the average discrimination power of the exam was very low (0.16, acceptable ≥ 0.4), and the internal consistency reliability was below the acceptable level (0.58, acceptable ≥ 0.7). Thus, future test development interventions should give due emphasis to item reliability, discrimination coefficients and item face validity.

Higher education institutes need to combine different approaches and instruments for assessing students (5), because students' assessment and evaluation are an integral part of the teaching-learning process (2). Assessments should be relevant while tracking each student's performance in a given course. Considering this, instructors at higher institutions must be aware of the quality and reliability of their tests; otherwise, the final results may be influenced by the test itself, which could lead to a biased assessment (5). Instructors usually receive little or no training on assessment quality, and when training is given it does not focus on test-construction strategies or item-writing rules but only on large-scale test administration and standardized test score interpretation (2). Tavakol and Dennick (2011) pointed out the importance of post-exam item analysis for improving the quality and reliability of assessments.

Item analysis is the process of collecting, summarizing and using information from students' responses to assess the quality of test items (21). It allows teachers to identify items that are too difficult or too easy, items that do not discriminate between high- and low-ability students, and items that have implausible distracters (2, 3). In these cases, teachers can remove items that are too easy or too difficult.
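The excerpt does not spell out the computational forms behind these indices; the conventional definitions, given here for reference under the assumption that the standard two-group method and the usual 5% functionality criterion were used, are:

Difficulty index: P = R / N, where R is the number of examinees answering the item correctly and N is the total number of examinees (higher P means an easier item; the reported mean was P = 0.56).

Discrimination index: D = (R_U − R_L) / n, where R_U and R_L are the numbers of correct responses in the upper and lower scoring groups (commonly the top and bottom 27% of examinees) and n is the number of examinees in each group.

Distracter efficiency: DE = 100 × (functional distracters) / (total distracters), where a distracter is conventionally counted as functional when at least 5% of examinees select it.

Internal consistency (Cronbach's α, equivalent to KR-20 for dichotomous items): α = [k / (k − 1)] × (1 − Σ p_i q_i / σ_x²), where k is the number of items, p_i is the proportion answering item i correctly, q_i = 1 − p_i, and σ_x² is the variance of the total scores.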