This study aimed to evaluate the reliability and validity of the College Entrance Test (CET) used by a teacher education institution to assess the general and specialization knowledge of prospective students in English, Mathematics, Science and Social Studies. The Rasch model was employed to analyze data collected from a sample of 250 test takers for the general knowledge test, 122 for specialization in English, 74 for Mathematics, 122 for Science and 77 for Social Studies. The analysis examined person and item reliability, unidimensionality, the person-item map, fit statistics, Point Measure Correlation (PTMEA) and item local dependence. The findings indicated that some test components had poor person reliability and that item difficulty generally exceeded the test takers' ability levels. There were also concerns about conformity to the unidimensionality criterion, suggesting that some items may form a secondary dimension within the test constructs, although the items themselves were found to be locally independent. Furthermore, some items misfit or overfit the Rasch model. In conclusion, the CET needs improvement to ensure its quality as a reliable and valid selection tool for the college. The results provide significant insights into the CET's strengths and weaknesses that can guide test developers in revising and enhancing the CET so that it effectively measures the general and specialization knowledge of test takers in English, Mathematics, Science and Social Studies.
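For reference, the analyses summarized above rest on the dichotomous Rasch model, which expresses the probability that a person answers an item correctly as a function of the difference between person ability and item difficulty (both in logits). The rendering below uses conventional notation (\(\beta_n\) for person ability, \(\delta_i\) for item difficulty) and is an illustrative sketch of the model rather than the study's exact parameterization:

\[
P(X_{ni} = 1 \mid \beta_n, \delta_i) = \frac{\exp(\beta_n - \delta_i)}{1 + \exp(\beta_n - \delta_i)}
\]

Under this formulation, person reliability, fit statistics and the person-item map all derive from comparing observed responses \(X_{ni}\) with the probabilities the model predicts from the estimated \(\beta_n\) and \(\delta_i\).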