The present study investigates the fairness of an English reading comprehension test, employing Kunnan's (2004) test fairness framework (TFF) as the most comprehensive model available for test fairness. The participants comprised 300 freshman students taking a general English course, selected through availability sampling, along with three test developers and seven university officials who administered the test. The main instrument was a teacher-made reading comprehension test, which was examined for validity, reliability, and differential item functioning (DIF). To examine the remaining TFF modules, namely access, administration, and social consequences, a questionnaire and semi-structured interviews with the test developers and test administrators were used. To analyze the data, exploratory factor analysis was conducted in Minitab to evaluate test validity, t-tests and ANOVA were run in SPSS to examine the disparate impact of the test, and the Mantel-Haenszel procedure was applied to detect DIF, with the required formulas coded in R. The frequencies of different aspects of access and test administration are reported and triangulated with qualitative data gleaned from the interview sessions. Examining the data against the TFF modules led to the conclusion that the test, although fair in all other respects, needs improvement in terms of validity. The statistical procedures and the mixed-methods design implemented in this study can serve as a sound model for test developers seeking to enhance the fairness of their exams.
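As a sketch of the DIF analysis described above, the following R function illustrates one way the Mantel-Haenszel procedure can be coded for a single dichotomous item, stratifying examinees by total score and using base R's stats::mantelhaen.test for the common odds ratio and chi-square statistic. The function and variable names (mh_dif, item, total, group) are illustrative assumptions rather than the study's actual code, and the ETS A/B/C classification shown applies the magnitude-only rule, omitting the significance check used in the full ETS criteria.

```r
# Mantel-Haenszel DIF for one dichotomous item (illustrative sketch).
# item:  0/1 vector of scores on the studied item
# total: matching criterion, e.g. total test score (stratifying variable)
# group: two-level factor (reference vs. focal group)
mh_dif <- function(item, total, group) {
  strata <- factor(total)
  # 2 x 2 x K contingency array: response (correct/incorrect) x group x score level
  tab <- table(factor(item, levels = c(1, 0)), group, strata)
  # Drop strata with an empty margin; they contribute nothing to the MH statistic
  keep <- apply(tab, 3, function(m) all(rowSums(m) > 0) && all(colSums(m) > 0))
  tab <- tab[, , keep, drop = FALSE]
  mh <- stats::mantelhaen.test(tab, correct = TRUE)
  alpha <- unname(mh$estimate)   # common odds ratio across score levels
  delta <- -2.35 * log(alpha)    # ETS delta metric (MH D-DIF)
  # Simplified ETS classification by magnitude only:
  # A = negligible, B = moderate, C = large DIF
  cls <- if (abs(delta) < 1) "A" else if (abs(delta) <= 1.5) "B" else "C"
  list(chi_sq = unname(mh$statistic), p = mh$p.value,
       odds_ratio = alpha, delta_MH = delta, ets_class = cls)
}
```

In practice such a function would be run once per item, with the matching criterion computed as the examinee's total score (or the rest-score excluding the item under study), and items flagged as class C would be reviewed for potential bias.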