In text classification, texts are represented as high-dimensional, sparse vectors whose dimensionality equals the size of the vocabulary over all texts. Using every term for classification degrades both accuracy and efficiency. Feature selection algorithms choose the features most relevant to the text categories and thereby reduce the dimensionality of the representation. In this paper, we propose a new feature ranking metric, the category distribution ratio (CDR), which estimates the significance of a term from its true positive rate, its false positive rate, and the difference between them. To demonstrate the effectiveness of the proposed feature selection algorithm, we compare its performance against six metrics (balanced accuracy measure (ACC), odds ratio (OR), max-min ratio (MMR), Gini index (GI), normalized difference measure (NDM), and chi-square (CHI)) on three benchmark data sets (20 Newsgroups, Ohsumed, Reuters-21578) using multinomial naive Bayes, support vector machine, and k-nearest neighbor classifiers. The experimental results show that CDR-based feature selection achieves a higher macro-F1 score than the other six algorithms.
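As a minimal sketch of the kind of per-term statistics CDR builds on, the snippet below computes each term's true positive rate and false positive rate from a binary document-term matrix and ranks terms by a discriminativeness score. The scoring formula here is an illustrative placeholder combining the rates and their difference; it is not the paper's exact CDR definition, which the abstract does not state.

```python
import numpy as np

def term_rates(X, y):
    """Per-term true positive and false positive rates.

    X: binary document-term matrix of shape (n_docs, n_terms)
    y: binary labels, 1 for the positive category
    tpr[t] = P(term t present | positive document)
    fpr[t] = P(term t present | negative document)
    """
    X = np.asarray(X, dtype=bool)
    y = np.asarray(y, dtype=bool)
    tpr = X[y].mean(axis=0)
    fpr = X[~y].mean(axis=0)
    return tpr, fpr

def rank_terms(X, y, eps=1e-9):
    """Rank terms, most category-discriminative first.

    Placeholder score (NOT the paper's CDR formula): the tpr/fpr
    difference weighted by the ratio of the larger rate to the smaller,
    so terms concentrated in one category score high.
    """
    tpr, fpr = term_rates(X, y)
    score = np.abs(tpr - fpr) * np.maximum(tpr, fpr) / (np.minimum(tpr, fpr) + eps)
    return np.argsort(score)[::-1]
```

Keeping only the top-ranked terms from `rank_terms` yields the reduced representation that the downstream classifiers (multinomial naive Bayes, SVM, kNN) are trained on.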