Feature Selection (FS) is an important problem in data mining and machine learning. Reducing the number of features present in the initial data set is an essential step toward better classification results, lower computation time, and reduced memory consumption. In this article, a novel framework using the Correlation Coefficient (CCE) and Symmetrical Uncertainty (SU) for selecting a subset of features is proposed. The selected features are grouped into a finite number of clusters by ranking their CCE values and comparing their SU values. In each cluster, the feature with the maximum SU value is retained, while the remaining features in the same cluster are discarded. The proposed framework was evaluated on ten real-world benchmark data sets. Experimental results show that the proposed method outperforms the majority of conventional feature selection methods (Information Gain, Chi-Square, Gain Ratio, ReliefF) in accuracy. The method was tested using Tree-Based, Rule-Based, Lazy, and Naive Bayes learners.
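The clustering-and-retention step described above can be sketched in Python. The abstract does not specify the exact clustering rule, so this is only an illustrative greedy variant under stated assumptions: features whose absolute pairwise Pearson correlation exceeds a hypothetical `corr_threshold` are placed in the same cluster, and within each cluster only the feature with the highest SU with respect to the class label is kept.

```python
import numpy as np

def entropy(x):
    # Shannon entropy (base 2) of a discrete-valued array.
    _, counts = np.unique(x, return_counts=True)
    p = counts / counts.sum()
    return float(-np.sum(p * np.log2(p)))

def symmetrical_uncertainty(x, y):
    # SU(X, Y) = 2 * IG(X; Y) / (H(X) + H(Y)), where IG is mutual information.
    hx, hy = entropy(x), entropy(y)
    # Joint entropy computed over paired symbols.
    joint = entropy(np.array([f"{a}|{b}" for a, b in zip(x, y)]))
    ig = hx + hy - joint
    denom = hx + hy
    return 2.0 * ig / denom if denom > 0 else 0.0

def select_features(X, y, corr_threshold=0.6):
    """Illustrative sketch (not the paper's exact algorithm):
    group features by pairwise |Pearson correlation| >= corr_threshold,
    then keep the feature with maximum SU(feature, class) per group."""
    n_features = X.shape[1]
    su = [symmetrical_uncertainty(X[:, j], y) for j in range(n_features)]
    corr = np.corrcoef(X, rowvar=False)
    unassigned = set(range(n_features))
    selected = []
    # Visit features in descending SU order, so the first member of each
    # cluster is also the one with the maximum SU value.
    for j in sorted(range(n_features), key=lambda j: -su[j]):
        if j not in unassigned:
            continue
        cluster = {k for k in unassigned if abs(corr[j, k]) >= corr_threshold}
        cluster.add(j)
        unassigned -= cluster
        selected.append(j)  # retained: max-SU feature of its cluster
    return sorted(selected)
```

For example, if two features are strongly correlated with each other and a third is independent, the sketch retains one feature from the correlated pair (the one with higher SU) plus the independent feature.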