As filter models, rough set-based methods are among the effective approaches to attribute reduction (also called feature selection) that preserve the meaning of the features. In rough set theory, researchers have mainly focused on extending the classical rough set model (the Pawlak model, for short) and developing efficient attribute reduction algorithms. However, very little work has addressed the evaluation of the quality of an attribute reduction, beyond the cardinality of the given attribute subset P, the so-called approximation quality of P, or other equivalent criteria induced by the Pawlak model. Although this discrimination strategy is simple and effective in most cases, it is very difficult to guarantee that the attribute reduction(s) selected from a large pool of reductions are the best or the top n, especially when many reductions share the same cardinality and approximation quality. Therefore, in this paper, we incorporate a margin criterion into the proposed evaluation mechanism to guarantee the effectiveness of the selected attribute subsets, since the margin, originally designed for binary classification with support vector machines, largely determines generalization ability. An improved discernibility function-based algorithm is also proposed. To test the effectiveness of the proposed method, the algorithm is evaluated on UCI benchmark datasets. Preliminary experimental results show that, among reducts with the same cardinality, attribute reductions with larger margins perform better than, or comparably to, those with relatively small margins. Thus, the newly developed method can, in most cases, obtain more effective attribute subsets.
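For concreteness, the two quantitative notions named above can be recalled in their standard forms; the notation below is the conventional one and is not taken from the paper itself. The first is the Pawlak approximation quality of an attribute subset P with respect to the decision attribute D; the second is the geometric margin of a canonical separating hyperplane in a hard-margin binary support vector machine.

\[
  \gamma_P(D) \;=\; \frac{\bigl\lvert \bigcup_{X \in U/D} \underline{P}(X) \bigr\rvert}{\lvert U \rvert},
  \qquad
  \mathrm{margin}(w,b) \;=\; \frac{2}{\lVert w \rVert}
  \quad \text{for the hyperplane } w^{\top}x + b = 0 .
\]

Here $U$ is the universe of objects, $U/D$ the partition induced by $D$, and $\underline{P}(X)$ the $P$-lower approximation of $X$. Two reducts that tie on both $\lvert P \rvert$ and $\gamma_P(D)$ are precisely the case in which the abstract argues a margin-based criterion can still discriminate.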