Associative Classification (AC) is regarded as one of the most attractive classification approaches for supporting managers in prediction and decision making, owing to its high accuracy and easy interpretability. Most existing AC algorithms focus on two static metrics of association rules: support and confidence. In this paper, we identify a potential limitation of confidence and point out that these AC algorithms consider only the explicit classification information and knowledge of Class Association Rules (CARs) while neglecting the implicit classification information and knowledge of CARs. Even when the explicit classification information of two CARs is equivalent, their implicit classification information may differ. Consequently, such CARs exert different influences on prediction or decision making in terms of the interpretability and rationality of classification, and can introduce predictive or decision-making bias in practice. To address this drawback, we introduce the notion of information entropy and propose an innovative approach for associative classification based on the information entropy of frequent attribute sets, named EAC. Unlike existing schemes, the proposed EAC algorithm offers the following merits: (1) lower predictive or decision-making bias, by leveraging the information entropy of frequent attribute sets; (2) better interpretability and rationality, by setting a higher support threshold; and (3) dynamic selection of a globally optimal parameter through repeated trials while reducing the dimensionality of the data sets.
Experiments on 20 well-known benchmark data sets demonstrate that our EAC approach is highly competitive with other state-of-the-art AC algorithms in terms of predictive or decision-making bias, interpretability, and efficiency, and that it can be used to construct a new classifier efficiently and effectively in many realistic scenarios.

INDEX TERMS Data mining, associative classification, information entropy, frequent attribute set.
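The central observation of the abstract is that two CARs can have equal confidence yet differ in how the class labels are distributed over the transactions matching their frequent attribute sets, and that Shannon entropy captures this difference. The following minimal sketch illustrates that entropy computation; the function name and the toy class distributions are illustrative assumptions, not taken from the paper:

```python
from collections import Counter
from math import log2

def class_entropy(class_labels):
    """Shannon entropy (in bits) of the class distribution among the
    transactions that contain a given frequent attribute set."""
    counts = Counter(class_labels)
    total = sum(counts.values())
    return -sum((c / total) * log2(c / total) for c in counts.values())

# Two hypothetical CARs predicting class "yes" with identical confidence
# (8/10 = 0.8), but with different distributions over the remaining labels:
labels_a = ["yes"] * 8 + ["no"] * 2                  # attribute set A
labels_b = ["yes"] * 8 + ["no"] * 1 + ["maybe"] * 1  # attribute set B

print(class_entropy(labels_a))  # lower entropy: classes more concentrated
print(class_entropy(labels_b))  # higher entropy: classes more dispersed
```

Under this sketch, a classifier preferring lower-entropy attribute sets would favor rule A over rule B even though their confidence values are identical, which is one way the predictive bias described in the abstract could be reduced.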