Abstract: Many real-world problems require the development and application of algorithms that automatically generate human-interpretable knowledge from historical data. Most existing algorithms for rule induction from imprecise data follow the precise approach, in which the definitions of the fuzzy sets intended to capture certain vague concepts may be modified so that they fit the data. Such approaches typically destroy the original semantics, or meaning, of the given fuzzy sets, which often leads to a loss of transparency in the resulting models. To overcome this fundamental limitation, a descriptive approach has been proposed in which human-defined fuzzy sets are not allowed to be modified. However, because the fuzzy set definitions cannot be modified, and only a small number of them are normally available, only a limited number of rules are derivable. Such rules are not very flexible and, in many cases, will not fit the data well. To address this important issue, at least partially, linguistic hedges have been introduced to provide a more adaptable means of learning from data, thereby offering greater flexibility in domain knowledge representation and extraction. Following this approach, this paper presents a novel rule induction mechanism that extends a learning classifier system (XCS) by employing linguistic hedges. The resulting fuzzy XCS classifier with linguistic hedges is evaluated on a real-world forensic glass classification problem. The results demonstrate that including hedges to support finer granularity in linguistic fuzzy modelling improves the accuracy of the resulting classifiers, whilst preserving the interpretability of the learned models. This approach not only offers the user rules to decide on classes, but also rules to decide which classes to discard. It also inherits from XCS the ability to deal with data that involves imbalanced classes.
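To illustrate the core idea behind linguistic hedges, the sketch below applies the two classical hedge operators, concentration ("very", squaring the membership degree) and dilation ("somewhat", taking its square root), to a human-defined triangular fuzzy set. The fuzzy set itself is never altered; the hedges merely reshape its membership curve, which is how finer rule granularity is obtained without breaking the original semantics. The function names and the triangular set used here are illustrative choices, not definitions taken from the paper.

```python
import math

def tri(a, b, c):
    """Triangular fuzzy membership function with feet at a and c, peak at b."""
    def mu(x):
        if x <= a or x >= c:
            return 0.0
        if x <= b:
            return (x - a) / (b - a)
        return (c - x) / (c - b)
    return mu

# Classical hedge operators: they transform membership degrees,
# leaving the underlying human-defined fuzzy set untouched.
def very(mu):
    """Concentration hedge: sharpens the set (e.g. 'very medium')."""
    return lambda x: mu(x) ** 2

def somewhat(mu):
    """Dilation hedge: broadens the set (e.g. 'somewhat medium')."""
    return lambda x: math.sqrt(mu(x))

# A hypothetical linguistic term 'medium' on the unit interval.
medium = tri(0.0, 0.5, 1.0)
x = 0.25
print(medium(x))            # 0.5
print(very(medium)(x))      # 0.25
print(somewhat(medium)(x))  # ~0.7071
```

A hedged rule such as "IF refractive-index is very medium THEN class is window" can thus be expressed using only the original fuzzy set plus an operator, preserving interpretability.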