Abstract. Decision trees based on information theory are useful paradigms for learning from examples. However, in some real-world applications, known information-theoretic methods frequently generate non-monotonic decision trees, in which objects with better attribute values are sometimes classified into lower classes than objects with inferior values. This property is undesirable for problem solving in many application domains, such as credit scoring and insurance premium determination, where monotonicity of subsequent classifications is important. An attribute-selection metric is proposed here that takes both error and monotonicity into account while building decision trees. The metric is empirically shown to be capable of significantly reducing the degree of non-monotonicity of decision trees without sacrificing their inductive accuracy.
Ordinal reasoning plays a major role in human cognition. This paper identifies an important class of classification problems for patterns taken from ordinal domains and presents efficient, incremental algorithms for learning the classification rules from examples. We show that by adopting a monotonicity assumption on the output with respect to the input, inconsistencies among examples can be easily detected and the number of possible classification rules substantially reduced. By adopting a conservative classification criterion, the required number of rules decreases further. The monotonicity and conservatism of the classification also enable the resolution of conflicts among inconsistent examples and the graceful handling of don't-knows and don't-cares during the learning and classification phases. Two typical examples in which the suggested classification model works well are given: the first is taken from the financial domain and the second from machining.
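The inconsistency detection that the abstract describes can be sketched briefly. Under the monotonicity assumption, two labeled examples are inconsistent when one dominates the other on every attribute yet receives a strictly lower class label. The following is a minimal illustration of that check, not the authors' algorithm; the function names and the credit-scoring data are invented for the example:

```python
from itertools import combinations

def dominates(a, b):
    """True if every attribute of a is >= the corresponding attribute of b."""
    return all(x >= y for x, y in zip(a, b))

def monotonicity_violations(examples):
    """Return pairs of examples that are inconsistent under the
    monotonicity assumption: the first example of each returned pair
    dominates the second on all attributes yet has a lower label."""
    violations = []
    for (xa, la), (xb, lb) in combinations(examples, 2):
        if dominates(xa, xb) and la < lb:
            violations.append(((xa, la), (xb, lb)))
        elif dominates(xb, xa) and lb < la:
            violations.append(((xb, lb), (xa, la)))
    return violations

# Hypothetical credit-scoring data: (income, collateral) -> credit class.
examples = [
    ((3, 2), 1),
    ((5, 4), 2),
    ((6, 5), 1),  # dominates (5, 4) on both attributes but gets a lower class
]
```

A quadratic pairwise scan like this is enough for small example sets; the abstract's incremental algorithms would instead check each new example only against the current rule set as it arrives.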