1995
DOI: 10.1007/bf00994659

Monotonicity maintenance in information-theoretic machine learning algorithms

Abstract: Decision trees based on information theory are useful paradigms for learning from examples. However, in some real-world applications, known information-theoretic methods frequently generate nonmonotonic decision trees, in which objects with better attribute values are sometimes classified to lower classes than objects with inferior values. This property is undesirable for problem solving in many application domains, such as credit scoring and insurance premium determination, where monotonicity…
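The nonmonotonicity the abstract describes can be made concrete: a tree is nonmonotonic when one example dominates another on every attribute yet is assigned a strictly lower class. A minimal sketch of counting such violating pairs in a labelled sample (function and variable names are illustrative, not from the paper):

```python
# Sketch of counting monotonicity violations in a labelled sample:
# pairs where one example dominates another on every attribute yet
# receives a strictly lower class label.

def monotonicity_violations(X, y):
    """Count ordered pairs (i, j) with x_i >= x_j componentwise
    but y_i < y_j."""
    violations = 0
    for i, (xi, yi) in enumerate(zip(X, y)):
        for j, (xj, yj) in enumerate(zip(X, y)):
            if i != j and all(a >= b for a, b in zip(xi, xj)) and yi < yj:
                violations += 1
    return violations

# A dominating example with the lower label is one violation.
print(monotonicity_violations([(2, 2), (1, 1)], [0, 1]))  # 1
```

A monotone labelling of the same sample, e.g. labels `[1, 0]`, yields zero violations.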

Cited by 97 publications (86 citation statements)
References 12 publications
“…in machine learning [12,28,90] and fuzzy modelling [128,155,161]. However, real-life data is often imperfect and does not fully comply with the monotonicity hypothesis.…”
Section: A Ranking Rule Based On Monotonicity
Mentioning (confidence: 99%)
“…(6.11). In terms of the variables of the ILP problem, this cost is given by (6.12), where the values o_i represent the corresponding values of the votrix induced by the given profile of rankings.…”
Section: Search For A Monotone (Quasi)votrix
Mentioning (confidence: 99%)
“…Most notably, it combines three features in a non-trivial way, namely monotonicity, nonlinearity and interpretability. As for the first, a monotone dependence between the input and output attributes is often desirable in a classification setting and sometimes even requested by the application [11,12,13]. At the same time, the Choquet integral also allows for modeling interactions between different attributes in a flexible, nonlinear way.…”
Section: Introduction
Mentioning (confidence: 99%)
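The discrete Choquet integral referenced in this citation statement can be sketched as follows. The capacity (a monotone set function) and the attribute scores used here are illustrative assumptions, not values from the cited work:

```python
# Sketch of the discrete Choquet integral over a finite attribute set,
# w.r.t. a capacity: a monotone set function with mu(empty set) = 0 and
# mu(full set) = 1. Nonadditive capacities model attribute interaction.

def choquet(values, capacity):
    """Choquet integral of attribute scores `values` (dict: attribute ->
    score) w.r.t. `capacity` (dict: frozenset of attributes -> weight)."""
    total, prev = 0.0, 0.0
    remaining = set(values)
    # Sum the score increments, each weighted by the capacity of the
    # set of attributes whose score is at least that level.
    for attr, v in sorted(values.items(), key=lambda kv: kv[1]):
        total += (v - prev) * capacity[frozenset(remaining)]
        prev = v
        remaining.discard(attr)
    return total

# Illustrative capacity with a superadditive interaction between a and b.
cap = {frozenset(): 0.0, frozenset({"a"}): 0.3,
       frozenset({"b"}): 0.6, frozenset({"a", "b"}): 1.0}
print(choquet({"a": 0.2, "b": 0.5}, cap))  # approximately 0.38
```

Because the capacity is monotone (larger attribute sets never get smaller weight), the resulting aggregation is monotone in each attribute score, which is the property the quoted passage emphasizes.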
“…In this paper we present a new algorithm, called ICT, for learning monotone classification trees for problems with ordered class labels. Our approach differs from earlier monotone tree algorithms such as [5,18,11] in that we adjust the probability estimates in the leaf nodes in case of a violation. This is done in such a way that, subject to the monotonicity constraint, the sum of absolute prediction errors on the training sample is minimized.…”
Section: Introduction
Mentioning (confidence: 99%)
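The ICT adjustment itself is not reproduced here, but the kind of constrained correction it describes (minimizing the sum of absolute errors subject to a monotonicity constraint) can be illustrated, for a linearly ordered sequence of leaf estimates, by pool-adjacent-violators with block medians. This is a generic L1 isotonic-regression sketch, not the cited algorithm:

```python
# Generic L1 isotonic adjustment via pool-adjacent-violators with block
# medians; shown only to illustrate constrained error minimization.
import statistics

def isotonic_l1(values):
    """Return a nondecreasing sequence minimizing the sum of absolute
    deviations from `values`."""
    blocks = [[v] for v in values]
    i = 0
    while i < len(blocks) - 1:
        if statistics.median(blocks[i]) > statistics.median(blocks[i + 1]):
            blocks[i].extend(blocks.pop(i + 1))  # pool the violating pair
            i = max(i - 1, 0)                    # re-check backwards
        else:
            i += 1
    # Each input position takes its block's median.
    return [statistics.median(b) for b in blocks for _ in b]

print(isotonic_l1([1, 3, 2]))  # [1, 2.5, 2.5]
```

The median is the block minimizer for the L1 objective; with squared error one would pool to block means instead.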