Rank discrimination measures for enforcing monotonicity in decision tree induction (2015)
DOI: 10.1016/j.ins.2014.08.045

Cited by 28 publications (25 citation statements)
References 29 publications (79 reference statements)
“…Some quantification of non-monotonicity in a data set has been investigated in the works of Marsala and Petturiti and Milstein et al. Suppose a data set $X = \{x_i\}_{i=1}^N$ contains $n$ condition attributes $\{A_k\}_{k=1}^n$ and one decision attribute $D$. $X$ is not monotone consistent if it contains at least one pair $x_i, x_h \in X$ satisfying one of the following conditions: (i) $(v(A_1, x_i), \dots, v(A_n, x_i)) \le (v(A_1, x_h), \dots, v(A_n, x_h))$ and $v(D, x_i) > v(D, x_h)$; (ii) $(v(A_1, x_i), \dots, v(A_n, x_i)) \ge (v(A_1, x_h), \dots, v(A_n, x_h))$ and $v(D, x_i) < v(D, x_h)$. …”
Section: A Fast Rank Mutual Information Based Decision Tree
confidence: 99%
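The definition quoted above translates directly into a pairwise dominance check. Below is a minimal NumPy sketch of that check; the function name and the array encoding of $v(A_k, x_i)$ are my own illustration, not code from the cited works. Note that condition (ii) is condition (i) with $i$ and $h$ swapped, so scanning all ordered pairs against condition (i) covers both.

```python
import numpy as np

def is_monotone_consistent(X, y):
    """Return True if no pair of instances violates monotonicity.

    X : (N, n) array; X[i, k] holds the value of condition
        attribute A_{k+1} on instance x_i.
    y : (N,) array; y[i] holds v(D, x_i), the decision value.
    """
    N = len(X)
    for i in range(N):
        for h in range(N):
            # Condition (i): x_i is dominated componentwise by x_h
            # yet carries a strictly larger decision value. Checking
            # all ordered pairs also covers the symmetric condition (ii).
            if np.all(X[i] <= X[h]) and y[i] > y[h]:
                return False
    return True

# A two-instance example: the second instance dominates the first
# on both attributes but receives a smaller decision value.
X = np.array([[1, 2],
              [2, 3]])
y = np.array([1, 0])
print(is_monotone_consistent(X, y))  # False
```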
“…For monotonic classification problems, in order to make some comparisons with several different splitting-rule-based monotonic decision trees described in the work of Marsala and Petturiti, we employ the same data sets, that is, 7 pre-processed UCI benchmark data sets with $NMI_1$ less than 12%, as shown in Table . In the work of Marsala and Petturiti, all these data sets are pre-processed by removing all the instances with missing values and applying the WEKA filter Discretize to discretize real attributes; moreover, integer attributes have been converted to ordinal ones by using the WEKA filter NumericToNominal, both contained in weka.filters.unsupervised.attribute.…”
Section: Experimental Studies
confidence: 99%
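For readers who want to reproduce this setup without WEKA, the sketch below approximates the same three steps in pandas. The function name, the column-list arguments, and the 10-bin equal-width default are assumptions on my part; the cited experiments used the actual WEKA filters named above.

```python
import pandas as pd

def weka_like_preprocess(df, real_cols, int_cols, n_bins=10):
    """Rough pandas analogue of the cited WEKA preprocessing."""
    # Remove all instances with missing values.
    df = df.dropna()
    # Equal-width binning of real-valued attributes, approximating
    # WEKA's unsupervised Discretize filter with default settings.
    for col in real_cols:
        df[col] = pd.cut(df[col], bins=n_bins, labels=False)
    # Treat integer attributes as ordered categories, approximating
    # WEKA's NumericToNominal filter.
    for col in int_cols:
        df[col] = pd.Categorical(df[col], ordered=True)
    return df
```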
“…The other algorithms are all supervised algorithms and have different advantages and disadvantages: LDA is effective in many classification problems [31]; GMSS is more robust than LDA when the subspace dimension is less than C − 1 (C is the number of categories) [44]; and DLA and SLPP are top-level manifold learning algorithms that consider both local geometric structure and discriminative information preservation from different perspectives.…”
Section: Baselines and Performance Evaluation
confidence: 99%