2012
DOI: 10.1007/s10994-012-5318-3

Learning monotone nonlinear models using the Choquet integral

Abstract: The learning of predictive models that guarantee monotonicity in the input variables has received increasing attention in machine learning in recent years. By trend, the difficulty of ensuring monotonicity increases with the flexibility or, say, nonlinearity of a model. In this paper, we advocate the so-called Choquet integral as a tool for learning monotone nonlinear models. While being widely used as a flexible aggregation operator in different fields, such as multiple criteria decision making, the Choquet i…
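For readers unfamiliar with the aggregation operator the abstract refers to, the following is a minimal sketch of the discrete Choquet integral. The criterion names and capacity values are illustrative assumptions, not taken from the paper.

```python
def choquet_integral(x, capacity):
    """Discrete Choquet integral of x (dict: criterion -> value) with respect
    to a capacity (dict: frozenset of criteria -> weight), assumed normalized
    (weight 0 on the empty set, 1 on the full set) and monotone under set
    inclusion."""
    order = sorted(x, key=x.get)          # x_(1) <= ... <= x_(n)
    total, prev = 0.0, 0.0
    for i, crit in enumerate(order):
        # A_(i): criteria whose value is at least x_(i)
        total += (x[crit] - prev) * capacity[frozenset(order[i:])]
        prev = x[crit]
    return total

# Hypothetical capacity on two criteria; the sub-additive pair weight
# (0.6 + 0.6 > 1.0) models partially redundant criteria.
mu = {
    frozenset(): 0.0,
    frozenset({"a"}): 0.6,
    frozenset({"b"}): 0.6,
    frozenset({"a", "b"}): 1.0,
}
# 0.4 * mu({a, b}) + (0.9 - 0.4) * mu({b}) = 0.4 + 0.3 = 0.7
print(choquet_integral({"a": 0.4, "b": 0.9}, mu))
```

With an additive capacity this reduces to a weighted mean; non-additive weights are what let the operator express interaction among criteria while keeping monotonicity.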

Cited by 80 publications (43 citation statements) · References 32 publications

“…Methods of the second paradigm are statistical. One can mention as an example the extension of Logistic Regression to utility models incorporating interaction among criteria (Fallah Tehrani et al., 2012). Here the preference information is put into the function to optimize, and the global problem to solve is often a convex problem under linear constraints, mostly monotonicity conditions.…”
Section: Significance of the Main Theorem (mentioning)
confidence: 99%
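To make the quoted remark about "a convex problem under linear constraints" concrete, here is a hedged sketch, not taken from the cited paper: for fixed alternatives the Choquet integral is linear in the capacity values, so monotonicity conditions and pairwise preference constraints together form a linear (hence convex) feasibility problem. The criterion names, the preference pair, and the margin eps below are illustrative assumptions.

```python
from itertools import combinations

import numpy as np
from scipy.optimize import linprog

criteria = ("a", "b", "c")                                   # hypothetical criteria
subsets = [frozenset(s) for r in range(1, len(criteria))
           for s in combinations(criteria, r)]               # proper, nonempty subsets
col = {A: j for j, A in enumerate(subsets)}                  # LP variable index of mu(A)

def choquet_as_linear(x):
    """Express C_mu(x) = const + coeffs @ mu_vector, with mu(N) = 1 fixed."""
    order = sorted(x, key=x.get)
    const, coeffs, prev = 0.0, np.zeros(len(subsets)), 0.0
    for i, crit in enumerate(order):
        upper = frozenset(order[i:])
        if upper == frozenset(criteria):
            const += x[crit] - prev                          # mu(N) = 1 gives a constant term
        else:
            coeffs[col[upper]] += x[crit] - prev
        prev = x[crit]
    return const, coeffs

A_ub, b_ub = [], []

# Monotonicity conditions mu(A) <= mu(A | {i}), written as linear rows.
for A in subsets:
    for i in criteria:
        B = A | {i}
        if i not in A and B in col:
            row = np.zeros(len(subsets))
            row[col[A]], row[col[B]] = 1.0, -1.0
            A_ub.append(row)
            b_ub.append(0.0)

# Preference information: p is preferred to q  ->  C_mu(p) >= C_mu(q) + eps.
preferences = [({"a": 0.9, "b": 0.2, "c": 0.5}, {"a": 0.4, "b": 0.4, "c": 0.4})]
eps = 0.05
for p, q in preferences:
    cp, vp = choquet_as_linear(p)
    cq, vq = choquet_as_linear(q)
    A_ub.append(vq - vp)
    b_ub.append((cp - cq) - eps)

res = linprog(c=np.zeros(len(subsets)), A_ub=np.array(A_ub), b_ub=np.array(b_ub),
              bounds=[(0.0, 1.0)] * len(subsets), method="highs")
if res.success:
    print({tuple(sorted(A)): round(v, 3) for A, v in zip(subsets, res.x)})
```

The cited approach additionally wraps such a utility in a logistic link and maximizes a likelihood rather than solving a pure feasibility problem; the linear monotonicity constraints on the capacity are the part this sketch illustrates.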
“…In [26] Choquet built on the same idea to create the tool now called the Choquet integral; it was revived by Schmeidler [119,120]; its theoretical properties were developed e.g. by Greco [59], Groes et al [61], König [72]; it has found numerous applications, as in statistics and data mining (see Murofushi and Sugeno [95], Grabisch [56], Wang, Leung, and Klir [132], Fallah Tehrani et al [44]), game theory and mathematical economics (see Gilboa and Schmeidler [52], Heilpern [65]), decision theory (see Chateauneuf [24], Grabisch [54,55], Grabisch and Roubens [58], Grabisch and Labreuche [57], Mayag, Grabisch, and Labreuche [86]), insurance and finance (see Chateauneuf, Kast, and Lapied [25], Castagnoli, Maccheroni, and Marinacci [20]).…”
Section: The Idempotent Integral (mentioning)
confidence: 99%
“…• As opposed to many other models used in machine learning, the Choquet integral guarantees monotonicity in all criteria [49]. This is a reasonable property of a utility function which is often required in practice.…”
Section: Learning to Rank Using the Choquet Integral (mentioning)
confidence: 99%
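The monotonicity property stated in the citation above can also be checked numerically. The sketch below uses an illustrative two-criterion capacity (not taken from either paper), perturbs one input upward at random, and verifies that the aggregated score never decreases.

```python
import random

def choquet(x, mu):
    """Discrete Choquet integral of x (dict: criterion -> value) w.r.t. a
    monotone, normalized capacity mu (dict: frozenset -> weight)."""
    order = sorted(x, key=x.get)
    total, prev = 0.0, 0.0
    for i, c in enumerate(order):
        total += (x[c] - prev) * mu[frozenset(order[i:])]
        prev = x[c]
    return total

# Illustrative monotone capacity on two criteria.
mu = {frozenset(): 0.0, frozenset({"a"}): 0.3,
      frozenset({"b"}): 0.5, frozenset({"a", "b"}): 1.0}

# Empirical check of criterion-wise monotonicity: raising any single input
# never lowers the aggregated score.
random.seed(0)
for _ in range(1000):
    x = {"a": random.random(), "b": random.random()}
    c = random.choice(["a", "b"])
    y = dict(x, **{c: min(1.0, x[c] + random.random() * (1 - x[c]))})
    assert choquet(y, mu) >= choquet(x, mu) - 1e-12
print("monotonicity held on 1000 random perturbations")
```

Monotonicity here is a structural consequence of the capacity being monotone under set inclusion, not something that has to be enforced on the fitted model afterwards.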