2014
DOI: 10.1145/2641758

Probabilistic Reframing for Cost-Sensitive Regression

Abstract: Everyday applications of predictive models usually involve full use of the available contextual information. When the operating context changes, one may fine-tune the by-default (incontextual) prediction or may even abstain from predicting a value (a reject). Global reframing solutions, where the same function is applied to adapt the estimated outputs to a new cost context, are one possible solution here. An alternative approach, which has not been studied in a comprehensive way for regression in the knowledg…

Cited by 16 publications (34 citation statements)
References 57 publications

“…One area we are undertaking is a modification of the LL approach where the aggregation function is substituted by a quantification procedure [11,2]. As quantification is able to correct some aggregation problems, we expect some quantification techniques (especially those for regression using crisp regression models [3] or soft regression models [15]) to be beneficial for the LL approach. The set of predictions from the different approaches could also be used as an ensemble, hopefully leading to better results.…”
Section: Discussion
Mentioning (confidence: 99%)
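The statement above contrasts plain aggregation of individual predictions with a quantification procedure that corrects aggregation bias. A minimal sketch of that idea follows, assuming synthetic data and a deliberately biased model; it is illustrative only and not the specific techniques of [3] or [15]:

```python
# Quantification here means estimating the *average* target value of a test
# batch: the naive mean of predictions is corrected by the model's average
# training residual. All data and model choices below are assumptions.
import numpy as np
from sklearn.linear_model import LinearRegression

rng = np.random.default_rng(0)
X_train = rng.normal(size=(200, 3))
y_train = 5.0 + X_train @ np.array([1.5, -2.0, 0.5]) + rng.normal(scale=0.3, size=200)
X_test = rng.normal(loc=0.2, size=(100, 3))
true_test_mean = 5.0 + 0.2 * (1.5 - 2.0 + 0.5)  # expected test mean = 5.0

# fit_intercept=False leaves a systematic bias for the correction to fix.
model = LinearRegression(fit_intercept=False).fit(X_train, y_train)

naive = model.predict(X_test).mean()              # plain aggregation
bias = (y_train - model.predict(X_train)).mean()  # average training residual
quantified = naive + bias                         # corrected aggregate

print(f"true≈{true_test_mean:.2f}  naive={naive:.2f}  quantified={quantified:.2f}")
```

The corrected aggregate recovers the systematic offset that the naive mean of crisp predictions misses, which is the kind of aggregation problem quantification is meant to repair.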
“…However, we think that there are many other possibilities to be explored here, such as techniques that try to improve the MAE (such as the quantile regression explored here), the clipped MAE or A_OCE, the use of soft classification or soft regression methods [19] under the reframing paradigm, or labelling the training dataset using the median as a fixed cutoff and leaving a mapping from classifier scores to cutoffs for deployment time (by using a table, an approximate function or a calibration technique). This could relate the problem to some threshold choice methods in classification, most especially the score-driven and the rate-driven methods [20].…”
Section: Discussion
Mentioning (confidence: 99%)
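Among the MAE-improving techniques listed in this excerpt, quantile regression is the most self-contained to illustrate. The sketch below, assuming scikit-learn's QuantileRegressor and synthetic, asymmetrically-noised data, fits the 0.5 quantile (the median), which directly optimizes absolute error; it is not the paper's experimental setup:

```python
import numpy as np
from sklearn.linear_model import LinearRegression, QuantileRegressor

rng = np.random.default_rng(1)
X = rng.normal(size=(300, 2))
# Exponential noise skews the mean upward; the median is the MAE-optimal target.
y = X @ np.array([2.0, -1.0]) + rng.exponential(scale=1.0, size=300)

median_model = QuantileRegressor(quantile=0.5, alpha=0.0).fit(X, y)
mean_model = LinearRegression().fit(X, y)

mae = lambda m: np.abs(y - m.predict(X)).mean()
print(f"MAE, median regression: {mae(median_model):.3f}")
print(f"MAE, least squares:     {mae(mean_model):.3f}")
```

Under skewed noise the median regressor attains a lower MAE than least squares, which targets the conditional mean.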
“…However, we can find some datasets (e.g. 7, 8, 14, 15, 17, 18, 19) where Logistic Regression obtains a better performance than J48 in the retraining scenario, whilst the situation is reversed in the reframing scenario. On the other hand, A_PUCE approximates the MAE quite well for the Regr-LnR approach and for those datasets with a wide range of cutoffs (types 1, 3 and 5); the values are slightly different because A_PUCE is approximated with a finite number of points.…”
Section: Experiments Comparing Retraining and Reframing
Mentioning (confidence: 97%)
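The remark that A_PUCE only approximates the MAE because it is computed over a finite number of points is a general property of grid-based area estimates. A minimal sketch, assuming a purely hypothetical loss-versus-cutoff curve rather than the paper's actual A_PUCE definition:

```python
import numpy as np

def trapezoid_area(xs, ys):
    """Trapezoidal approximation of the area under the curve (xs, ys)."""
    return float(((ys[1:] + ys[:-1]) / 2 * np.diff(xs)).sum())

# Hypothetical loss as a function of the cutoff; exact area on [0, 1] is ~0.2233.
loss = lambda c: (c - 0.3) ** 2 + 0.1

for n_points in (5, 50, 500):
    cutoffs = np.linspace(0.0, 1.0, n_points)
    print(n_points, round(trapezoid_area(cutoffs, loss(cutoffs)), 4))
```

The estimate tightens as the number of cutoffs grows, which is why a finite-point area measure is only slightly different from the quantity it approximates.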
“…Cost-Sensitive Models [6] recognize the asymmetry of misclassification by specifying the relative cost of predicting an instance as having a class value of Ŷ when its true class value is Y (see [7] for one example of this formulation), and by trying to minimize the overall cost of all predictions. In practice, such detailed cost information may be unavailable or hard to obtain [5].…”
Section: Related Work
Mentioning (confidence: 99%)
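The cost-matrix formulation described in this excerpt can be made concrete with a short sketch: given class probabilities and a matrix C[true, predicted], the minimum-expected-cost decision may differ from the most probable class. The cost values and probabilities below are illustrative assumptions, not figures from [6] or [7]:

```python
import numpy as np

# cost[i, j] = cost of predicting class j when the true class is i
cost = np.array([[0.0, 1.0],    # true class 0: a false alarm costs 1
                 [5.0, 0.0]])   # true class 1: a miss costs 5

def cost_sensitive_predict(proba, cost):
    """Pick, per sample, the class with minimum expected cost.

    proba: (n_samples, n_classes) class probabilities.
    """
    expected_cost = proba @ cost           # (n_samples, n_classes)
    return expected_cost.argmin(axis=1)    # cheapest prediction per sample

proba = np.array([[0.7, 0.3]])
print(cost_sensitive_predict(proba, cost))  # -> [1], despite P(0) > P(1)
```

Here predicting class 0 has expected cost 0.3 × 5 = 1.5 while predicting class 1 costs 0.7 × 1 = 0.7, so the asymmetric costs flip the decision away from the most probable class, which is exactly the asymmetry the excerpt describes.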