Proceedings of the 2008 SIAM International Conference on Data Mining
DOI: 10.1137/1.9781611972788.49

Generic Methods for Multi-criteria Evaluation

Cited by 9 publications (10 citation statements) · References 5 publications
“…In addition to the ACC and AUC metrics we will evaluate each algorithm using the Candidate Evaluation Function (CEF) [23]. The purpose of CEF is to capture application-specific tradeoffs by combining multiple relevant metrics.…”
Section: Multi-criteria Evaluation (mentioning, confidence: 99%)
“…In previous work [7], we identified a number of attractive properties of existing multi-criteria evaluation metrics and presented a generic multi-criteria metric that was designed with these properties in mind. This metric, called the Candidate Evaluation Function (CEF), has the main purpose of combining an arbitrary number of individual metrics into a single quantity.…”
Section: Application-oriented Validation and Evaluation (mentioning, confidence: 99%)
“…As discussed in [7], one study proposes the SAR metric, which combines squared error, success rate, and AUC. This metric may be suitable for a certain application; however, it cannot be used to compare other criteria.…”
Section: Introduction (mentioning, confidence: 99%)
“…For this purpose we suggest the generic multi-criteria metric, CEF [18], which can be used to trade off multiple evaluation metrics when evaluating or selecting between different learning algorithms or classifiers. Each included metric can be associated with an explicit weight and an acceptable range.…”
Section: Multi-criteria Evaluation (mentioning, confidence: 99%)
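The citation statements above describe the idea behind CEF: several evaluation metrics, each with an explicit weight and an acceptable range, are combined into a single score. The sketch below illustrates that general scheme in Python; it is not the exact CEF definition from the cited paper, and the rejection-on-out-of-range behavior and weight normalization are assumptions made for illustration.

```python
# Illustrative sketch of a weighted multi-criteria score with acceptable
# ranges (an assumption-based analogue of CEF, not the paper's formula).
# Each metric value is assumed to lie in [0, 1], with higher being better.

def multi_criteria_score(metrics, weights, ranges):
    """Combine several metric values into one score.

    metrics : dict mapping metric name -> value in [0, 1]
    weights : dict mapping metric name -> non-negative weight
    ranges  : dict mapping metric name -> (lower, upper) acceptable bounds
    """
    # Reject any candidate that violates an acceptable range.
    for name, value in metrics.items():
        lower, upper = ranges[name]
        if not (lower <= value <= upper):
            return 0.0

    # Normalize the weights so they sum to one, then aggregate.
    total_weight = sum(weights.values())
    return sum(weights[name] * metrics[name] for name in metrics) / total_weight
```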
“…When analyzing the prevention tool requirements it is clear that we need to use evaluation metrics for accuracy (of classifying both good and bad applications), time (classification response time), and explainability (for visualization). Mapping the requirements to the available experiment data and making informed choices (see [18]) about bounds and explicit weighting, we can calculate the CEF score for all algorithms included in the experiment. This score is presented in the last column of Tab.…”
Section: Multi-criteria Evaluation (mentioning, confidence: 99%)
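As a rough illustration of the mapping described in the last statement, the sketch above could be applied to the three stated requirements. The metric values, weights, and bounds below are hypothetical and are not taken from the cited experiment.

```python
# Hypothetical inputs for accuracy, classification response time, and
# explainability, all assumed pre-normalized to [0, 1] (higher is better).
metrics = {"accuracy": 0.91, "time": 0.75, "explainability": 0.60}
weights = {"accuracy": 0.5, "time": 0.3, "explainability": 0.2}
ranges  = {"accuracy": (0.8, 1.0), "time": (0.5, 1.0), "explainability": (0.4, 1.0)}

print(multi_criteria_score(metrics, weights, ranges))  # 0.80 for these values
```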