2010 3rd International Congress on Image and Signal Processing
DOI: 10.1109/cisp.2010.5646324
Error analysis of classifiers in machine learning

Cited by 2 publications (3 citation statements)
References 23 publications
“…The results from each prediction run were aggregated, and the final result was selected according to conditions defined over the aggregated results. The conditions were set after error analysis (Ding and Sheng 2010; Bannach-Brown et al. 2019) on particular cases of misclassification and aimed to produce the highest-accuracy result. The purpose of error analysis is to manually examine misclassified examples and find systematic trends in the types of examples on which the algorithm makes errors.…”
Section: Error Analysis and Ensemble Methods (mentioning)
confidence: 99%
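For illustration only, the statement above can be read as a two-step workflow: aggregate repeated predictions into a final result, then manually group the misclassified examples to look for systematic trends. The sketch below assumes a scikit-learn setting with a toy dataset, majority-vote aggregation, and grouping errors by true class; these choices are illustrative assumptions, not the cited works' actual procedure.

```python
# Minimal error-analysis sketch (dataset, aggregation rule, and grouping are assumptions).
from collections import Counter

import numpy as np
from sklearn.datasets import load_iris
from sklearn.model_selection import train_test_split
from sklearn.tree import DecisionTreeClassifier

X, y = load_iris(return_X_y=True)
X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.3, random_state=0)

# Aggregate predictions from several runs and take a majority vote per example.
runs = []
for seed in range(5):
    clf = DecisionTreeClassifier(max_depth=3, random_state=seed)
    clf.fit(X_train, y_train)
    runs.append(clf.predict(X_test))
votes = np.array(runs)
final_pred = np.apply_along_axis(lambda col: Counter(col).most_common(1)[0][0], 0, votes)

# Error analysis: inspect misclassified examples and count which true classes
# the aggregated prediction tends to get wrong, exposing systematic trends.
errors = final_pred != y_test
print("misclassified:", errors.sum(), "of", len(y_test))
print("errors per true class:", Counter(y_test[errors]))
```

In practice the grouping key would be whatever case attribute the analyst suspects drives the errors, not just the true class.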
“…Analyzing and learning from prediction errors, a concern in many works [6], has mostly been applied to detecting and predicting incorrect predictions in order to minimize their cost, or to building a more accurate prediction model. Yet less work focuses directly on how to explain the prediction errors.…”
Section: Prediction Errors Analysis (mentioning)
confidence: 99%
“…Most work on evaluating the performance of predictive models has focused on improving the accuracy of the model rather than its interpretability [4]. This has led to increasingly complex classifiers such as ensembles [5], support vector machines [6], and kernel-based learning methods [7], known as black-box models, which tend to have high predictive accuracy but offer users little interpretability [8] [9,10]. On the other hand, white-box classifiers, such as decision trees, Naïve Bayes, k-nearest neighbors, and logistic regression, are more helpful to users in understanding the decisions made by the classifiers.…”
Section: Introduction (mentioning)
confidence: 99%
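The black-box versus white-box contrast described in this statement can be illustrated with a small sketch: a black-box ensemble compared against a white-box decision tree whose learned rules can be printed and read directly. The dataset, hyperparameters, and the choice of gradient boosting as the black-box model are assumptions made for the example, not taken from the cited papers.

```python
# Sketch of the accuracy-vs-interpretability contrast: a black-box ensemble vs.
# a white-box decision tree whose rules can be printed for inspection.
# Dataset and hyperparameters are illustrative assumptions.
from sklearn.datasets import load_wine
from sklearn.ensemble import GradientBoostingClassifier
from sklearn.model_selection import train_test_split
from sklearn.tree import DecisionTreeClassifier, export_text

data = load_wine()
X_train, X_test, y_train, y_test = train_test_split(
    data.data, data.target, test_size=0.3, random_state=0)

black_box = GradientBoostingClassifier(random_state=0).fit(X_train, y_train)
white_box = DecisionTreeClassifier(max_depth=3, random_state=0).fit(X_train, y_train)

print("ensemble accuracy:     ", black_box.score(X_test, y_test))
print("decision tree accuracy:", white_box.score(X_test, y_test))

# The white-box model's decisions can be read directly as if-then rules.
print(export_text(white_box, feature_names=list(data.feature_names)))
```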