2006
DOI: 10.1007/s10994-006-6266-6

Classification-based objective functions

Abstract: Backpropagation, similar to most learning algorithms that can form complex decision surfaces, is prone to overfitting. This work presents classification-based objective functions, an approach to training artificial neural networks on classification problems. Classification-based learning attempts to guide the network directly to correct pattern classification rather than using common error minimization heuristics, such as sum-squared error (SSE) and cross-entropy (CE), that do not explicitly minimize classific…

Cited by 23 publications (14 citation statements)
References 17 publications
“…Classification-based (CB) error functions [109] heuristically seek to directly minimize classification error by backpropagating network error only on misclassified patterns. In so doing, they perform relatively minimal updates to network parameters in order to discourage premature weight saturation and overfitting.…”
Section: Criterion Functions
confidence: 99%
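The masking idea in the snippet above — backpropagating error only on misclassified patterns — can be sketched as follows. This is an illustrative toy, not the paper's exact formulation; the function name `cb_error_signal` and the push-up/push-down heuristic for misclassified patterns are assumptions.

```python
# Sketch of a classification-based (CB-style) error signal:
# error is backpropagated only when the pattern is misclassified.
# Illustrative only; not the paper's exact CB1 formulation.

def cb_error_signal(outputs, target_class):
    """Return a per-output error vector, zeroed out when the pattern
    is already classified correctly (argmax matches the target)."""
    predicted = max(range(len(outputs)), key=lambda i: outputs[i])
    if predicted == target_class:
        return [0.0] * len(outputs)          # correct: no weight update
    # misclassified: push the target output up, the others down
    return [(1.0 if i == target_class else 0.0) - o
            for i, o in enumerate(outputs)]

# correctly classified pattern -> zero error, so minimal parameter change
print(cb_error_signal([0.2, 0.7, 0.1], 1))   # [0.0, 0.0, 0.0]
# misclassified pattern -> a nonzero error signal to backpropagate
print(cb_error_signal([0.6, 0.3, 0.1], 1))
```

Because correctly classified patterns contribute no gradient, updates stay relatively small, which is how these functions discourage premature weight saturation and overfitting.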
“…For regression tasks, in which the objective is to approximate the function of an arbitrary signal, this presumption often holds. However, this assumption may be invalid for some classification tasks, where other error metrics such as cross-entropy [38] or maximal class margin [39] may be more suited. Our paper focuses on regression-type problems.…”
Section: Overview Of Type-2 Fuzzy Logic Systems
confidence: 99%
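The snippet above contrasts squared-error minimization with classification-oriented metrics such as cross-entropy. A small worked comparison (illustrative values; the helper names are assumptions) shows why the distinction matters: cross-entropy penalizes a confidently wrong output far more sharply than SSE does.

```python
import math

def sse(outputs, targets):
    """Sum-squared error between network outputs and targets."""
    return sum((o - t) ** 2 for o, t in zip(outputs, targets))

def cross_entropy(outputs, targets):
    """Cross-entropy for a one-hot target; outputs assumed in (0, 1)."""
    return -sum(t * math.log(o) for o, t in zip(outputs, targets))

target    = [1.0, 0.0]
mild_miss = [0.4, 0.6]    # wrong, but uncertain
hard_miss = [0.01, 0.99]  # confidently wrong

# SSE grows modestly; cross-entropy blows up as confidence in the
# wrong class increases, so it punishes confident misclassifications.
print(sse(mild_miss, target), cross_entropy(mild_miss, target))
print(sse(hard_miss, target), cross_entropy(hard_miss, target))
```

Here SSE rises from about 0.72 to about 1.96, while cross-entropy jumps from about 0.92 to about 4.61.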
“…Prior work has shown [8][9][10] that methods of calculating softer values for each training pattern based on the network's output vector improve generalization and reduce variance on classification problems over a corpus of benchmark learning problems. One of these, called lazy training or CB1, focuses on classification accuracy and backpropagates an error signal through the network only when a pattern is misclassified.…”
Section: Motivation For CB3
confidence: 99%
“…Classification-based (CB) error functions [9,10] are a relatively new method of training multi-layer perceptrons. The CB functions heuristically seek to directly minimize classification error by backpropagating network error only on misclassified patterns.…”
Section: Introduction
confidence: 99%