2005
DOI: 10.1007/11564096_70
Classification of Ordinal Data Using Neural Networks

Abstract: Many real-life problems require the classification of items into naturally ordered classes. These problems are traditionally handled by conventional methods for nominal classes, ignoring the order. This paper introduces a new training model for feedforward neural networks for multiclass classification problems where the classes are ordered. The proposed model has just one output unit, which takes values in the interval [0,1]; this interval is then subdivided into K subintervals (one for each class), ac…
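The single-output scheme described in the abstract can be sketched as follows. This is a minimal illustration assuming K equal-width subintervals of [0,1] map to the K ordered classes; the paper's exact subdivision and training rule may differ, and the function name is hypothetical.

```python
def output_to_class(y, K=4):
    """Map a single network output y in [0, 1] to one of K ordered
    classes by splitting [0, 1] into K equal subintervals.
    Illustrative sketch only; the cited model may subdivide differently."""
    # min() guards the boundary case y == 1.0, which would otherwise
    # index past the last subinterval.
    return min(int(y * K), K - 1)

# With K = 4, an output of 0.55 falls in the third subinterval (class 2).
print(output_to_class(0.55, K=4))  # -> 2
```

Because neighbouring subintervals correspond to neighbouring classes, small errors in the output value can only produce predictions close to the true class in the ordinal sense.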

Cited by 32 publications (27 citation statements) · References 2 publications
“…To adapt neural networks to the ordinal case structure, targets were reformulated following the OneVsFollowers approach and the prediction phase was accomplished by considering that, under its constrained entropic loss formulation, the output of the q-th output neuron estimates the probability that q and q − 1 events are both true. This methodology was further evaluated and compared in other works [64], [75], [76].…”
Section: Multiple-output Single Model Approaches (mentioning)
confidence: 99%
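The cumulative target reformulation the statement above describes can be sketched as follows. This is a generic cumulative ("OneVsFollowers"-style) encoding under the assumption that the q-th output estimates P(class > q); the cited works' exact loss and decoding may differ, and both function names are hypothetical.

```python
import numpy as np

def ordinal_targets(labels, K):
    """Cumulative target encoding for K ordered classes: a 0-based
    class k becomes a (K-1)-vector whose first k entries are 1.
    Illustrative sketch of the reformulation, not the cited works' exact scheme."""
    labels = np.asarray(labels)
    t = np.zeros((len(labels), K - 1))
    for q in range(K - 1):
        t[:, q] = (labels > q).astype(float)  # q-th target: is class beyond q?
    return t

def decode(outputs):
    """Predict the class as the number of outputs above 0.5, treating
    the q-th output as an estimate of P(class > q)."""
    return int(np.sum(np.asarray(outputs) > 0.5))

print(ordinal_targets([0, 2], K=4))  # class 0 -> [0,0,0]; class 2 -> [1,1,0]
print(decode([0.9, 0.8, 0.1]))       # -> 2
```

Under this encoding each output neuron solves a binary "is the class beyond rank q?" problem, which is what makes the outputs interpretable as joint probabilities of the ordered events.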
“…Based on the unimodal paradigm presented in [13], [14] we extended it onto the SVM context using all-at-once strategies. This paradigm states that the probabilities outputted by a prediction method should increase monotonically until reaching a maximum value and then decrease monotonically.…”
Section: Discussion (mentioning)
confidence: 99%
“…Here we recover the idea of the unimodal paradigm presented in [13], [14]. In the presence of a supervised multiclassification problem where the classes are ordered, like for instance the four classes in [15], Excellent > Good > Fair > Poor, if for a particular instance the class with highest a posteriori probability is Fair, then its neighbouring classes, Good and Poor, should have the second and third highest probabilities.…”
Section: Unimodal Paradigm (mentioning)
confidence: 99%
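The unimodal property described in the statements above — probabilities rising monotonically to a single peak and falling afterwards, as in the Excellent > Good > Fair > Poor example — can be checked with a short sketch. This is an illustrative test, not the cited papers' formal definition; ties are treated leniently here.

```python
def is_unimodal(probs):
    """Check whether a posterior over ordered classes rises to a single
    peak and then falls. Illustrative check; ties count as monotone."""
    peak = probs.index(max(probs))
    rising = all(probs[i] <= probs[i + 1] for i in range(peak))
    falling = all(probs[i] >= probs[i + 1] for i in range(peak, len(probs) - 1))
    return rising and falling

# Posterior over Poor < Fair < Good < Excellent, peaked at Fair:
print(is_unimodal([0.1, 0.5, 0.3, 0.1]))  # -> True
# A bimodal posterior violates the paradigm:
print(is_unimodal([0.4, 0.1, 0.4, 0.1]))  # -> False
```

In the four-class example from the statement, a peak at Fair with Good and Poor holding the next-highest probabilities is exactly the shape this predicate accepts.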