1995
DOI: 10.1142/9789812795885_0018

A Neural Model for Category Learning


Years published: 1998–2014
Cited by 27 publications (30 citation statements)
References: 0 publications
“…In this way the network represents the first step in constructing a more interactive recognition system. In comparison to the nearest-neighbor rule (Cover and Hart, 1967) and the RCE network (Reilly et al., 1982; Scofield et al., 1987), our approach offers several advantages: it is computationally efficient, it is optimal in the Bayesian sense, and there are no external parameters associated with prototype regions. In addition, the algorithm for constructing prototype regions is extremely easy to implement and offers a unique partitioning of the feature space regardless of the order in which the training samples are presented to the system.…”
Section: Properties Of The System And Results (mentioning)
confidence: 99%
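For context on the comparison above: the RCE network of Reilly et al. (1982) builds exactly the kind of prototype regions the quote refers to, with per-prototype radii tuned during training (the external parameters the citing authors avoid). Below is a minimal sketch of an RCE-style training loop, assuming Euclidean distance; the initial radius cap `r_max` and the shrink margin `shrink_eps` are illustrative assumptions, not details from the cited papers.

```python
import numpy as np

def train_rce(X, y, r_max=1.0, shrink_eps=1e-6):
    """Minimal RCE-style training loop (after Reilly et al., 1982).

    Each prototype is (center, class label, radius). A sample not covered
    by a correct-class prototype spawns a new prototype; a wrong-class
    prototype that covers the sample has its radius shrunk to exclude it.
    r_max and shrink_eps are assumptions for illustration.
    """
    protos = []  # list of (center, label, radius)
    for x, label in zip(X, y):
        covered = False
        for i, (c, lab, r) in enumerate(protos):
            d = np.linalg.norm(x - c)
            if d < r:
                if lab == label:
                    covered = True
                else:
                    # shrink the conflicting prototype so it no longer covers x
                    protos[i] = (c, lab, max(d - shrink_eps, shrink_eps))
        if not covered:
            # new prototype: radius limited by the nearest other-class center
            other = [np.linalg.norm(x - c) for c, lab, _ in protos if lab != label]
            protos.append((x.copy(), label, min([r_max] + other)))
    return protos

def classify_rce(protos, x):
    """Labels of all prototypes whose sphere covers x (may be empty or ambiguous)."""
    return {lab for c, lab, r in protos if np.linalg.norm(x - c) < r}
```

Note how the final set of prototypes depends on the presentation order of the training samples, which is precisely the sensitivity the citing authors claim their own partitioning scheme avoids.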
“…(15). Among the methods listed, GDALR is the best when computation time, memory requirements, sensitivity to the initial centers and radii, and training and test performance are all considered together.…”
Section: Gradient Descent Solution For The Spherical Classifier Design (mentioning)
confidence: 99%
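The spherical classifier in question represents a class region as a hypersphere with a learnable center and radius, tuned by gradient descent; the quote's GDALR additionally adjusts the learning rate, which is not reproduced here. The following is only a minimal single-sphere sketch under an assumed hinge loss on the signed squared margin; the loss, initialization, and hyperparameters are all illustrative assumptions.

```python
import numpy as np

def fit_sphere_gd(X, y, lr=0.01, epochs=200):
    """Gradient-descent fit of one spherical classifier (center c, radius r).

    y must be in {+1, -1}: +1 samples should end up inside the sphere,
    -1 samples outside. Assumed hinge loss per sample:
        L = max(0, 1 - y * (r^2 - ||x - c||^2))
    """
    c = X[y == 1].mean(axis=0)  # initialize center at the positive-class mean
    r = 1.0                     # initial radius (assumption)
    for _ in range(epochs):
        for x, label in zip(X, y):
            margin = label * (r**2 - np.sum((x - c) ** 2))
            if margin < 1.0:                      # sample violates the margin
                c += lr * 2.0 * label * (x - c)   # pull center toward/away from x
                r += lr * 2.0 * label * r         # grow or shrink the radius
    return c, max(r, 0.0)

def predict_sphere(c, r, x):
    return 1 if np.sum((x - c) ** 2) <= r**2 else -1
```

The sensitivity to the initial center and radius mentioned in the quote is visible here: both the hinge updates and the final sphere depend directly on where `c` and `r` start.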
“…III) Depending on the choice of kernel and the kernel's parameters, the distance order among the samples may not be preserved when they are carried into the feature space by the nonlinear mapping defined by the chosen kernel [12][13][14][15]. This means that the inverse image of the optimal separating hyperplane found in the feature space may be far from the optimal separating surface in the input data space.…”
Section: Introduction (mentioning)
confidence: 99%
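The point about distance order is easy to demonstrate with the kernel trick, since squared feature-space distances follow from kernel evaluations alone. The degree-2 homogeneous polynomial kernel and the specific points below are illustrative choices, not taken from the cited works.

```python
import numpy as np

def kernel_dist2(k, x, y):
    """Squared feature-space distance via the kernel trick:
    ||phi(x) - phi(y)||^2 = k(x,x) - 2*k(x,y) + k(y,y)."""
    return k(x, x) - 2.0 * k(x, y) + k(y, y)

# Homogeneous polynomial kernel of degree 2 (illustrative choice).
poly2 = lambda u, v: np.dot(u, v) ** 2

x, a, b = np.array([1.0]), np.array([2.0]), np.array([-1.0])

# Input space: a is closer to x than b is.
print(np.linalg.norm(x - a), np.linalg.norm(x - b))          # 1.0  2.0
# Feature space: the order flips -- b coincides with x, a is far away.
print(kernel_dist2(poly2, x, a), kernel_dist2(poly2, x, b))  # 9.0  0.0
```

Here the mapping is phi(u) = u^2, so the points 1 and -1 collapse onto the same image while 1 and 2 are pushed apart, reversing the nearest-neighbor order exactly as the quote warns.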
“…Step 4: The back-propagation learning algorithm (Mandic, 2001; Reilly, 1982) is selected for training the network.…”
Section: 1.1 Prediction Of Thermo-Physical Properties (mentioning)
confidence: 99%
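Back-propagation itself is standard; as a reference point, here is a minimal one-hidden-layer network trained by plain gradient descent on squared error, suitable for a regression task such as property prediction. The layer size, tanh activation, and learning rate are assumptions for illustration and are not taken from the citing paper.

```python
import numpy as np

def train_mlp(X, y, hidden=8, lr=0.01, epochs=1000, seed=0):
    """Minimal one-hidden-layer MLP trained by back-propagation
    (plain gradient descent on mean squared error). All hyperparameters
    are illustrative assumptions."""
    rng = np.random.default_rng(seed)
    W1 = rng.normal(0.0, 0.5, (X.shape[1], hidden))
    b1 = np.zeros(hidden)
    W2 = rng.normal(0.0, 0.5, (hidden, 1))
    b2 = np.zeros(1)
    for _ in range(epochs):
        # forward pass
        h = np.tanh(X @ W1 + b1)           # hidden activations
        out = h @ W2 + b2                  # linear output (regression)
        err = out - y.reshape(-1, 1)       # gradient of 0.5*(out - y)^2 wrt out
        # backward pass: propagate the error gradient layer by layer
        dW2 = h.T @ err / len(X)
        db2 = err.mean(axis=0)
        dh = (err @ W2.T) * (1.0 - h**2)   # tanh'(z) = 1 - tanh(z)^2
        dW1 = X.T @ dh / len(X)
        db1 = dh.mean(axis=0)
        # gradient-descent updates
        W1 -= lr * dW1; b1 -= lr * db1
        W2 -= lr * dW2; b2 -= lr * db2
    return W1, b1, W2, b2
```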