1994
DOI: 10.1109/72.286926

An iterative method for training multilayer networks with threshold functions

Abstract: Concerns the problem of finding weights for feed-forward networks in which threshold functions replace the more common logistic node output function. The advantage of such weights is that the complexity of the hardware implementation of such networks is greatly reduced. If the task to be learned does not change over time, it may be sufficient to find the correct weights for a threshold-function network off-line and to transfer these weights to the hardware implementation. This paper provides a mathematical fou…
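The abstract is cut off by the source page. To make the setting concrete, here is a minimal sketch of our own (the paper itself presents no code and these names are not from it): a "threshold network" replaces the smooth logistic node output with a hard limiter, so weights found off-line can be evaluated by simple comparator hardware.

```python
import numpy as np

def logistic(z):
    """Smooth node output function typically used during gradient training."""
    return 1.0 / (1.0 + np.exp(-z))

def hard_threshold(z):
    """Hard-limiting node output; cheap to realize in digital hardware."""
    return (z >= 0.0).astype(float)

def layer(x, W, b, act=hard_threshold):
    """One feed-forward layer. 'act' is swapped from logistic to
    hard_threshold after off-line training, keeping the same W and b."""
    return act(x @ W + b)
```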

Cited by 40 publications (25 citation statements, published 1996–2015). References 1 publication.
“…Approaching hard-limiting thresholds by increasing the gain of the activation functions (Corwin et al. 1994; Yu et al. 1994) is similar to multiplying the weights by a constant greater than one. In the final stage of the training process the activation functions can be replaced by a threshold if this does not cause a degradation in performance.…”
Section: Extensions and Applications of Theorem
Mentioning, confidence: 99%
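The equivalence this excerpt notes is easy to verify: since the logistic satisfies σ(β·wᵀx) = σ((βw)ᵀx), raising the gain β is the same as scaling the weights, and as β grows the output converges to a hard threshold. A quick numerical check (our own sketch; the helper names are not from the cited papers):

```python
import numpy as np

def logistic(z, gain=1.0):
    return 1.0 / (1.0 + np.exp(-gain * z))

rng = np.random.default_rng(0)
w = rng.normal(size=5)   # weights
x = rng.normal(size=5)   # input
beta = 10.0              # gain factor

# Raising the gain is equivalent to multiplying the weights by beta:
assert np.isclose(logistic(w @ x, gain=beta), logistic((beta * w) @ x))

# As the gain grows, the logistic approaches the hard-limiting threshold:
print(logistic(w @ x, gain=100.0), float(w @ x >= 0.0))
```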
“…not to change over time, in order to train the network "off-line" in a software simulation and later transfer it to the hardware [1,4]. But many real-life applications may not be static, i.e.…”
Section: Introduction
Mentioning, confidence: 99%
“…In this section, we present comparative results for the DE algorithm, and the algorithms proposed in [4], [14], [3], [6], which are denoted in the tables below as (GLO), (T), (GZ), and (MVGA), respectively.…”
Section: Experiments and Results
Mentioning, confidence: 99%
“…In [4], Corwin suggested training NDAs with progressively steeper analog functions to facilitate training. Thus, in his experiments he used values such as β ∈ {2, 3, 5, 10} to alter the shape of the sigmoid from time to time during training.…”
Section: Training Methods for Networks with Discrete Activations
Mentioning, confidence: 99%
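As a concrete illustration of such a gain schedule (a self-contained toy sketch of our own, not Corwin's actual procedure or data), one can train a single logistic unit while stepping β through {2, 3, 5, 10} and then evaluate the learned weights with a hard threshold:

```python
import numpy as np

def logistic(z, gain=1.0):
    return 1.0 / (1.0 + np.exp(-gain * z))

# Toy linearly separable problem: label is 1 when x0 + x1 > 0.
rng = np.random.default_rng(0)
X = rng.normal(size=(200, 2))
y = (X[:, 0] + X[:, 1] > 0).astype(float)

w = rng.normal(scale=0.1, size=2)
b, lr = 0.0, 0.1

for beta in (2, 3, 5, 10):          # progressively steeper sigmoids
    for _ in range(200):
        p = logistic(X @ w + b, gain=beta)
        err = beta * (p - y)        # d(cross-entropy)/d(pre-activation)
        w -= lr * X.T @ err / len(y)
        b -= lr * err.mean()

# Substitute a hard threshold for the sigmoid, keeping the same weights:
hard = (X @ w + b >= 0.0).astype(float)
print("accuracy with threshold activation:", (hard == y).mean())
```

In line with the replacement condition quoted in the first excerpt above, the threshold would be substituted only once it no longer degrades accuracy.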