Proceedings of the IEEE-INNS-ENNS International Joint Conference on Neural Networks (IJCNN 2000), Neural Computing: New Challenges and Perspectives for the New Millennium, 2000
DOI: 10.1109/ijcnn.2000.861312
A structure trainable neural network with embedded gating units and its learning algorithm

Abstract: Many problems solved by multilayer neural networks (MLNNs) are reduced to pattern mapping. If the mapping includes several different rules, it is difficult to solve these problems using a single MLNN with linear connection weights and continuous activation functions. In this paper, a structure trainable neural network is proposed. Gate units are embedded, which can be trained together with the connection weights. Pattern mapping problems, which include several different mapping rules, can be real…
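The paper's exact formulation is not reproduced in this record, so the following is only a minimal sketch of the idea the abstract describes: an MLP with embedded gate units trained jointly with the connection weights, so that different input regions can activate different sub-networks. The `GatedMLP` class, the sigmoid-multiplier gating, and the training loop are all illustrative assumptions, not the authors' algorithm.

```python
# Hedged sketch -- NOT the paper's method. Assumes the embedded "gate
# units" are sigmoid multipliers on hidden activations, trained together
# with the ordinary connection weights by backpropagation.
import torch
import torch.nn as nn

class GatedMLP(nn.Module):  # hypothetical name/architecture
    def __init__(self, n_in, n_hidden, n_out):
        super().__init__()
        self.hidden = nn.Linear(n_in, n_hidden)
        # Gate layer: a per-hidden-unit gate in (0, 1) computed from the
        # same input, so different regions can switch hidden units on/off.
        self.gate = nn.Linear(n_in, n_hidden)
        self.out = nn.Linear(n_hidden, n_out)

    def forward(self, x):
        h = torch.tanh(self.hidden(x))
        g = torch.sigmoid(self.gate(x))  # gates trained in the same step
        return self.out(g * h)           # gates select active hidden units

# Usage: fit a mapping with two conflicting rules (y = x for x < 0,
# y = -x for x >= 0), the kind of problem the abstract motivates.
torch.manual_seed(0)
x = torch.linspace(-1, 1, 200).unsqueeze(1)
y = torch.where(x < 0, x, -x)
model = GatedMLP(1, 16, 1)
opt = torch.optim.Adam(model.parameters(), lr=1e-2)
for _ in range(2000):
    opt.zero_grad()
    loss = nn.functional.mse_loss(model(x), y)
    loss.backward()
    opt.step()
print(f"final MSE: {loss.item():.5f}")
```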

Cited by 5 publications (1 citation statement) · References 8 publications
“…To address this, one may think to perform data-space analyses only once, seeking to derive all local linear regression models for the whole of the data space, and use the derived models for all queries. Indeed, a literature survey reveals several methods like [12,42], which identify the nonlinearity of data function g and provide multiple local linear approximations. Unfortunately, these methods are very computationally expensive and thus do not scale with the size n of the data points.…”
Section: Related Work
Citation type: mentioning · Confidence: 99%
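Neither cited method is reproduced below; this is only a hedged illustration of what "multiple local linear approximations" of a nonlinear data function g can look like, and why the cost of building them over the whole data space grows with the number of points n. The binning scheme, bin count, and function g are illustrative assumptions.

```python
# Hedged illustration (not from either paper): approximate a nonlinear
# function g with one ordinary least-squares line per region of the input
# space. Fitting every region up front touches all n points, which is the
# scalability concern the citing passage raises.
import numpy as np

rng = np.random.default_rng(0)
x = rng.uniform(-3, 3, size=(1000, 1))
y = np.sin(x[:, 0]) + 0.05 * rng.standard_normal(1000)  # nonlinear g + noise

# Partition the data space into equal-width bins (a stand-in for whatever
# region-finding step a real method would use).
n_bins = 6
edges = np.linspace(-3, 3, n_bins + 1)
bins = np.clip(np.digitize(x[:, 0], edges) - 1, 0, n_bins - 1)

models = []
for b in range(n_bins):
    xb, yb = x[bins == b], y[bins == b]
    A = np.hstack([xb, np.ones((len(xb), 1))])     # design matrix [x, 1]
    coef, *_ = np.linalg.lstsq(A, yb, rcond=None)  # local OLS fit
    models.append(coef)

def predict(x_query):
    # Route the query to its region's local linear model.
    b = int(np.clip(np.digitize(x_query, edges) - 1, 0, n_bins - 1))
    slope, intercept = models[b]
    return slope * x_query + intercept

print(predict(1.0), np.sin(1.0))  # local linear estimate vs. true value
```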