2015
DOI: 10.5120/ijca2015907021

Some Theorems for Feed Forward Neural Networks

Abstract: This paper introduces a new method which employs the concept of "Orientation Vectors" to train a feed forward neural network. It is shown that this method is suitable for problems where large dimensions are involved and the clusters are characteristically sparse. For such cases, the new method is not NP hard as the problem size increases. We 'derive' the present technique by starting from Kolmogorov's method and then relaxing some of its stringent conditions. It is shown that for most classification problems three…


Cited by 4 publications (3 citation statements) | References 24 publications
“…By using the above method we are able to obtain planes which separate clusters after they have been discovered by the "Cluster Discovery" algorithm. By using the methods described in Ref [18] we will be able to determine the neural architecture which can classify the clusters. Very importantly, in our case the coefficients of all the planes are already known, hence the weights of the processing elements are already determined.…”
Section: Methods 2: Classification By Using a Cluster Discovery Algor…
confidence: 99%
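The idea in this citation statement — that separating-plane coefficients found by a cluster-discovery step can be used directly as first-layer weights, with no training — can be sketched as follows. This is a minimal illustration, not the cited paper's algorithm: the two planes in 2-D and their coefficients are hypothetical placeholders.

```python
import numpy as np

# Hypothetical separating planes in 2-D, assumed already discovered by a
# cluster-discovery step: x0 = 0.5 and x1 = 0.5. Their coefficients become
# the first-layer weights and biases directly; nothing is learned here.
W = np.array([[1.0, 0.0],   # plane 1: 1*x0 + 0*x1 - 0.5 = 0
              [0.0, 1.0]])  # plane 2: 0*x0 + 1*x1 - 0.5 = 0
b = np.array([-0.5, -0.5])

def region_code(x):
    """First layer: hard-threshold units whose weights are the plane
    coefficients. The output bits say on which side of each plane x lies,
    which identifies the region holding a cluster."""
    return (W @ x + b > 0).astype(int)

# Points in different quadrants of the unit square get distinct codes.
print(region_code(np.array([0.9, 0.9])))  # [1 1]
print(region_code(np.array([0.1, 0.9])))  # [0 1]
```

Subsequent layers would then only need to map these fixed region codes to class labels, which is the sense in which the weights of the plane-detecting processing elements are "already determined".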
“…A perceptron with a single hidden layer is sufficient for any classification problem, and several studies confirm this [25]. However, for a continuous function approximation problem, one hidden layer did not prove to be better than a second-degree polynomial [23].…”
Section: Perceptron Configuration
confidence: 99%
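The single-hidden-layer sufficiency claim can be made concrete with the classic XOR mapping, which no single linear unit can realize but one hidden layer can. The weights below are hand-set for illustration, not drawn from either cited paper.

```python
import numpy as np

def step(z):
    # Hard-threshold activation.
    return (z > 0).astype(int)

# One hidden layer with two units: h1 computes OR, h2 computes AND.
W1 = np.array([[1.0, 1.0],    # h1: x0 + x1 - 0.5 > 0  (OR)
               [1.0, 1.0]])   # h2: x0 + x1 - 1.5 > 0  (AND)
b1 = np.array([-0.5, -1.5])
w2 = np.array([1.0, -1.0])    # output: fires when OR and not AND
b2 = -0.5

def xor_net(x):
    h = step(W1 @ x + b1)
    return int(w2 @ h + b2 > 0)

for x in ([0.0, 0.0], [0.0, 1.0], [1.0, 0.0], [1.0, 1.0]):
    print(x, xor_net(np.array(x)))
```

The network outputs 0, 1, 1, 0 for the four corners, i.e. exactly XOR, a non-linearly-separable classification solved with one hidden layer.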
“…It can avoid manual extraction of expert features [5]. Eswaran et al [6] proved that a classification problem is always solvable with a suitable feed forward neural network model containing three hidden layers. So, the structure of the neural network model is shown in Figure 1.…”
Section: Introduction
confidence: 99%
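A three-hidden-layer feed-forward model of the kind this citing paper adopts can be sketched as a forward pass. The layer sizes and ReLU activation below are arbitrary placeholders, not taken from the cited paper's Figure 1.

```python
import numpy as np

rng = np.random.default_rng(0)

def relu(z):
    return np.maximum(z, 0.0)

# Skeleton of a feed-forward classifier with three hidden layers.
# Sizes 8 -> 16 -> 16 -> 16 -> 3 are illustrative only.
sizes = [8, 16, 16, 16, 3]
params = [(rng.standard_normal((m, n)) * 0.1, np.zeros(m))
          for n, m in zip(sizes[:-1], sizes[1:])]

def forward(x):
    h = x
    for i, (W, b) in enumerate(params):
        z = W @ h + b
        h = relu(z) if i < len(params) - 1 else z  # linear output layer
    return h

logits = forward(rng.standard_normal(8))
print(logits.shape)  # (3,)
```

Training (e.g. by backpropagation) is omitted; the sketch only fixes the architecture the statement refers to: input, three hidden layers, output.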