1996
DOI: 10.1016/0893-6080(95)00081-x

A Numerical Implementation of Kolmogorov's Superpositions

Abstract: Hecht-Nielsen proposed a feedforward neural network based on Kolmogorov's superpositions $f(x_1, \ldots, x_n) = \sum_{q=0}^{2n} \Phi_q(y_q)$, which apply to all real-valued continuous functions $f(x_1, \ldots, x_n)$ defined on a Euclidean unit cube of dimension $n \geq 2$. This network has a hidden layer that is independent of $f$ and that transforms the $n$-tuples $(x_1, \ldots, x_n)$ into the $2n + 1$ variables $y_q$, and an output layer in which $f$ is computed. Kůrková has shown that such a network has a…
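
As a rough illustration of the architecture the abstract describes, here is a minimal Python sketch of the two-layer structure: a hidden layer, independent of f, maps (x_1, …, x_n) to the 2n + 1 variables y_q, and an output layer sums Phi_q(y_q). The inner form y_q = Σ_p λ_p ψ(x_p + qa) follows Sprecher-style constructions; the functions psi and Phi and the constants lambdas and a are placeholders here, not the paper's actual definitions.

```python
def kolmogorov_superposition(x, psi, lambdas, a, Phi):
    """Evaluate f(x) = sum_{q=0}^{2n} Phi_q(y_q) for x in [0, 1]^n.

    psi, lambdas, a, and Phi are caller-supplied placeholders; the
    paper constructs specific ones so that the hidden layer below
    is independent of the target function f.
    """
    n = len(x)
    # Hidden layer: transform the n-tuple into the 2n + 1 variables y_q.
    y = [sum(lambdas[p] * psi(x[p] + q * a) for p in range(n))
         for q in range(2 * n + 1)]
    # Output layer: f is computed as a sum of the outer functions.
    return sum(Phi[q](y[q]) for q in range(2 * n + 1))
```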

Cited by 61 publications (35 citation statements); References 4 publications

“…Besides, the MLP agent was trained taking into account the inputs for each of the sensors and with a value of −1 for the rest of the inputs. The hidden layer of the MLP is designed with 2n + 1 [41] neurons, where n is the number of neurons in the input layer. The output layer is composed of three neurons that correspond to the coordinates (x, y, z).…”
Section: Results and Conclusion (mentioning, confidence: 99%)
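
The sizing rule quoted above is the Hecht-Nielsen/Kolmogorov choice of 2n + 1 hidden units for n inputs. A minimal PyTorch sketch of such an agent follows; only the layer widths come from the quoted description, while the tanh activation and the framework itself are assumptions.

```python
import torch.nn as nn

def build_mlp_agent(n_inputs: int) -> nn.Sequential:
    """MLP sized per the quoted description: n inputs, 2n + 1 hidden
    neurons, and three outputs for the predicted coordinates."""
    n_hidden = 2 * n_inputs + 1   # Kolmogorov / Hecht-Nielsen sizing rule
    return nn.Sequential(
        nn.Linear(n_inputs, n_hidden),
        nn.Tanh(),                # activation is an assumption here
        nn.Linear(n_hidden, 3),   # (x, y, z) coordinate outputs
    )
```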
“…Decision-tree methods and greedy search heuristics are used to construct g_i and h_ij functions based on training data in an attempt to make the superposition equation a good predictor. The approach contrasts with previous work on direct application of the superposition theorem (Neruda, Štědrý, & Drkošová, 2000; Sprecher, 1996, 1997, 2002). One difficulty with direct application is that the g_i and h_ij functions that need to be constructed are extremely complex and entail very large computational overheads to implement, even when the target function is known (Neruda, Štědrý, & Drkošová, 2000).…”
Section: Introduction (mentioning, confidence: 52%)
“…In order to avoid local minima and saddle points entirely, additional research is needed to further improve the transform regression algorithm. Several authors (Kůrková, 1991, 1992; Neruda, Štědrý, & Drkošová, 2000; Sprecher, 1996, 1997, 2002) have been investigating ways of overcoming the computational problems of directly applying Kolmogorov's theorem. Given the strength of the results obtained here using the form of the superposition equation alone, research aimed at creating a combined approach could potentially be quite fruitful.…”
Section: Discussion (mentioning, confidence: 99%)
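
The superposition equation referred to in these excerpts has the form f_hat(x) = Σ_i g_i(Σ_j h_ij(x_j)), with every g_i and h_ij a one-dimensional function. Below is a minimal sketch of evaluating a predictor of that shape; the fitting step (decision trees plus greedy search in the cited work) is omitted, and g and h are placeholder callables.

```python
def superposition_predict(x, g, h):
    """Evaluate f_hat(x) = sum_i g_i( sum_j h_ij(x_j) ).

    x : sequence of n feature values
    g : list of fitted 1-D outer functions g_i (placeholders here)
    h : nested list where h[i][j] is the 1-D inner function h_ij
    """
    return sum(
        g_i(sum(h_ij(x_j) for h_ij, x_j in zip(h_i, x)))
        for g_i, h_i in zip(g, h)
    )
```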
“…We will now describe such a process. We start with a certain number of clusters in an initial set, say N = N_0, belonging to the actual configuration of N = N_f clusters, and then choose an initial set of planes q = q_0 to separate these N_0; as a start we will assume N_0 << 2^{q_0}. We will then include more clusters into this set, at the same time choosing an additional plane (or planes) to separate the newly arriving clusters from one another and from those already in this set.…”
Section: The Method of Orientation Vectors Is Not NP-Hard (mentioning, confidence: 99%)
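
The assumption N_0 << 2^{q_0} reflects the fact that q separating hyperplanes assign each point a q-bit sign code, so at most 2^q clusters can be distinguished. A small sketch illustrating the codes (with random placeholder planes, not the paper's chosen orientation vectors):

```python
import numpy as np

def sign_codes(points, W, b):
    """points: (m, d) array; W: (q, d) plane normals; b: (q,) offsets.
    Returns an (m, q) 0/1 array: one sign bit per hyperplane, so at
    most 2**q distinct codes (hence the N_0 << 2^{q_0} assumption)."""
    return (points @ W.T + b > 0).astype(int)

rng = np.random.default_rng(0)
pts = rng.random((5, 3))          # 5 points in 3-D
W = rng.normal(size=(4, 3))       # 4 random placeholder planes
b = rng.normal(size=4)
print(sign_codes(pts, W, b))      # each row: a 4-bit cluster code
```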