2006
DOI: 10.1109/tnn.2006.879771

Locally Weighted Interpolating Growing Neural Gas

Abstract: In this paper, we propose a new approach to function approximation based on a growing neural gas (GNG), a self-organizing map (SOM) which is able to adapt to the local dimension of a possibly high-dimensional input distribution. Local models are built by interpolating between values associated with the map's neurons. These models are combined using a weighted sum to yield the final approximation value. The values, the positions, and the "local ranges" of the neurons are adapted to improve the approximation quality. […]
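The prediction step described in the abstract (per-neuron values combined by a distance-based weighted sum) can be sketched as follows. The Gaussian kernel, the function names, and all parameter values here are illustrative assumptions, not details taken from the paper:

```python
import numpy as np

def predict(x, positions, values, ranges):
    """Locally weighted combination of per-unit values (illustrative sketch).

    x         : (d,) query point
    positions : (n, d) neuron positions in input space
    values    : (n,) output value associated with each neuron
    ranges    : (n,) "local range" (width) of each neuron
    """
    # Squared distance from the query to every neuron position.
    d2 = np.sum((positions - x) ** 2, axis=1)
    # Gaussian weighting by local range (kernel choice is an assumption).
    w = np.exp(-d2 / (2.0 * ranges ** 2))
    # Normalized weighted sum of the neuron values.
    return float(np.dot(w, values) / np.sum(w))

# Toy usage: three neurons approximating f(x) = x on [0, 1].
pos = np.array([[0.0], [0.5], [1.0]])
val = np.array([0.0, 0.5, 1.0])
rng = np.array([0.3, 0.3, 0.3])
y = predict(np.array([0.25]), pos, val, rng)
```

In the full method the neuron positions, values, and ranges would themselves be adapted during training, and the map would grow; this sketch covers only the final weighted-sum evaluation.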

Cited by 11 publications (7 citation statements) | References 13 publications
“…4, this maneuver also violates both assumptions of small deviations and decoupled dynamics. Consequently, a classical controller (23) designed specifically for such flight conditions is suboptimal, as can be verified by evaluating the optimality condition (25). For comparison, the neural network controller is updated both with an unconstrained RPROP algorithm [17] and with a constrained RPROP algorithm implementing the adjoined gradient.…”
Section: Constrained Training Implementation and Results
confidence: 99%
“…The ideal control performance over [the trajectory] is stated through the optimality condition presented in [40] (25), where [the condition] is the discretized form of (20). This equation constitutes a local condition for optimality that must be satisfied along the system trajectory.…”
Section: Online Learning and Short-term Control Knowledge
confidence: 99%
“…GNG was originally meant to solve unsupervised learning problems (i.e., clustering and vector quantization); it was extended to supervised RBF networks [53,57] for the incremental generation of neuro-fuzzy systems [59].…”
Section: Model-building with a Modified Growing Neural Gas Network
confidence: 99%
“…They have been combined with locally weighted learning methods presented hereafter [36]. The resulting algorithm is endowed with interesting properties for incremental function approximation in a large domain, but to our knowledge it has not been applied to the identification of mechanical models.…”
Section: Self-organizing Maps
confidence: 99%