1999
DOI: 10.1109/3477.740171

Linear Hopfield networks and constrained optimization

Abstract: It is shown that a Hopfield neural network (with linear transfer functions) augmented by an additional feedforward layer can be used to compute the Moore-Penrose generalized inverse of a matrix. The resultant augmented linear Hopfield network can be used to solve an arbitrary set of linear equations or, alternatively, to solve a constrained least squares optimization problem. Applications in signal processing and robotics are considered. In the former case the augmented linear Hopfield network is used to estim…
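
The excerpt does not reproduce the paper's augmented architecture, but the underlying idea can be sketched: a linear Hopfield network relaxing under the gradient flow dx/dt = -Aᵀ(Ax - b) settles at the least-squares solution x* = A⁺b, the Moore-Penrose solution. A minimal NumPy sketch, with A, b, and the step size all assumed for illustration:

```python
import numpy as np

# Hedged sketch, not the paper's exact network: simulate the gradient
# flow dx/dt = -A.T (A x - b), whose unique fixed point is pinv(A) @ b.
A = np.array([[1.0, 0.0],
              [0.0, 2.0],
              [1.0, 1.0]])          # assumed overdetermined system
b = np.array([1.0, 2.0, 3.0])

x = np.zeros(2)
eta = 0.1                           # step size; must be < 2 / lambda_max(A.T A)
for _ in range(500):
    x -= eta * A.T @ (A @ x - b)    # discretized linear Hopfield dynamics

print(np.allclose(x, np.linalg.pinv(A) @ b))  # True: dynamics reach pinv(A) b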

Cited by 43 publications (17 citation statements)
References 9 publications

“…A Hopfield network with linear transfer functions augmented by an additional feedforward layer can be used to solve a set of linear equations [62] and to compute the pseudoinverse of a matrix [39]. The resultant augmented linear Hopfield network can be used to solve constrained LS optimization problems.…”
Section: Solving Other Optimization Problems (mentioning)
confidence: 99%
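
The constrained least-squares problem mentioned in this citation takes the form min ‖Ax - b‖² subject to Cx = d. A hedged sketch below solves it directly through the KKT linear system rather than by simulating network dynamics; all matrices and dimensions are assumed for illustration:

```python
import numpy as np

# Equality-constrained least squares: min ||A x - b||^2  s.t.  C x = d.
# Stationarity: 2 A.T A x + C.T lam = 2 A.T b;  feasibility: C x = d.
rng = np.random.default_rng(1)
A, b = rng.standard_normal((6, 4)), rng.standard_normal(6)
C, d = rng.standard_normal((2, 4)), rng.standard_normal(2)

K = np.block([[2 * A.T @ A, C.T],
              [C, np.zeros((2, 2))]])   # KKT matrix
rhs = np.concatenate([2 * A.T @ b, d])
x, lam = np.split(np.linalg.solve(K, rhs), [4])

print(np.allclose(C @ x, d))            # True: the equality constraint holds
```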
“…Nonlinear optimization problems are nonetheless quadratic to second order in the local vicinity of the optimum. Therefore, Quadratic Programming (QP), which finds the minima/maxima of quadratic functions of variables subject to linear constraints [183], becomes an effective first pass at such problems, and can be applied to a wide array of applications. For example, many machine learning problems, such as support vector machine (SVM) training and least squares regression, can be reformulated in terms of a QP problem.…”
Section: Nonlinear Programming (mentioning)
confidence: 99%
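
As a concrete instance of the quadratic program described in this citation, the sketch below minimizes a small convex quadratic under linear inequality constraints, using SciPy's general-purpose SLSQP routine as a stand-in for a dedicated QP solver; Q, c, and the constraints are assumed for illustration, not taken from [183]:

```python
import numpy as np
from scipy.optimize import minimize

# Illustrative QP:  minimize 0.5 x^T Q x + c^T x
#                   subject to x >= 0 and x0 + x1 <= 1.
Q = np.array([[2.0, 0.5],
              [0.5, 1.0]])              # symmetric positive definite
c = np.array([-1.0, -1.0])

res = minimize(
    lambda x: 0.5 * x @ Q @ x + c @ x,
    x0=np.zeros(2),
    method="SLSQP",                     # general NLP solver used for the demo
    constraints=[
        {"type": "ineq", "fun": lambda x: x},             # x >= 0
        {"type": "ineq", "fun": lambda x: 1.0 - x.sum()}  # x0 + x1 <= 1
    ],
)
print(res.x)                            # constrained minimizer on the boundary
```

Here the unconstrained minimizer violates x0 + x1 <= 1, so the solver returns a point on that constraint, which is what makes the example informative.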
“…In all these situations, numerical methods are usually required. These methods include cyclic coordinate descent methods [11], the Levenberg-Marquardt damped least squares methods [12], [13], quasi-Newton and conjugate gradient methods [14], [11], [15], neural network and artificial intelligence methods [16], [17], [18], [19], [20], [21], genetic algorithms [22], pseudo-inverse methods [23] and Jacobian transpose methods [24], [25].…”
Section: Background and Related Work (mentioning)
confidence: 99%
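
To make the Jacobian transpose method [24], [25] concrete, here is a hedged sketch for a hypothetical two-link planar arm with unit link lengths; the target point, initial angles, and step size are all assumed:

```python
import numpy as np

# Jacobian-transpose inverse kinematics: gradient descent on 0.5*||e||^2
# via the update  theta <- theta + alpha * J(theta).T @ (target - fk(theta)).

def fk(theta):
    """Forward kinematics: end-effector position of the 2-link arm."""
    return np.array([np.cos(theta[0]) + np.cos(theta[0] + theta[1]),
                     np.sin(theta[0]) + np.sin(theta[0] + theta[1])])

def jac(theta):
    """Jacobian of fk with respect to the joint angles."""
    s1, s12 = np.sin(theta[0]), np.sin(theta[0] + theta[1])
    c1, c12 = np.cos(theta[0]), np.cos(theta[0] + theta[1])
    return np.array([[-s1 - s12, -s12],
                     [ c1 + c12,  c12]])

target = np.array([1.0, 1.0])       # assumed reachable workspace point
theta = np.array([0.3, 0.3])        # assumed initial joint angles
alpha = 0.1                         # assumed step size

for _ in range(2000):
    theta += alpha * jac(theta).T @ (target - fk(theta))

print(np.allclose(fk(theta), target, atol=1e-4))  # True at convergence
```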