2014
DOI: 10.1016/j.neunet.2013.12.003

Lagrangian support vector regression via unconstrained convex minimization


Cited by 35 publications (5 citation statements)
References 17 publications
“…As an advantage, SVR can be applied not only to linear regression functions but also to nonlinear ones. Only the dot product of two vectors is required to solve the Lagrangian formulation and obtain f(x); dot products of vectors in a high-dimensional feature space can therefore be represented by kernel functions [51,52]. Common kernel functions include the radial basis, polynomial, and sigmoid functions [53].…”
Section: SVR
confidence: 99%
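The kernel trick described in the statement above can be sketched in a few lines of NumPy. As an illustration only, this uses kernel ridge regression (a closely related kernel method) rather than the SVR dual itself; the RBF kernel, synthetic sine data, and regularization value are assumptions, not taken from the cited papers.

```python
import numpy as np

def rbf_kernel(A, B, gamma=1.0):
    # K[i, j] = exp(-gamma * ||a_i - b_j||^2): an implicit dot product
    # in an infinite-dimensional feature space, never computed explicitly.
    d2 = ((A[:, None, :] - B[None, :, :]) ** 2).sum(-1)
    return np.exp(-gamma * d2)

rng = np.random.default_rng(0)
X = np.linspace(0, 2 * np.pi, 100)[:, None]
y = np.sin(X).ravel() + 0.05 * rng.standard_normal(100)

# Kernel ridge regression: training and prediction touch the data only
# through kernel evaluations, exactly as the quoted passage describes.
lam = 1e-3
K = rbf_kernel(X, X)
alpha = np.linalg.solve(K + lam * np.eye(len(X)), y)
pred = rbf_kernel(X, X) @ alpha
print(round(float(np.abs(pred - np.sin(X).ravel()).max()), 3))
```

Swapping `rbf_kernel` for a polynomial or sigmoid kernel changes only the K matrix; the rest of the fit is unchanged, which is the practical payoff of the kernel representation.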
“…Therefore, K‐StoNet is trained by solving a series of convex optimization problems. Note that the minimization in Equation (13) is known as a convex quadratic programming problem (Vapnik, 2000, 2013). Although solving the convex optimization problems is more expensive than a single gradient update, the IRO algorithm converges very fast, usually within tens of iterations. The major computational cost of K‐StoNet comes from the SVR step when the sample size is large.…”
Section: A Kernel‐expanded Stochastic Neural Network
confidence: 99%
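The convex quadratic programming view of SVR training mentioned above can be illustrated on a toy box-constrained QP, min 0.5 xᵀQx − pᵀx subject to 0 ≤ x ≤ C, of the same shape as an SVR dual. The matrix, vector, and bound here are arbitrary illustrative choices, and projected gradient descent is used as a minimal solver sketch, not the method of either cited paper.

```python
import numpy as np

# Toy convex QP: Q is positive definite, so the objective is convex and
# the box-constrained minimizer is unique.
Q = np.array([[2.0, 0.5], [0.5, 1.0]])
p = np.array([1.0, 1.0])
C = 1.0

x = np.zeros(2)
lr = 0.1  # step size below 2 / lambda_max(Q), so the iteration converges
for _ in range(2000):
    # Gradient step on 0.5 x'Qx - p'x, then projection onto the box [0, C]^2.
    x = np.clip(x - lr * (Q @ x - p), 0.0, C)
print(np.round(x, 3))
```

Because the unconstrained minimizer Q⁻¹p = (2/7, 6/7) already lies inside the box, the projected iterates converge to it; with an active bound the projection would pin that coordinate at 0 or C instead.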
“…The system input and output of this flexible robot arm are the measured reaction torque and the acceleration, respectively [27]. This flexible robot arm example has two attributes, and all 1024 pairs of data are shown in Figure 2.…”
Section: Robot Arm Example
confidence: 99%