2013
DOI: 10.1007/978-3-8274-2949-0

Nichtlineare Optimierung

Cited by 11 publications (5 citation statements)
References 0 publications
“…(2.8) Assuming (2.8) to be a continuous and convex function, ρ* can be determined using numerical optimization strategies such as gradient descent [HS06; RHG13] or the Nelder-Mead method, also called the downhill simplex method [NM65; Pow62]. These iteratively estimate a function's global minimum (or maximum), much as Newton's method does [New67].…”
Section: Chapter 2 Probabilistic Sensor Models
confidence: 99%
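The gradient-descent iteration referenced above can be sketched in a few lines. This is a minimal illustration, not the cited work's implementation: the objective, step size, and tolerance are all illustrative assumptions.

```python
# Minimal sketch: estimating the minimiser rho* of a convex 1-D function
# by gradient descent, as referenced for (2.8). The function, step size,
# and tolerance are illustrative assumptions, not taken from the cited work.

def gradient_descent(grad, x0, step=0.1, tol=1e-8, max_iter=10_000):
    """Iterate x <- x - step * grad(x) until the update is negligible."""
    x = x0
    for _ in range(max_iter):
        x_new = x - step * grad(x)
        if abs(x_new - x) < tol:
            return x_new
        x = x_new
    return x

# Example: f(x) = (x - 3)^2 has gradient 2*(x - 3) and minimiser x = 3.
rho_star = gradient_descent(lambda x: 2.0 * (x - 3.0), x0=0.0)
print(round(rho_star, 4))  # close to 3.0
```

For non-smooth or gradient-free settings, the Nelder-Mead simplex method plays the analogous role, probing the function at simplex vertices instead of following a gradient.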
“…In an abstract sense, the Frobenius norm of a matrix A ∈ R^{n×m}_+ equals the Euclidean norm of a vector a ∈ R^{n·m}_+. More precisely, the Frobenius norm is the square root of the sum of all squared matrix elements (Reinhardt et al., 2013):…”
Section: Error Function
confidence: 99%
“…Besides this general definition, there exist alternative representations, among others one using the trace of a matrix (Reinhardt et al., 2013):…”
Section: Error Function
confidence: 99%
“…The optimisation problem that needs to be solved for each agent is a quadratic optimisation problem with complementarity constraints. The objective function and the constraints (4) and (5) are combined with a penalty factor r. This optimisation problem has the same optimum as the original optimisation problem for r → ∞ [23]. In [22], [24], [25] it was shown that the above approach converges despite the complementarity constraints, which violate the linear independence constraint qualification.…”
Section: Bargaining Algorithm
confidence: 99%
“…There are various ways to solve such problems, e.g. the interior point method, augmented Lagrangian methods, or the conditional gradient method [23], [26], [27]. In this paper, the conditional gradient method [28] is used because it can easily be obtained by extending the simplex algorithm, see e.g.…”
Section: Bargaining Algorithm
confidence: 99%
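The conditional gradient (Frank-Wolfe) method mentioned here solves a linear subproblem per step, which over a polytope reduces to picking a vertex, hence its affinity with simplex-type algorithms. A minimal sketch over the probability simplex; the quadratic objective and step-size rule are illustrative assumptions, not the paper's formulation.

```python
# Conditional gradient (Frank-Wolfe) sketch over the probability simplex:
# each step minimises a linearisation of the objective (its optimum is a
# simplex vertex) and moves toward that vertex with a diminishing step.

def frank_wolfe(grad, x0, iters=200):
    x = list(x0)
    for k in range(iters):
        g = grad(x)
        # Linear subproblem min_s <g, s> over the simplex: the optimum
        # is the vertex e_i with the smallest gradient component.
        i = min(range(len(g)), key=lambda j: g[j])
        gamma = 2.0 / (k + 2.0)  # standard diminishing step-size rule
        x = [(1.0 - gamma) * xj + (gamma if j == i else 0.0)
             for j, xj in enumerate(x)]
    return x

# Example: minimise sum_j (x_j - t_j)^2 over the simplex, where the
# target t = (0.5, 0.3, 0.2) already lies inside the simplex.
t = (0.5, 0.3, 0.2)
grad = lambda x: [2.0 * (x[j] - t[j]) for j in range(3)]
x = frank_wolfe(grad, x0=[1.0, 0.0, 0.0])
print([round(v, 3) for v in x])  # approaches t = (0.5, 0.3, 0.2)
```

Because every iterate is a convex combination of simplex vertices, the method keeps feasibility for free, which is what makes it attractive for constrained problems like the one above.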