A geometric method in nonlinear programming (1980)
DOI: 10.1007/bf00934495

Cited by 66 publications (77 citation statements) | References 24 publications
“…Due to condition C2 we have the complementarity condition x_i* v_i* = 0, 1 ≤ i ≤ n. Hence we conclude that the pair (x*, u*) defined by (51) forms a Kuhn–Tucker point in Problem (18).…”
Section: Barrier-Newton Methods for Linear Programming
confidence: 72%
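The complementarity condition quoted above can be checked numerically on a toy instance. The sketch below uses a hypothetical two-variable LP, not an instance from the paper; x_star, u_star, and v_star are hand-worked values for this toy problem, with v = c - Aᵀu playing the role of the dual slacks v* in the excerpt.

```python
# Hypothetical tiny LP (illustration only): minimize c^T x  s.t.  A x = b,  x >= 0.
c = [1.0, 2.0]
A = [[1.0, 1.0]]
b = [1.0]

# Optimum and multipliers worked by hand for this instance:
x_star = [1.0, 0.0]   # primal solution
u_star = [1.0]        # multiplier for the equality constraint
v_star = [c[i] - sum(A[k][i] * u_star[k] for k in range(len(A)))
          for i in range(len(c))]   # reduced costs v = c - A^T u

# Kuhn-Tucker check: primal feasibility, dual feasibility, complementarity.
assert all(abs(sum(A[k][i] * x_star[i] for i in range(len(c))) - b[k]) < 1e-12
           for k in range(len(A)))                 # A x* = b
assert all(xi >= 0 for xi in x_star)               # x* >= 0
assert all(vi >= 0 for vi in v_star)               # v* >= 0
assert all(abs(xi * vi) < 1e-12
           for xi, vi in zip(x_star, v_star))      # x_i* v_i* = 0
print("complementarity products:", [xi * vi for xi, vi in zip(x_star, v_star)])
```

Each product x_i* v_i* vanishes: either a variable is at its bound or its reduced cost is zero, which together with feasibility certifies the Kuhn–Tucker point.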
“…In this section we apply the barrier-Newton method (15) to solving the linear programming Problem (18). In this case we have…”
Section: Barrier-Newton Methods for Linear Programming
confidence: 99%
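The excerpt applies a barrier-Newton method to an LP; the paper's actual iteration (15) is not reproduced here. As a rough illustration of the general idea only, the sketch below runs undamped Newton steps on a one-dimensional log-barrier function c·x − μ·log(x), whose interior minimizer μ/c approaches the LP solution x* = 0 as μ shrinks. The function name and constants are invented for the example.

```python
# A minimal sketch of a barrier-Newton iteration (NOT the paper's method (15)):
# minimize c*x over x >= 0 via the log-barrier function
#   phi_mu(x) = c*x - mu*log(x),  whose interior minimizer is x = mu / c.
def newton_barrier(c, mu, x0, iters=20):
    x = x0
    for _ in range(iters):
        grad = c - mu / x        # phi_mu'(x)
        hess = mu / (x * x)      # phi_mu''(x), positive for x > 0
        x -= grad / hess         # undamped Newton step
    return x

# Start from an interior point inside the Newton basin of attraction;
# a practical method would damp the step and drive mu toward zero.
x = newton_barrier(c=2.0, mu=0.1, x0=0.02)
print(abs(x - 0.05) < 1e-9)  # iterates settle at mu/c = 0.05
```

With c = 2 and μ = 0.1 the iterates converge quadratically to μ/c = 0.05; rerunning with smaller μ moves the minimizer toward the LP optimum x* = 0.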
“…But their concerns were far removed from the complexity of optimization algorithms. Also, differential and Riemannian geometry have been studied in optimization in relation to the need to remain feasible; see, e.g., Tanabe [11] and Edelman et al. [1]. In this paper, however (at least in feasible methods), the algorithms move in the intersection of an affine manifold and the interior of the constraint set, and thus maintaining feasibility is not an issue.…”
Section: Introduction
confidence: 97%
“…Most of the available algorithms, such as the widely used distributed algorithms based on subgradient [14] and projected subgradient [15], are developed in discrete time, mainly due to the overwhelming ability of digital computers to execute the algorithms discretely. Recently, more and more distributed convex optimization algorithms have been explored in continuous time, since the continuous-time setup is favored for utilizing more techniques (the elegant Lyapunov argument in [4], for example) to prove algorithm convergence, and is beneficial for adopting a differential-geometry viewpoint, which is extremely powerful when the optimization is constrained (see, for example, [21]).…”
Section: Introduction
confidence: 99%