2004
DOI: 10.1007/978-1-4419-8853-9
Introductory Lectures on Convex Optimization

Cited by 3,773 publications (4,866 citation statements); references 0 publications.
“…The theory and methods of unconstrained optimization are discussed in extensive detail in Dennis and Schnabel (1983), Gill, Murray and Wright (1981), Nash and Sofer (1996), Griva, Nash, and Sofer (2008), Nesterov (2004), and Nocedal and Wright (2006). For a guide to software for numerical optimization, see Moré and Wright (1993) and the online NEOS Optimization Software Guide.…”
Section: Discussion (mentioning)
Confidence: 99%
“…In this paper, we use an algorithm called NESTA to solve the convex optimization problem of (7). NESTA was developed by Stephen Becker et al. for solving large-scale problems based on Nesterov's work [48][49][50]. It is a fast and robust first-order method that solves minimum ℓ1 problems and a large number of extensions, including TV minimization, with the accelerated convergence rate of O(1/k²).…”
Section: Compressive Sensing Approach Applied to SAIRS (mentioning)
Confidence: 99%
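The O(1/k²) rate quoted above is the hallmark of Nesterov's accelerated gradient method. The following is a minimal generic sketch of that acceleration scheme applied to a smooth least-squares problem, not the NESTA solver itself (NESTA additionally handles ℓ1 and TV objectives via smoothing); the problem data here are illustrative:

```python
import numpy as np

def nesterov_accelerated_gradient(grad, x0, L, iters=500):
    """Nesterov's accelerated gradient method for a convex function
    with L-Lipschitz gradient; attains the O(1/k^2) rate on function
    values, versus O(1/k) for plain gradient descent."""
    x = y = np.asarray(x0, dtype=float)
    t = 1.0
    for _ in range(iters):
        x_next = y - grad(y) / L                       # gradient step from the extrapolated point
        t_next = (1.0 + np.sqrt(1.0 + 4.0 * t * t)) / 2.0
        y = x_next + ((t - 1.0) / t_next) * (x_next - x)  # momentum extrapolation
        x, t = x_next, t_next
    return x

# Illustrative smooth problem: minimize 0.5 * ||A x - b||^2.
# The gradient's Lipschitz constant is the largest eigenvalue of A^T A.
rng = np.random.default_rng(0)
A = rng.standard_normal((20, 5))
b = rng.standard_normal(20)
L = np.linalg.norm(A, 2) ** 2
grad = lambda x: A.T @ (A @ x - b)
x_star = nesterov_accelerated_gradient(grad, np.zeros(5), L)
```

The momentum coefficient (t - 1)/t_next is the classical choice from Nesterov's 1983 scheme; NESTA builds on the same acceleration idea but combines it with smoothing of the nonsmooth ℓ1/TV terms.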
“…However, it is not differentiable in general, which precludes gradient ascent methods for its maximization. We can, however, use the projected supergradient method [13].…”
Section: (6) (mentioning)
Confidence: 99%
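The projected supergradient method mentioned above handles concave objectives that are not differentiable: at each iterate one takes a step along any supergradient and projects back onto the feasible set. A minimal sketch with an illustrative piecewise-linear objective (a minimum of linear functions, maximized over a box) and diminishing step sizes; all problem data here are assumptions for the example:

```python
import numpy as np

def projected_supergradient(f_and_g, project, x0, iters=500):
    """Projected supergradient ascent for a concave, possibly
    nondifferentiable objective. The method is not monotone, so the
    best iterate seen so far is tracked and returned."""
    x = project(np.asarray(x0, dtype=float))
    best_f, best_x = f_and_g(x)[0], x
    for k in range(1, iters + 1):
        f, g = f_and_g(x)
        if f > best_f:                       # remember the best point visited
            best_f, best_x = f, x
        step = 1.0 / (np.sqrt(k) * max(np.linalg.norm(g), 1e-12))
        x = project(x + step * g)            # supergradient step, then projection
    return best_x, best_f

# Illustrative problem: maximize f(x) = min_i (a_i . x) over the box [0, 1]^2.
# f is concave and nondifferentiable where pieces cross; the gradient of an
# active (minimizing) piece is a valid supergradient.
A = np.array([[1.0, 0.0], [0.0, 1.0], [0.5, 0.5]])

def f_and_g(x):
    vals = A @ x
    i = np.argmin(vals)
    return vals[i], A[i]

project = lambda x: np.clip(x, 0.0, 1.0)     # Euclidean projection onto the box
x_best, f_best = projected_supergradient(f_and_g, project, np.zeros(2))
```

The 1/sqrt(k) step sizes are the standard non-summable, square-summable-in-spirit choice for subgradient-type methods; they guarantee convergence of the best objective value, though without the fast rates available to smooth first-order methods.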
“…Specifically, the method is easy to implement, but in general has worse convergence properties than first-order gradient-based methods. We refer the reader to [13,16] for details.…”
Section: (6) (mentioning)
Confidence: 99%