2001
DOI: 10.1016/s0005-1098(01)00122-4
Randomized algorithms for robust controller synthesis using statistical learning theory

Cited by 202 publications (100 citation statements) · References 20 publications
“…Thus, in finite time the control scheme is robust, and it progressively becomes better performing. The use of average control cost criteria was originally proposed in [11] in the context of robust control and then extended to the adaptive control context in [12]. In these references, randomized algorithms are used to make the minimization of the average control cost computationally tractable.…”
Citation type: mentioning · Confidence: 99%
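The statement above refers to minimizing an average control cost by randomized methods: the expectation over the uncertainty is replaced by a sample mean, and the empirical minimizer is taken over a set of candidate controllers. A minimal sketch of that idea follows; the scalar plant, the cost function, and the candidate grid are illustrative assumptions, not taken from the cited papers.

```python
import random

def empirical_average_cost(cost, candidates, samples):
    """Approximate the average cost E[J(theta, gamma)] for each candidate
    controller gamma by a sample mean over random uncertainties theta,
    then return the empirical minimizer and its empirical cost."""
    best_gamma, best_cost = None, float("inf")
    for gamma in candidates:
        avg = sum(cost(theta, gamma) for theta in samples) / len(samples)
        if avg < best_cost:
            best_gamma, best_cost = gamma, avg
    return best_gamma, best_cost

# Toy example: uncertain scalar gain theta ~ U[0.5, 1.5]; the cost
# J(theta, gamma) = (1 - theta * gamma)^2 penalizes closed-loop mismatch.
random.seed(0)
thetas = [random.uniform(0.5, 1.5) for _ in range(1000)]
gammas = [0.1 * k for k in range(1, 31)]
g_star, c_star = empirical_average_cost(
    lambda th, g: (1 - th * g) ** 2, gammas, thetas)
```

As the quoted statement notes, the result is not rigorously optimal: `g_star` minimizes only the empirical average, which converges to the true average cost as the number of samples grows.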
See 1 more Smart Citation
“…Thus, in finite time the control scheme is robust, and it progressively becomes better performing. The use of average control cost criteria was originally proposed in [11] in the context of robust control and then extended to the adaptive control context in [12]. In these references, randomized algorithms are used to make the minimization of the average control cost computationally tractable.…”
Section: ])mentioning
confidence: 99%
“…For many control objectives, the integrand function J(ϑ, γ) cannot be computed in a closed-form and even the evaluation of J(ϑ, γ) for a given pair (ϑ, γ) may be time consuming. The approach adopted here to overcome this difficulty follows to a large extent the ideas in [11,12] and is based on the use of randomized methods. The resulting minimizers are not rigorously optimal (i.e., they do not minimize E Pt [J(ϑ, γ)] with probability 1).…”
Section: Compute M_t and V_t Through the Kalman Filter Equations · Citation type: mentioning · Confidence: 99%
“…Ray and Stengel (1993); Tempo and Dabbene (1999); Vidyasagar and Blondel (2001) and Vidyasagar (2001). The main idea behind randomized methods is to associate a probability distribution to the uncertainty set, and to assess system performance in terms of empirical probability.…”
Section: A Probabilistic Framework for Uncertain Systems · Citation type: mentioning · Confidence: 99%
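The statement above describes the core of the probabilistic framework: assign a probability distribution to the uncertainty set and assess performance as an empirical probability, i.e. the fraction of sampled uncertainties for which the specification holds. A minimal sketch, where the stability test and the uncertainty distribution are illustrative assumptions:

```python
import random

def empirical_performance_probability(satisfies, samples):
    """Estimate P{performance spec holds} over the uncertainty set by the
    fraction of random uncertainty samples for which the spec is met."""
    return sum(1 for theta in samples if satisfies(theta)) / len(samples)

# Toy spec: the closed loop x[k+1] = (1 - 0.8 * theta) x[k] is stable
# when |1 - 0.8 * theta| < 1, i.e. theta in (0, 2.5); theta ~ U[-0.5, 3.0].
random.seed(1)
thetas = [random.uniform(-0.5, 3.0) for _ in range(20000)]
p_hat = empirical_performance_probability(
    lambda th: abs(1 - 0.8 * th) < 1, thetas)
```

For this toy distribution the true probability is 2.5/3.5 ≈ 0.714, and the empirical estimate concentrates around it as the sample count grows.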
“…E-mail addresses: calafiore@polito.it (G. Calafiore), dabbene@polito.it (F. Dabbene). Blondel, 2001; Vidyasagar, 2001). It turns out that problems that are computationally hard in a deterministic setting may be efficiently solved using randomized algorithms, if a certain probability of performance degradation is accepted (Khargonekar & Tikku, 1996; Tempo, Bai, & Dabbene, 1997; Vidyasagar, 1997).…”
Section: Introduction · Citation type: mentioning · Confidence: 99%
“…Classical robustness analysis usually considers the worst-case scenario of uncertainties, meaning that the stability and desired performance characteristics should always be satisfied with the parameters within the largest uncertain range. However, in practice, if the worst case occurs quite rarely, the designed controller is often too conservative and its performance is not satisfactory [10], [12]. On the other hand, by assuming a probabilistic distribution of the parameter uncertainties, the probability that a specific performance is satisfied can be evaluated by randomized algorithms.…”
Section: Algorithms for Jump Linear Systems · Citation type: mentioning · Confidence: 99%
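The probability estimate in the statement above comes with a standard sample-complexity guarantee: the additive Chernoff (Hoeffding) bound gives the number of i.i.d. samples needed so that the empirical probability is within a chosen accuracy of the true one, with a chosen confidence, independently of the size of the uncertainty set. A minimal sketch of that bound (the particular accuracy and confidence values are illustrative):

```python
import math

def chernoff_sample_size(epsilon, delta):
    """Additive Chernoff (Hoeffding) bound: with at least this many i.i.d.
    samples, the empirical probability deviates from the true probability
    by at most epsilon, with confidence at least 1 - delta."""
    return math.ceil(math.log(2.0 / delta) / (2.0 * epsilon ** 2))

# Accuracy 0.01 with confidence 1 - 1e-6: a few tens of thousands of
# samples suffice, regardless of the dimension of the uncertainty.
n = chernoff_sample_size(0.01, 1e-6)
```

The dimension-independence of this bound is what makes the randomized approach tractable where worst-case analysis is computationally hard.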