2022 | DOI: 10.2514/1.g006806

Convex Approach to Covariance Control with Application to Stochastic Low-Thrust Trajectory Optimization

Cited by 24 publications (9 citation statements) | References 41 publications
“…While the equivalence between the state history feedback control policy in (44) and the disturbance feedback policy (9) has been established in Reference 32, we extend this result by showing that the auxiliary variable parametrization (51) is also equivalent to the other parametrizations, and we provide a bijective transformation between the parameters of the different policies. Proposition 7 summarizes the necessary and sufficient conditions for the control policies to generate the same input under the same w.…”
Section: Equivalence of Different Policy Parametrizations (supporting)
confidence: 68%
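The statement above contrasts three policy parametrizations without reproducing them. As a hedged sketch, the forms below are the standard ones from the covariance-steering literature; the symbols \bar{u}_k, K_{k,j}, L_{k,j}, P_k, and y_k are illustrative, and the equation numbers (9), (44), (51) refer to the citing paper, whose exact definitions are not shown here:

```latex
\begin{align}
u_k &= \bar{u}_k + \textstyle\sum_{j=0}^{k-1} K_{k,j}\, w_j
  && \text{(disturbance feedback)} \\
u_k &= \bar{u}_k + \textstyle\sum_{j=0}^{k} L_{k,j}\,(x_j - \bar{x}_j)
  && \text{(state-history feedback)} \\
u_k &= \bar{u}_k + P_k\, y_k, \qquad
  y_{k+1} = A_k y_k + w_k, \quad y_0 = x_0 - \bar{x}_0
  && \text{(auxiliary-variable feedback)}
\end{align}
```

Equivalence in this sense means that for any gains in one parametrization there exist gains in the others that produce the same input for every disturbance realization w.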
“…An additional important observation is that the auxiliary variable feedback policy provided in (51) with γ = 1 is identical to the state feedback policy when there is no noise, i.e., when W_t = 0 for all t. In contrast, the disturbance feedback control policy with γ = 1 corresponds to the open-loop control policy in this noise-free scenario, and consequently, the covariance cannot be steered by the disturbance feedback policy in this instance. The following proposition formally articulates the first assertion regarding the auxiliary feedback policy.…”
Section: Control Policy Parameterization Based on Truncated Histories (mentioning)
confidence: 99%
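The noise-free contrast can be illustrated numerically. The sketch below is a minimal numpy example with assumed values for A, B, K, and the initial covariance (a double-integrator-style system, not taken from the cited paper): with zero process noise, a disturbance feedback policy has no disturbances to react to and acts open loop, while a state (or auxiliary-variable) feedback policy still shapes the covariance through the closed-loop matrix A + BK.

```python
import numpy as np

# Illustrative system; A, B, K, Sigma0 are assumed example values.
A = np.array([[1.0, 1.0],
              [0.0, 1.0]])
B = np.array([[0.5],
              [1.0]])
K = np.array([[-0.4, -1.0]])   # illustrative feedback gain
Sigma0 = np.eye(2)             # nonzero initial covariance, no process noise (W = 0)

# With W = 0, every disturbance realization is w_j = 0, so a disturbance
# feedback policy reduces to its open-loop part: covariance propagates
# through A alone and cannot be steered.
open_loop_cov = A @ Sigma0 @ A.T

# A policy that feeds back the state deviation still acts on the initial
# uncertainty, so A + B K shapes the covariance.
closed_loop_cov = (A + B @ K) @ Sigma0 @ (A + B @ K).T

print("open-loop covariance trace:  ", np.trace(open_loop_cov))
print("closed-loop covariance trace:", np.trace(closed_loop_cov))
```

For these example values the feedback policy contracts the covariance (trace 1.05 versus 3.0 for the open-loop propagation), which is exactly the steering capability the disturbance feedback policy loses in the noise-free case.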
“…order to ensure that only the additive term v_k appears in the mean dynamics and overcome the bilinearities in the covariance constraint by utilizing the symmetry of the covariance and performing a change of variables to create a semidefinite program [9], [11], [21]. However, due to the state-dependent nature of the multiplicative disturbances, it is not possible to remove the feedback policy from all realizations of (13) because the state mean is not independent of the disturbances.…”
Section: B. Solution Methodology (mentioning)
confidence: 99%
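The change of variables mentioned above is the standard substitution U = KΣ from the covariance-steering literature. The numpy sketch below uses assumed random example matrices (A, B, K, Σ, W are illustrative, not from the cited paper) to verify that the bilinear closed-loop covariance update and its change-of-variables form agree; convexity then follows because the remaining U Σ⁻¹ Uᵀ term admits a Schur-complement reformulation as a semidefinite constraint.

```python
import numpy as np

rng = np.random.default_rng(0)

# Illustrative dimensions and example matrices.
n, m = 3, 2
A = rng.standard_normal((n, n))
B = rng.standard_normal((n, m))
K = rng.standard_normal((m, n))
M = rng.standard_normal((n, n))
Sigma = M @ M.T + n * np.eye(n)   # symmetric positive definite state covariance
W = 0.1 * np.eye(n)               # additive process noise covariance

# Bilinear form: covariance update under the policy u = v + K (x - x_bar).
Acl = A + B @ K
direct = Acl @ Sigma @ Acl.T + W

# Change of variables U = K Sigma: using the symmetry of Sigma,
# K Sigma = U, Sigma K^T = U^T, and K Sigma K^T = U Sigma^{-1} U^T,
# so the update becomes linear in (Sigma, U) except for U Sigma^{-1} U^T,
# which a Schur complement turns into a semidefinite constraint.
U = K @ Sigma
via_change_of_vars = (A @ Sigma @ A.T + B @ U @ A.T + A @ U.T @ B.T
                      + B @ U @ np.linalg.solve(Sigma, U.T) @ B.T + W)

print(np.allclose(direct, via_change_of_vars))  # the two forms agree
```

The gain is recovered afterwards as K = U Σ⁻¹, which is why the substitution loses nothing; the passage's point is that state-dependent multiplicative disturbances break this trick, since the mean dynamics then retain the feedback term.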
“…Note that γ*, ζ* is therefore a solution of Problem (21) for a particular linearization point, which we denote by γ*, ζ*. We introduce the following theorem regarding the validity of this solution, which is found using the convex local approximate problem (21), in relation to the original covariance steering problem (10). … (21), it is also a stationary point of Problem (16).…”
Section: B. Solution Methodology (mentioning)
confidence: 99%