2021
DOI: 10.1117/1.jatis.7.3.039002

Self-optimizing adaptive optics control with reinforcement learning for high-contrast imaging

Cited by 24 publications (19 citation statements); references 0 publications.
Citation types: 0 supporting, 19 mentioning, 0 contrasting.
“…An up-and-coming field of research aimed at improving AO control methods is the application of fully data-driven control methods, where the control voltages are separately added to the learned control model (Nousiainen et al. 2021; Landman et al. 2020, 2021; Haffert et al. 2021a,b; Pou et al. 2022). A significant benefit of fully data-driven control in closed-loop is that it does not require an estimate of the system's open-loop temporal evolution and that it is, therefore, insensitive to pseudo-open-loop reconstruction errors, such as the optical gain effect (Haffert et al. 2021a).…”
Section: Introduction (mentioning)
Confidence: 99%
“…Previous work in RL-based adaptive optics control has focused on either controlling DM modes using model-free methods that learn a policy π_θ : s_t → a_t, parameterized by θ, that maps states s_t (or observations) into actions a_t directly (Landman et al. 2020, 2021; Pou et al. 2022), or using model-based methods that employ a planning step to compute actions (Nousiainen et al. 2021). The model-free methods have the advantage of being fast to evaluate, as the learned policies are often neural networks that support sub-millisecond inference.…”
Section: Introduction (mentioning)
Confidence: 99%
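The statement above describes the model-free setup: a neural-network policy π_θ that maps a wavefront-sensor observation s_t directly to a DM command a_t. The following is a minimal sketch of such a policy; the layer sizes, architecture, and the dimensions N_WFS and N_DM are illustrative assumptions, not the configuration of any cited paper.

```python
import torch
import torch.nn as nn

# Illustrative sizes (assumptions, not from the cited papers):
N_WFS = 400  # number of wavefront-sensor measurements in s_t
N_DM = 100   # number of controlled DM modes/actuators in a_t

# A model-free policy pi_theta: s_t -> a_t as a small feed-forward network.
policy = nn.Sequential(
    nn.Linear(N_WFS, 256),
    nn.ReLU(),
    nn.Linear(256, 256),
    nn.ReLU(),
    nn.Linear(256, N_DM),  # output layer produces the DM command a_t
)

# One inference step: map an observation to an action.
s_t = torch.randn(1, N_WFS)  # placeholder wavefront-sensor observation
with torch.no_grad():
    a_t = policy(s_t)        # forward pass; small networks like this
                             # support sub-millisecond inference
```

Because the learned policy is just a forward pass through a small network, evaluation cost is fixed and low, which is the speed advantage the statement contrasts with model-based methods that must run a planning step at each control cycle.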
“…[20-22] A promising area of research for mitigating these nonlinearities is the use of neural networks for learning a nonlinear mapping between wavefront sensor measurements and the wavefront, [23-26] or for nonlinear control. [27-30] Furthermore, the similarities between optical systems and neural networks have led to studies exploiting automatic differentiation algorithms, initially developed for training NNs, for optimizing elements in the optical system 31,32 or for more efficient wavefront control. 33,34 Automatic differentiation allows us to obtain gradients with respect to the free design parameters, even for complex optical systems with multiple elements and planes.…”
Section: Introduction (mentioning)
Confidence: 99%
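To make the last point concrete, here is a minimal sketch of automatic differentiation through a toy optical model: a pupil-plane phase screen (the free design parameter) is propagated to the focal plane with a Fourier transform, and reverse-mode autodiff returns the gradient of a focal-plane cost with respect to every phase value. The grid size, aperture, and cost function are assumptions for illustration, not the models used in the cited works.

```python
import torch

N = 64  # pupil grid size (illustrative)

# Circular aperture on a [-1, 1] grid.
yy, xx = torch.meshgrid(
    torch.linspace(-1, 1, N), torch.linspace(-1, 1, N), indexing="ij"
)
aperture = ((xx**2 + yy**2) <= 1.0).to(torch.complex64)

# Free design parameter: the pupil-plane phase, tracked for autodiff.
phase = torch.zeros(N, N, requires_grad=True)

def focal_plane_intensity(phase):
    field = aperture * torch.exp(1j * phase)           # pupil-plane field
    focal = torch.fft.fftshift(torch.fft.fft2(field))  # Fraunhofer propagation
    return focal.abs() ** 2

# Cost: energy outside the PSF core (a stand-in for a contrast metric).
intensity = focal_plane_intensity(phase)
core = intensity[N // 2 - 2 : N // 2 + 3, N // 2 - 2 : N // 2 + 3].sum()
cost = intensity.sum() - core

cost.backward()          # reverse-mode automatic differentiation
print(phase.grad.shape)  # gradient of the cost w.r.t. every phase value
```

The same mechanism scales to systems with multiple optical elements and planes: as long as each propagation step is written with differentiable operations, the gradient with respect to any free parameter comes out of a single backward pass.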