2016
DOI: 10.1016/j.cam.2015.07.033
Simultaneous single-step one-shot optimization with unsteady PDEs

Abstract: The single-step one-shot method has proven to be very efficient for PDE-constrained optimization where the partial differential equation (PDE) is solved by an iterative fixed point solver. In this approach, the simulation and optimization tasks are performed simultaneously in a single iteration. If the PDE is unsteady, finding an appropriate fixed point iteration is nontrivial. In this paper, we provide a framework that makes the single-step one-shot method applicable for unsteady PDEs that are solved by classi…
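The coupled iteration the abstract describes can be illustrated on a toy problem. This is a minimal sketch, not the paper's method: the quadratic objective and the contraction G below are hypothetical stand-ins for a PDE fixed point solver, chosen so that state, adjoint, and design all advance together in one loop.

```python
# One-shot (simultaneous) optimization on a toy model problem:
#   minimize  J(u, p) = 0.5*(u - 1)**2 + 0.5*p**2
#   subject to the fixed-point equation  u = G(u, p) = 0.5*u + p.
# Each iteration performs ONE state step, ONE adjoint step, and ONE
# approximate-gradient design step, all from the previous iterate.

def one_shot(alpha=0.1, iters=500):
    u, lam, p = 0.0, 0.0, 0.0
    for _ in range(iters):
        u_new = 0.5 * u + p              # state fixed-point step
        lam_new = 0.5 * lam + (u - 1.0)  # adjoint step: dG/du = 0.5, dJ/du = u - 1
        p_new = p - alpha * (p + lam)    # design step: dJ/dp + (dG/dp)*lam, dG/dp = 1
        u, lam, p = u_new, lam_new, p_new
    return u, lam, p

u, lam, p = one_shot()  # converges to u = 0.8, lam = -0.4, p = 0.4
```

At the coupled fixed point, u = 2p and lam = 2(u - 1), so the reduced gradient p + lam = 5p - 2 vanishes at p = 0.4; the point of the method is that none of the three subproblems is solved to completion before the others advance.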

Cited by 23 publications (8 citation statements)
References 23 publications
“…Through this method they show better sensitivity behavior and successful optimizations; they also applied this method to error estimation and adaptive mesh refinement [17]. This method inherits some of the characteristics shown in the work on one-shot optimization methods [5], including the guaranteed convergence of the tangent and adjoint problems, due to their similarity to the black box approach [21]. Padway and Mavriplis [16] showed by numerical experiment that for an approximately linearized quasi-Newton fixed point iteration, convergence of the nonlinear problem led to a decrease in the error from the approximate linearization.…”
Section: Introduction
confidence: 91%
“…Since we focused on preconditioning the in-domain Navier–Stokes control problem, we used a laminar flow model in IFISS to study the performance of the MSSS preconditioning technique. The next step in extending this research will be to apply the turbulent flow model to real-world wind farm control applications, for which recent developments in optimal control of the unsteady Reynolds-averaged Navier–Stokes (RANS) equations 74 are a good starting point for extending our preconditioning technique.…”
Section: Conclusion and Remarks
confidence: 99%
“…In brief, the one-shot method consists of simultaneously timestepping an underlying integrator, an adjoint solver, and a parameter update process. The use of approximate gradients and warm-starting leads to much faster algorithms [13,11], but these features also complicate the convergence analysis of optimization algorithms. In order to prove convergence, the relative time-scales of the auxiliary system (state and adjoint integration) and the parameter process (approximate gradient descent) need to be considered in analyzing these schemes.…”
Section: Related Work
confidence: 99%
“…In order to prove convergence, the relative time-scales of the auxiliary system (state and adjoint integration) and the parameter process (approximate gradient descent) need to be considered in analyzing these schemes. Several authors have described practical implementations of one-shot methods with dynamic time-scaling [11,16]. A convergence proof for a class of one-shot methods using adaptive time-scaling was given in [12].…”
Section: Related Work
confidence: 99%
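The time-scale separation these statements refer to can be sketched on a toy problem. This is a hypothetical illustration, not any cited scheme: the contraction and objective below are made up, and the parameter m controls how many auxiliary (state and adjoint) steps run per design update, i.e. the relative time-scale of the two processes.

```python
# Two-timescale one-shot iteration on a toy problem:
#   minimize  J(u, p) = 0.5*(u - 1)**2 + 0.5*p**2
#   subject to  u = G(u, p) = 0.5*u + p.
# The auxiliary system (state + adjoint) takes m fast steps for every
# single slow design step, mimicking dynamic time-scaling.

def two_timescale_one_shot(alpha=0.1, m=5, iters=200):
    u, lam, p = 0.0, 0.0, 0.0
    for _ in range(iters):
        for _ in range(m):                   # fast auxiliary process
            u = 0.5 * u + p                  # state step
            lam = 0.5 * lam + (u - 1.0)      # adjoint step
        p -= alpha * (p + lam)               # slow parameter process
    return u, lam, p

u, lam, p = two_timescale_one_shot()  # converges to u = 0.8, lam = -0.4, p = 0.4
```

With larger m the adjoint variable is closer to the exact sensitivity 2*(u - 1) when the design step is taken, so the parameter update uses a more accurate gradient at the cost of more auxiliary work per update; choosing this ratio is exactly the time-scaling question raised above.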