“…We discuss the relevant literature in these domains to contextualize our contributions. ACOPF: From an optimization perspective, a large body of work seeks to design provably convergent algorithms for ACOPF [7,12,21,23,25] and to numerically accelerate classical nonlinear optimization solvers through massively parallelized computation [3,11,13,18,29]. In the learning regime, two main lines of work are 1) learning an end-to-end mapping from the inputs of the ACOPF problem (e.g., load demands) to its outputs (e.g., optimal generator setpoints) [9,16,17,20], and 2) learning parameters and/or sub-steps within an optimization solver [4,19,27,28].…”
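To make the first line of work concrete, the following is a minimal sketch, assuming a PyTorch-style setup: a neural surrogate maps a load profile directly to candidate generator setpoints. The network architecture, the system dimensions (`n_bus`, `n_gen`), and the plain regression loss against precomputed solver labels are illustrative assumptions, not the specific designs of [9,16,17,20].

```python
# Hypothetical end-to-end ACOPF surrogate: maps bus load demands to
# generator setpoints. All names and dimensions are illustrative.
import torch
import torch.nn as nn

n_bus, n_gen = 30, 6  # assumed system size (e.g., an IEEE 30-bus-style case)

class ACOPFProxy(nn.Module):
    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(2 * n_bus, 128),   # input: [p_load; q_load] per bus
            nn.ReLU(),
            nn.Linear(128, 128),
            nn.ReLU(),
            nn.Linear(128, 2 * n_gen),   # output: [p_gen; v_set] per generator
        )

    def forward(self, load):
        return self.net(load)

model = ACOPFProxy()
load = torch.rand(64, 2 * n_bus)    # a batch of synthetic load profiles
pred = model(load)                  # predicted setpoints, shape (64, 2 * n_gen)

# Training here is plain regression against placeholder labels standing in for
# solver-generated optimal dispatches; the cited works may instead (or also)
# penalize violations of power-flow and operating constraints.
target = torch.rand(64, 2 * n_gen)
loss = nn.functional.mse_loss(pred, target)
loss.backward()
```

The second line of work would instead keep a conventional nonlinear solver in the loop and learn quantities such as warm starts, step sizes, or active-set predictions that accelerate it.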