“…Recently, [2,6] showed how to efficiently differentiate through convex cone programs by applying the implicit function theorem to a residual map introduced in [27], and [1] showed how to differentiate through convex optimization problems via an automatable reduction to convex cone programs; our method for learning convex optimization models builds on this line of work. Optimization layers have been used in many applications, including control [7,11,15,3], game-playing [46,45], computer graphics [37], combinatorial tasks [58,52,53,21], automatic repair of optimization problems [14], and data fitting more generally [9,17,16,10]. Differentiable optimization for nonconvex problems is often performed numerically, by differentiating each step of an iterative solver (unrolling) [33,48,32,36], although it is sometimes done implicitly; see, e.g., [7,47,4].…”
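To make the implicit-function-theorem viewpoint concrete, the sketch below is a simplified illustration (not the method of the cited works): it differentiates the solution of a ridge-regularized least-squares problem with respect to the regularization weight by applying the implicit function theorem to the stationarity residual. The problem choice and all names (e.g., `solve_and_grad`, `theta`) are hypothetical and chosen only for illustration.

```python
import numpy as np

def solve_and_grad(A, b, theta):
    """Solve x(theta) = argmin_x ||A x - b||^2 + theta ||x||^2 and return
    both the solution and dx/dtheta via the implicit function theorem."""
    n = A.shape[1]
    H = A.T @ A + theta * np.eye(n)    # Hessian of the objective in x (up to a factor of 2)
    x = np.linalg.solve(H, A.T @ b)    # stationarity: H x = A^T b
    # Residual map R(x, theta) = (A^T A + theta I) x - A^T b vanishes at the solution.
    # Implicit function theorem: dx/dtheta = -(dR/dx)^{-1} (dR/dtheta) = -H^{-1} x.
    dx_dtheta = -np.linalg.solve(H, x)
    return x, dx_dtheta

# Finite-difference check of the implicit derivative.
rng = np.random.default_rng(0)
A, b = rng.standard_normal((5, 3)), rng.standard_normal(5)
theta, eps = 0.1, 1e-6
x, dx = solve_and_grad(A, b, theta)
x_eps, _ = solve_and_grad(A, b, theta + eps)
print(np.allclose(dx, (x_eps - x) / eps, atol=1e-4))  # True
```

The same pattern underlies differentiable cone-program solvers: solve the problem once, then obtain derivatives by linearizing a residual map at the solution rather than by differentiating each iteration of the solver.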