Many practical problems need the output of a machine learning model to satisfy a set of constraints, K. There are, however, no known guarantees that classical neural networks can exactly encode constraints while simultaneously achieving universality. We provide a quantitative constrained universal approximation theorem which guarantees that for any convex or non-convex compact set K and any continuous function f : R^n → K, there is a probabilistic transformer F whose randomized outputs all lie in K and whose expected output uniformly approximates f. Our second main result is a "deep neural version" of Berge's (1963) Maximum Theorem. The result guarantees that given an objective function L, a constraint set K, and a family of soft constraint sets, there is a probabilistic transformer F that approximately minimizes L and whose outputs belong to K; moreover, F approximately satisfies the soft constraints. Our results imply the first universal approximation theorem for classical transformers with exact convex constraint satisfaction, and a chart-free universal approximation theorem for Riemannian manifold-valued functions subject to geodesically-convex constraints.
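As a rough schematic of the first guarantee (the quantification over a compact domain C, the tolerance ε, and the choice of norm are illustrative assumptions, not details fixed by the abstract), the constrained universal approximation statement can be read as:

\[
\forall \varepsilon > 0,\ \forall\, \text{compact } C \subseteq \mathbb{R}^n,\ \exists\, \text{probabilistic transformer } F:\quad
F(x) \in K \ \text{a.s. for every } x, \quad
\sup_{x \in C} \big\| f(x) - \mathbb{E}\!\left[F(x)\right] \big\| \le \varepsilon .
\]

The point of the two clauses is that the hard constraint F(x) ∈ K holds exactly for every randomized output, while the approximation of f is achieved only by the expected output, uniformly over the compact domain.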