We present VERIFAI, a software toolkit for the formal design and analysis of systems that include artificial intelligence (AI) and machine learning (ML) components. VERIFAI particularly seeks to address challenges with applying formal methods to perception and ML components, including those based on neural networks, and to model and analyze system behavior in the presence of environment uncertainty. We describe the initial version of VERIFAI, which centers on simulation guided by formal models and specifications. Several use cases are illustrated with examples, including temporal-logic falsification, model-based systematic fuzz testing, parameter synthesis, counterexample analysis, and data set augmentation.
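As a concrete illustration of the simulation-guided workflow described above, the sketch below shows the core falsification loop in schematic Python: sample uncertain environment parameters, simulate, score the resulting trace against a temporal property, and keep violating samples as counterexamples. This is a minimal conceptual sketch, not the VERIFAI API; the toy simulator, the parameter names (initial_speed, reaction_delay), and the robustness metric are hypothetical stand-ins.

```python
# Conceptual sketch of simulation-guided falsification; NOT the VERIFAI API.
# The simulator, parameter ranges, and robustness metric are hypothetical.
import random

def simulate(params, horizon=50, dt=0.1):
    # Hypothetical stand-in for an external simulator: a vehicle approaches a
    # crossing point 30 m away and starts braking after a reaction delay.
    trace, distance, speed = [], 30.0, params["initial_speed"]
    for k in range(horizon):
        t = k * dt
        if t >= params["reaction_delay"]:
            speed = max(0.0, speed - 6.0 * dt)  # brake at 6 m/s^2
        distance -= speed * dt
        trace.append((t, distance))
    return trace

def robustness(trace, margin=1.0):
    # Quantitative semantics of "always(distance > margin)":
    # a negative value means the property was violated on this trace.
    return min(d - margin for _, d in trace)

def falsify(num_samples=1000, seed=0):
    random.seed(seed)
    counterexamples = []
    for _ in range(num_samples):
        # Sample the uncertain environment parameters from their ranges.
        params = {"initial_speed": random.uniform(5.0, 20.0),
                  "reaction_delay": random.uniform(0.0, 2.0)}
        rho = robustness(simulate(params))
        if rho < 0:
            counterexamples.append((params, rho))
    return counterexamples
```

The same loop skeleton supports the other use cases listed above: replacing random sampling with a structured sampler gives systematic fuzz testing, and retaining the parameters of near-violations supports counterexample analysis and data set augmentation.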
We present a technique for learning control Lyapunov-like functions, which are in turn used to synthesize controllers for nonlinear dynamical systems that stabilize the system, or satisfy specifications such as remaining inside a safe set, or eventually reaching a target set while remaining inside a safe set. The learning framework uses a demonstrator that implements a black-box, untrusted strategy presumed to solve the problem of interest, a learner that poses finitely many queries to the demonstrator to infer a candidate function, and a verifier that checks whether the current candidate is a valid control Lyapunov function. The overall learning framework is iterative, eliminating a set of candidates on each iteration using the counterexamples discovered by the verifier and the demonstrations over these counterexamples. We prove its convergence using ellipsoidal approximation techniques from convex optimization. We also implement this scheme using nonlinear MPC controllers to serve as demonstrators for a set of state and trajectory stabilization problems for nonlinear dynamical systems. We show how the verifier can be constructed efficiently using convex relaxations that reduce the verification problem for polynomial systems to semidefinite programming (SDP) instances. Our approach is able to synthesize relatively simple polynomial control Lyapunov functions, and in the process replaces the MPC with a guaranteed and computationally less expensive controller.
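The learner/demonstrator/verifier interaction described above can be summarized by the following schematic loop. This is a minimal sketch assuming abstract callables for the three components; learner.propose, demonstrator, and verifier are hypothetical interfaces, not the authors' implementation.

```python
# Schematic of the iterative learner / demonstrator / verifier framework
# described above (a sketch of the structure, not the authors' code).
def cegis_clf(learner, demonstrator, verifier, max_iters=100):
    """learner.propose(data) -> candidate CLF parameters, or None if none fit
       demonstrator(x)       -> control action of the black-box strategy at x
       verifier(candidate)   -> None if the candidate is valid, else a
                                counterexample state x where it fails"""
    data = []  # (state, demonstrated action) pairs constraining the candidates
    for _ in range(max_iters):
        candidate = learner.propose(data)
        if candidate is None:
            return None          # no CLF of the chosen template is consistent
        cex = verifier(candidate)
        if cex is None:
            return candidate     # verified control Lyapunov-like function
        # Query the demonstrator on the counterexample and refine the learner.
        data.append((cex, demonstrator(cex)))
    return None
```

Each iteration eliminates the candidates inconsistent with the new demonstration, which is the mechanism behind the ellipsoidal convergence argument mentioned above.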
We investigate the problem of synthesizing switching controllers for stabilizing continuous-time plants. First, we introduce a class of control Lyapunov functions (CLFs) for switched systems, along with a switching strategy that yields a closed-loop system with a guaranteed minimum dwell time in each switching mode. However, the challenge lies in automatically synthesizing appropriate CLFs. Assuming a given fixed form for the CLF with unknown coefficients, we derive quantified nonlinear constraints whose feasible solutions (if any) correspond to CLFs for the original system. However, solving quantified nonlinear constraints poses a challenge to most LMI/BMI-based relaxations. Therefore, we investigate a general approach called Counter-Example Guided Inductive Synthesis (CEGIS), which has been widely used in the emerging area of automatic program synthesis. We show how an LMI-based relaxation can be formulated within the CEGIS framework for synthesizing CLFs. We also evaluate our approach on a number of interesting benchmarks and compare its performance against our previous work, which uses off-the-shelf nonlinear constraint solvers instead of the LMI relaxation. The results show that synthesizing CLFs using LMI solvers inside a CEGIS framework can be a computationally feasible approach.
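For concreteness, the quantified constraint underlying the CLF search takes roughly the following shape in a standard formulation (a paraphrase of the setting above, not a quotation of the paper), where $V_c$ is the fixed-form template with unknown coefficients $c$, $Q$ is the set of switching modes, and $f_q$ is the vector field of mode $q$:

\[
\exists c \;\; \forall x \neq 0:\quad V_c(x) > 0 \;\wedge\; \min_{q \in Q} \nabla V_c(x) \cdot f_q(x) < 0, \qquad V_c(0) = 0.
\]

CEGIS replaces the inner universal quantifier with a finite, growing set of counterexample states, so that each iteration solves only a quantifier-free feasibility problem over $c$ (here, an LMI relaxation), while the verifier searches for a state $x$ violating the constraints for the current candidate.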
We investigate the problem of synthesizing robust controllers that ensure that the closed loop satisfies a given reach-while-stay specification, wherein all trajectories starting from an initial set I eventually reach a specified goal set G while staying inside a safe set S. Our plant model consists of a continuous-time switched system controlled by an external switching signal and plant disturbance inputs. The controller uses a state feedback law to control the switching signal in order to ensure that the desired correctness properties hold, regardless of the disturbance actions. Our approach uses a proof certificate in the form of a robust control Lyapunov-like function (RCLF) whose existence guarantees the reach-while-stay specification. A counterexample guided inductive synthesis (CEGIS) framework is used to find an RCLF by solving a ∃∀∃∀ formula iteratively using quantifier-free SMT solvers. We compare our synthesis scheme against a common approach that fixes disturbances to nominal values and synthesizes the controller while ignoring the disturbance. We demonstrate that the latter approach fails to yield a robust controller on some benchmark examples, whereas our approach succeeds. Finally, we consider the problem of translating the RCLF synthesized by our approach into a control implementation, and outline the series of offline and real-time computation steps needed. The synthesized controller is implemented and simulated using the Matlab™/Simulink™ model-based design framework and illustrated on some examples.
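One plausible reading of the ∃∀∃∀ structure mentioned above (the exact side conditions in the paper differ) is: there exist RCLF coefficients $c$ such that for every state $x$ in $S \setminus G$ some switching mode $q$ decreases $V_c$ regardless of the disturbance $d$:

\[
\exists c \;\; \forall x \in S \setminus G \;\; \exists q \in Q \;\; \forall d \in D:\quad \nabla V_c(x)\cdot f_q(x, d) \le -\epsilon,
\]

together with additional conditions relating the level sets of $V_c$ to I, S, and G so that trajectories cannot leave S before entering G. The CEGIS loop discharges this formula by alternating a quantifier-free synthesis step over candidate coefficients with a quantifier-free verification step that searches for counterexample $(x, d)$ pairs, which is why only quantifier-free SMT queries are needed.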