Morgan and McIver’s weakest pre-expectation framework is one of the most well-established methods for deductive verification of probabilistic programs. Roughly, the idea is to generalize binary state assertions to real-valued expectations, which can measure expected values of probabilistic program quantities. While loop-free programs can be analyzed by mechanically transforming expectations, verifying loops usually requires finding an invariant expectation, a difficult task.

We propose a new view of invariant expectation synthesis as a regression problem: given an input state, predict the average value of the post-expectation in the output distribution. Guided by this perspective, we develop the first data-driven invariant synthesis method for probabilistic programs. Unlike prior work on probabilistic invariant inference, our approach can learn piecewise continuous invariants without relying on template expectations. We also develop a data-driven approach to learn sub-invariants from data, which can be used to upper- or lower-bound expected values. We implement our approaches and demonstrate their effectiveness on a variety of benchmarks from the probabilistic programming literature.
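The regression view described above can be illustrated with a small sketch: run a simple probabilistic loop from many initial states, estimate the expected value of the post-expectation by sampling, and fit a model to the resulting (state, estimate) pairs. The loop, sample sizes, and linear model below are hypothetical illustrations chosen for simplicity, not the paper's actual benchmarks or learner.

```python
import random

random.seed(0)

def run_program(x, p=0.5):
    # Hypothetical probabilistic loop: keep incrementing x
    # while a biased coin (heads with probability p) lands heads.
    while random.random() < p:
        x += 1
    return x  # post-expectation of interest: the final value of x

def estimate_post(x0, p=0.5, samples=2000):
    # Monte Carlo estimate of E[x_final | initial state x0].
    return sum(run_program(x0, p) for _ in range(samples)) / samples

# Collect regression data: initial states paired with estimated post-values.
states = list(range(10))
targets = [estimate_post(s) for s in states]

# Fit a linear candidate invariant I(x0) = a*x0 + b by ordinary least squares.
n = len(states)
mx = sum(states) / n
my = sum(targets) / n
a = sum((x - mx) * (y - my) for x, y in zip(states, targets)) \
    / sum((x - mx) ** 2 for x in states)
b = my - a * mx

# For p = 0.5 the true expectation is x0 + p/(1-p) = x0 + 1,
# so the learned (a, b) should be close to (1.0, 1.0).
print(a, b)
```

In this toy setting the learned model recovers the closed-form expectation; the paper's contribution is making this learning step work for richer invariant shapes (e.g. piecewise continuous) without fixing a template in advance.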