Probabilistic programming provides a convenient lingua franca for writing succinct and rigorous descriptions of probabilistic models and inference tasks. Several probabilistic programming languages, including Anglican, Church, and Hakaru, derive their expressiveness from a powerful combination of continuous distributions, conditioning, and higher-order functions. Although very important for practical applications, these features raise fundamental challenges for program semantics and verification. Several recent works offer promising answers to these challenges, but their primary focus is on foundational semantics issues.

In this paper, we take a step further by developing a suite of logics, collectively named PPV, for proving properties of programs written in an expressive probabilistic higher-order language with continuous sampling operations and primitives for conditioning distributions. Our logics mimic the comfortable reasoning style of informal proofs using carefully selected axiomatizations of key results from probability theory. The versatility of our logics is illustrated through the formal verification of several intricate examples from statistics, probabilistic inference, and machine learning. We further show expressiveness by giving sound embeddings of existing logics. In particular, we do this in a parametric way by showing how the semantic idea of (unary and relational) ⊤⊤-lifting can be internalized in our logics. The soundness of PPV follows by interpreting programs and assertions in quasi-Borel spaces (QBS), a recently proposed variant of Borel spaces with a good structure for interpreting higher-order probabilistic programs.

    Pr_{z∼y}[(π₁(z) < .5 ∨ π₂(z) > .5) ∧ (π₁(z) > .5)] / Pr_{z∼y}[π₁(z) < .5 ∨ π₂(z) > .5]

and this can be proved by applying the [Bayes] rule above (we introduce the rule here to give some intuition, but it is also discussed in Section 6 after introducing PPV), concluding the proof. We saw different components of PPV at work here: unary rules, subtyping, and a special rule for query. All these components can be assembled in more complex examples, as we show in Section 8. A small executable sketch of this conditioning step is given at the end of this section.

Monte Carlo Approximation. As a second example, we show how to use PPV to reason about classical applications that do not use observations; concretely, we reason about the expected value and variance of distributions. We show convergence in probability of an implementation of naive Monte Carlo approximation. This algorithm takes a distribution d and a real-valued function h, and approximates the expected value of h(x), where x is sampled from d, by drawing i samples from d and computing the mean of h over them.

Consider an implementation of this Monte Carlo approximation (see the sketch below). Our goal is to prove the convergence in probability of this algorithm: the result can be made as accurate as desired by increasing the sample size (denoted by i above and n below).
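As a stand-in for the paper's implementation (which is written in PPV's own higher-order probabilistic language), here is a minimal Python sketch of the naive Monte Carlo estimator just described; the concrete choices of d (a standard normal sampler) and h (the identity) are hypothetical placeholders, not taken from the paper.

    import random

    def montecarlo(d, h, n):
        # Naive Monte Carlo: average h over n independent samples from d.
        return sum(h(d()) for _ in range(n)) / n

    # Hypothetical instantiation: d = standard normal, h = identity,
    # so the true expected value E_{x~d}[h(x)] is 0.
    d = lambda: random.gauss(0.0, 1.0)
    h = lambda x: x

    for n in (10, 1_000, 100_000):
        # Estimates concentrate around 0 as n grows.
        print(n, montecarlo(d, h, n))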
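Convergence in probability then follows by the standard Chebyshev argument; assuming h has finite variance under d, the textbook statement (stated here for intuition, not quoted from the paper) is

    ∀ε > 0.  Pr[ |(1/n) Σ_{j=1..n} h(x_j) − E_{x∼d}[h(x)]| ≥ ε ] ≤ σ²/(n ε²),

where x₁, …, xₙ are independent samples from d and σ² = Var_{x∼d}[h(x)]; the right-hand side vanishes as n → ∞.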
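Returning to the conditioning example above: the following Python sketch illustrates, under the purely hypothetical assumption that y is the uniform distribution on the unit square, how the ratio computed by the [Bayes] rule coincides with what rejection sampling computes operationally. None of these names come from PPV; this is an informal illustration, not the paper's formal development.

    import random

    def pr(sample, pred, n=100_000):
        # Estimate Pr_{z ~ sample}[pred(z)] by simple Monte Carlo.
        return sum(pred(sample()) for _ in range(n)) / n

    # Hypothetical stand-in for y: uniform distribution on [0,1]².
    def y():
        return (random.random(), random.random())

    evidence = lambda z: z[0] < .5 or z[1] > .5   # π₁(z) < .5 ∨ π₂(z) > .5
    target   = lambda z: z[0] > .5                # π₁(z) > .5

    # The [Bayes] rule expresses the conditional probability as a ratio
    # of two unconditioned probabilities over y:
    ratio = pr(y, lambda z: evidence(z) and target(z)) / pr(y, evidence)

    # Rejection sampling implements the same conditioning operationally:
    def y_given_evidence():
        while True:
            z = y()
            if evidence(z):
                return z

    # Both estimates agree (≈ 1/3 here) up to sampling error.
    print(ratio, pr(y_given_evidence, target))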