Characterizing bugs and attack vectors is, in many practical scenarios, as important as finding them.
Recently, Girol et al. introduced the concept of robust reachability, which ensures perfect reproducibility of the reported violations by distinguishing inputs that are under the control of the attacker (controlled inputs) from those that are not (uncontrolled inputs), and proposed a first automated analysis for it.
While this is a step toward distinguishing severe bugs from benign ones, it fails, for example, to describe violations that are only mostly reproducible,
i.e., whose triggering conditions hold for all uncontrolled inputs except a few corner cases.
To address this issue, we propose to leverage theory-agnostic abduction techniques to generate constraints on the uncontrolled program inputs
that ensure that a target property is robustly satisfied.
Our proposal comes with an extension of robust reachability that is generic in the type of trace property and in the technology used to verify it.
We show that our approach is complete w.r.t. its inference language, and we additionally discuss strategies for efficiently exploring the inference space.
We demonstrate the feasibility of the method and its practical ability to refine the notion of robust reachability with an implementation that uses robust reachability oracles to generate constraints on standard benchmarks from software verification and security analysis.
We illustrate the use of our implementation on a vulnerability characterization problem in the context of fault injection attacks.
Our method overcomes a major limitation of the initial proposal of robust reachability,
without complicating its definition.
From a practical point of view, this is a step toward new verification tools that are able to characterize program violations through high-level feedback.