Abstract. This paper presents four extensions to the Interprocedural Finite Distributive Subset (IFDS) algorithm that make it applicable to a wider class of analysis problems. IFDS is a dynamic programming algorithm that implements context-sensitive flow-sensitive interprocedural dataflow analysis. The first extension constructs the nodes of the supergraph on demand as the analysis requires them, eliminating the need to build a full supergraph before the analysis. The second extension provides the procedure-return flow function with additional information about the program state before the procedure was called. The third extension improves the precision with which φ instructions are modelled when analyzing a program in SSA form. The fourth extension speeds up the algorithm on domains in which some of the dataflow facts subsume each other. These extensions are often necessary when applying the IFDS algorithm to non-separable (i.e. non-bit-vector) problems. We have found them necessary for alias set analysis and multi-object typestate analysis. In this paper, we illustrate and evaluate the extensions on a simpler problem, a variation of variable type analysis.
In the verification community it is now widely accepted that, in particular for large programs, verification is often incomplete and hence bugs still arise in deployed code on the machines of end users. Yet, in most cases, verification code is removed prior to deployment because of the large performance penalties induced by current runtime verification approaches. Consequently, if errors do arise in a production environment, bugs are hard to find, since the available debugging information is often very limited.

In previous work on tracematches [1], we have shown that in many cases runtime monitoring can be made much more efficient using static analysis of the specification [2] and of the program under test [3]. Most often, the imposed runtime overhead can be reduced to under 10%. However, our evaluation also showed that there are classes of specifications and programs for which those optimizations do not perform as well, and hence large overheads remain. According to researchers in industry [5], larger industrial companies would likely be willing to accept runtime verification in deployed code if the overhead were below 5%. Hence, additional work is required to make runtime verification scale even better.

In this work, we tackle this problem by applying methods of remote sampling [4] to runtime verification. Remote sampling exploits the fact that companies that produce large pieces of software (which are usually hard to analyze) often have access to a large user base. Hence, instead of generating a program that is instrumented with runtime verification checks at all necessary places, one can generate different kinds of partial instrumentation ("probes") for each such user. A centralized server then combines the results of all runs of those users. This method is very flexible. In particular, we see the following advantages over complete runtime verification.

Less runtime overhead per user. The program each user runs is only partially instrumented, and hence the instrumentation overhead can be kept to a moderate level.

Better coverage of relevant paths. For runtime verification to be complete, perfect path coverage is necessary, which is in general nearly impossible to achieve. If instrumentation could be dynamically adapted, it could be focused on paths that are actually being executed during users' program runs.

Assigning priorities. Similarly, usage data could be used to assign priorities to bugs that are triggered by many users.

Automatic analyses. The server that ultimately receives the event data can apply arbitrarily sophisticated analyses to it and automatically attach the resulting information to a bug report. This is in contrast to existing error reporting systems, which are mostly operated manually.
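The probe-distribution scheme described above can be sketched in a few lines. The probe names, the sampling fraction, and the `CollectingServer` class below are all hypothetical illustrations, not part of the systems discussed in this work: each user's deployment enables only a subset of the available probes, and a centralized server aggregates the events reported by all runs.

```python
import random
from collections import Counter

# Hypothetical probe set; in practice each probe would correspond to a
# group of runtime verification checks woven into the program.
ALL_PROBES = ["iterator_update", "stream_close", "lock_release", "conn_reuse"]

def assign_probes(user_id, fraction=0.5):
    """Pick a partial instrumentation (probe subset) for one user.

    Seeding the RNG with the user id keeps the assignment stable
    across that user's runs.
    """
    rng = random.Random(user_id)
    k = max(1, int(len(ALL_PROBES) * fraction))
    return rng.sample(ALL_PROBES, k)

class CollectingServer:
    """Centralized server that combines partial results from all users."""

    def __init__(self):
        self.events = Counter()

    def report(self, probe, count):
        self.events[probe] += count

    def hot_probes(self):
        # Probes triggered by many users could receive a higher priority
        # when bug reports are generated.
        return self.events.most_common()

server = CollectingServer()
for user in range(100):
    for probe in assign_probes(user):
        # In a real deployment the count would come from the user's run;
        # here every enabled probe is assumed to fire once.
        server.report(probe, 1)

print(server.hot_probes())
```

Because every user carries only half of the probes, each individual run pays only part of the instrumentation cost, while the server still accumulates coverage across the whole user base.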
This paper presents a static analysis of typestate-like temporal specifications of groups of interacting objects, which are expressed using tracematches. Whereas typestate expresses a temporal specification of one object, a tracematch state may change due to operations on any of a set of related objects bound by the tracematch. The paper proposes a lattice-based operational semantics equivalent to the original tracematch semantics but better suited to static analysis. The paper defines a static analysis that computes precise local points-to sets and tracks the flow of individual objects, thereby enabling strong updates of the tracematch state. The analysis has been proved sound with respect to the semantics. A context-sensitive version of the analysis has been implemented as instances of the IFDS and IDE algorithms. The analysis was evaluated on tracematches used in earlier work and found to be very precise. Remaining imprecisions could be eliminated with more precise modeling of references from the heap and of exceptional control flow.