Abstract. Relevant context inference (RCI) is a modular technique for flow- and context-sensitive data-flow analysis of statically typed object-oriented programming languages such as C++ and Java. RCI can be used to analyze complete programs as well as incomplete programs such as libraries; this approach does not require that the entire program be memory-resident during the analysis. RCI is presented in the context of points-to analysis for a realistic subset of C++. The empirical evidence obtained from a prototype implementation demonstrates the effectiveness of RCI.
Static analysis of programs is indispensable to any software tool, environment, or system that requires compile-time information about the semantics of programs. With the emergence of languages like C and LISP, static analysis of programs with dynamic storage and recursive data structures has become a field of active research. Such analysis is difficult, and the static-analysis community has recognized the need for simplifying assumptions and approximate solutions. However, even under the common simplifying assumptions, such analyses are harder than previously recognized. Two fundamental static-analysis problems are may alias and must alias. The former is not recursive (i.e., undecidable), and the latter is not recursively enumerable (i.e., uncomputable), even when all paths in the program being analyzed are executable, for languages with if statements, loops, dynamic storage, and recursive data structures.
Aliasing occurs at some program point during execution when two or more names exist for the same location. We have isolated various programming-language mechanisms that create aliases. We have classified the complexity of the alias problem induced by each mechanism, alone and in combination, as NP-hard, complement-NP-hard, or polynomial (P). We present our problem classification, give an overview of our proof that finding interprocedural aliases in the presence of single-level pointers is in P, and present a representative proof for the NP-hard problems.
HISTORY

Trying to look back over a ten-plus-year period and remember what influenced us at the time is difficult and error-prone, so these reflections are probably incomplete. Nevertheless, it is important to our field to trace the historical influences on new ideas.

Our interest in aliasing was a natural continuation of work done at Rutgers on incremental dataflow analysis in the 1980s.¹ Much of that work focused on FORTRAN, but as time went on C became a very important language, and its analysis was essential for compiler optimization and the development of software tools. The most significant difference between C and FORTRAN from a static-analysis perspective was the ubiquity of pointer aliasing in C. Because of this ubiquity, a viable solution to the pointer may-alias problem was fundamental to any whole-program static analysis for general C programs. At the time, there was no work yielding precise, program-point-specific aliases for entire C programs, so the problem was unsolved, challenging, and necessary.

We started, perhaps surprisingly, from computability theory. It was obvious that a precise computation would not be possible. However, we thought it important to understand which aspects of the problem could be solved precisely and which had to be approximated. By "could be solved" we meant that a polynomial-time algorithm exists. We felt this approach would give us an appreciation of a complex problem that could be the basis for its viable solution. This resulted in our paper Pointer-induced Aliasing: A Problem Classification [22], in which we demonstrated a precise polynomial algorithm for the aliases of single-level pointers.

Our subsequent approach was to design an algorithm that was precise for single-level pointers but approximate in the general case. In retrospect, this decision still seems appropriate.
While the worst-case time complexities of our algorithms were polynomial, the order of the polynomial was large enough that their scalability to very large programs was questionable. However, it was clear that this worst-case behavior was not likely to be encountered in practice. Performing any sort of average-case analysis was not possible, because the concept of an average program is ill-defined. Thus, we decided it was important to validate our approach through empirical performance measurements. This approach, while not invented by us, has since our paper become a requirement for subsequent analyses. Our original theoretical slant also caused us to look into sources of imprecision in our algorithm, still an important theme in our program-analysis work.

Finally, we built upon a large body of work in dataflow analysis and abstract interpretation; some of the most influential were [2, 3, 6, 21, 28, 30, 38].

¹ http://prolangs.rutgers.edu

SINCE PLDI 1992

In our group (PROLANGS). Our PLDI 1992 paper [23] explored the concepts of flow and context sensitivity, two important dimensions in static-analysis algorithm design. Intuitively, flow sensitivity refers to w...