2019
DOI: 10.1145/3338112
Scaling static analyses at Facebook

Abstract (key insights):
- Advanced static analysis techniques performing deep reasoning about source code can scale to large industrial codebases, for example, with 100-million LOC.
- Static analyses should strike a balance between missed bugs (false negatives) and un-actioned reports (false positives).
- A "diff time" deployment, where issues are given to developers promptly as part of code review, is important for catching bugs early and getting high fix rates (see the illustrative sketch below).
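
The third key insight describes a workflow rather than an algorithm, so here is a minimal, hypothetical sketch of the "diff time" idea: run a checker only over the files touched by the change under review and surface the findings where code review happens. This illustrates the deployment model only, not Facebook's Infer or Zoncolan tooling; the base branch name, the use of git, and the toy "bare except" check are assumptions made for the example.

```python
#!/usr/bin/env python3
"""Minimal sketch of a "diff time" static-analysis hook (illustrative only)."""
import ast
import os
import subprocess
import sys

BASE = "origin/main"  # assumed name of the base branch of the diff under review


def changed_python_files(base: str = BASE) -> list[str]:
    """Return the Python files modified relative to the base branch."""
    out = subprocess.run(
        ["git", "diff", "--name-only", f"{base}...HEAD"],
        capture_output=True, text=True, check=True,
    ).stdout
    return [
        path for path in out.splitlines()
        if path.endswith(".py") and os.path.exists(path)  # skip deleted files
    ]


def find_bare_excepts(path: str) -> list[tuple[int, str]]:
    """Toy check standing in for a real analysis: flag bare `except:` clauses."""
    with open(path, encoding="utf-8") as src:
        tree = ast.parse(src.read(), filename=path)
    return [
        (node.lineno, "bare `except:` silently swallows all errors")
        for node in ast.walk(tree)
        if isinstance(node, ast.ExceptHandler) and node.type is None
    ]


def main() -> int:
    findings = [
        (path, lineno, msg)
        for path in changed_python_files()
        for lineno, msg in find_bare_excepts(path)
    ]
    for path, lineno, msg in findings:
        # In a real diff-time deployment these would appear as inline review
        # comments; printing keeps the sketch self-contained.
        print(f"{path}:{lineno}: {msg}")
    return 1 if findings else 0


if __name__ == "__main__":
    sys.exit(main())
```

Run as a pre-review or CI step on the branch under review; a non-zero exit signals new findings. The point of the sketch is the scoping: only changed code is analyzed, so results reach the author while they still have the context (and the incentive) to act on them.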

Citation types: 3 supporting, 98 mentioning, 0 contrasting

Cited by 157 publications (101 citation statements). References 19 publications.

“…He characterizes the use of other, more heavyweight, formal techniques (e.g., bounded model checking) in similar continuous settings as an open scientific challenge. O'Hearn also highlights that reporting static analysis results during code review is crucial to achieving a higher fix rate [14], which is reaffirmed by our work (cf. Sec.…”
Section: Related Work (supporting)
confidence: 85%
“…Over the last 16 years, automated reasoning techniques (e.g., model checking) have evolved significantly [10,15]. As a consequence, several software verification frameworks have emerged in the literature [5]; with many applications in the industrial setting [7,12,14,27]. Some papers [8,20,23] describe how human factors impact the adoption and integration of formal verification techniques into well-established software engineering process.…”
Section: Related Work (mentioning)
confidence: 99%
“…This could be a result of a flattening trend of ASAT warning removal after some time which means the LLoC then becomes the dominating factor of the warning density equation. This can be seen as evidence of industry best practices like utilizing static analysis tools only on new code as reported by Google (Sadowski et al 2018) and Facebook (Distefano et al 2019). Further evidence of this best practice is shown in the results of short and long term impact of PMD.…”
Section: Discussion (mentioning)
confidence: 68%
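
For context on the "warning density equation" referenced in the excerpt above: warning density is conventionally computed as the number of open ASAT warnings per (logical) line of code. The cited study's exact formula is not quoted here, so the definition below is an assumption; under it, a plateauing warning count combined with a growing codebase means LLoC dominates and the density trends downward.

```latex
% Conventional definition (assumed): W(t) = open warnings, LLoC(t) = logical lines of code
\mathrm{WD}(t) = \frac{W(t)}{\mathrm{LLoC}(t)}
% If W(t) flattens at some level W^{\ast} while \mathrm{LLoC}(t) keeps growing,
% then \mathrm{WD}(t) \approx W^{\ast} / \mathrm{LLoC}(t), which decreases over time.
```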
“…Published industry reports share some findings regarding static analysis infrastructure and warning removal. Google (Sadowski et al 2018) and Facebook (Distefano et al 2019) both found that just presenting developers with a large list of findings rarely motivates them to fix all reported warnings. However, reporting the warnings as early as possible, or at latest at code review time, improves the adoption and subsequent removal of static analysis warnings.…”
Section: Introduction (mentioning)
confidence: 99%
“…We introduce HackPPL, a probabilistic programming language that aims to bridge the gap between these paradigms with first-class integrations with language and developer productivity tools. HackPPL is built within the Hack programming language [33], a dominant web development language across large technology firms with over 100 million lines of production code [10]. Although Hack originated as an optionally-typed dialect of PHP, the Hack development team is discontinuing PHP support in order to open opportunities for sweeping language advancements [32].…”
Section: Introduction (mentioning)
confidence: 99%