2017 IEEE/ACM 4th International Workshop on Software Engineering Research and Industrial Practice (SER&IP)
DOI: 10.1109/SER-IP.2017.20
Identifying and Documenting False Positive Patterns Generated by Static Code Analysis Tools


Cited by 22 publications (16 citation statements). References 18 publications.
“…Source code is commonly parsed and transformed into a graph representation to perform pattern recognition [49], [50], [55], [57], [115], [116]. It is common to use source code as the input for quality assurance tools [55], [116]. Besides, it is a common analysis used for code clone detection [49], [54], [117].…”
Section: ) Source Code Analysismentioning
confidence: 99%
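The graph representation mentioned above can be sketched with a minimal example. The sketch below is illustrative only: it assumes Python's stdlib `ast` module as the parser and reduces the "graph" to a list of parent/child node-type edges, whereas the surveyed tools typically target other languages and richer structures such as control-flow or program-dependence graphs.

```python
import ast

def ast_edges(source: str):
    """Parse `source` and return its AST as a list of
    (parent_type, child_type) edges -- a crude graph over which
    structural patterns can be matched."""
    tree = ast.parse(source)
    edges = []
    for parent in ast.walk(tree):
        for child in ast.iter_child_nodes(parent):
            edges.append((type(parent).__name__, type(child).__name__))
    return edges

edges = ast_edges("x = 1\nif x:\n    print(x)")
# edges include ('Module', 'Assign'), ('Module', 'If'), ('Expr', 'Call')
```

A pattern recognizer or clone detector would then match subgraphs (or compare graph fingerprints) over such edge sets rather than over raw text.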
“…Ruthruff et al (2008) proposed a logistic regression model based on 33 features extracted from the alarms themselves to predict actionable alarms found by FindBugs, and a screening methodology was used to quickly discard features with low predictive power in order to build cost-effective predictive models. Reynolds et al (2017) used a set of descriptive attributes to standardize the patterns of false positives. Several studies (Brun and Ernst 2004; Yi et al 2007; Heckman and Williams 2009; Liang et al 2010; Yuksel and Sözer 2013; Hanam et al 2014; Yoon et al 2014; Flynn et al 2018) have utilized machine learning classification models to abstract the difference between actionable and unactionable alarms in order to identify defects automatically.…”
Section: Related Workmentioning
confidence: 99%
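The logistic-regression approach described above can be sketched in a few lines. This is not Ruthruff et al.'s actual model: their 33 alarm features and training data are not reproduced here; the two features (file churn, alarm priority) and the toy labels below are hypothetical, and the fit uses plain gradient descent rather than their screening methodology.

```python
import math

def train_logreg(X, y, lr=0.1, epochs=2000):
    """Fit logistic-regression weights by gradient descent.
    The last entry of the returned weight vector is the bias term."""
    n = len(X[0])
    w = [0.0] * (n + 1)
    for _ in range(epochs):
        for xi, yi in zip(X, y):
            z = sum(wj * xj for wj, xj in zip(w, xi)) + w[-1]
            p = 1.0 / (1.0 + math.exp(-z))   # predicted P(actionable)
            err = p - yi
            for j in range(n):
                w[j] -= lr * err * xi[j]
            w[-1] -= lr * err
    return w

def predict(w, xi):
    """Return 1 if the alarm is classified as actionable, else 0."""
    z = sum(wj * xj for wj, xj in zip(w[:-1], xi)) + w[-1]
    return 1 if 1.0 / (1.0 + math.exp(-z)) >= 0.5 else 0

# Toy alarms: [file_churn, priority]; label 1 = actionable.
X = [[0.9, 0.8], [0.8, 0.9], [0.1, 0.2], [0.2, 0.1]]
y = [1, 1, 0, 0]
w = train_logreg(X, y)
```

In practice the classifier would be trained on historical triage decisions and the screening step would prune features whose coefficients contribute little predictive power.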
“…On the other hand, we evaluate which metrics are highly correlated with each type of warning generated by the SCA tool, while in their work the authors evaluated which source code structures force the SCA tools to generate false positive warnings. Reynolds et al [31] identified and documented 14 different kinds of false positive patterns by running three SCA tools against the C/C++ Juliet test suite. The authors then reduced the source code manually in order to remove the unrelated instructions.…”
Section: Related Workmentioning
confidence: 99%
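The reduction step described above, stripping instructions until only the pattern that triggers the false positive remains, can be sketched as a greedy line-removal loop. This is a simplification of the authors' manual process: the `still_warns` predicate below is hypothetical and merely stands in for "the SCA tool still emits the warning on this subset"; a real setup would re-run the tool on each candidate.

```python
def reduce_lines(lines, still_warns):
    """Greedily drop lines while `still_warns(subset)` stays True,
    returning a locally minimal subset that still triggers the warning."""
    reduced = list(lines)
    changed = True
    while changed:
        changed = False
        for i in range(len(reduced)):
            candidate = reduced[:i] + reduced[i + 1:]
            if still_warns(candidate):
                reduced = candidate
                changed = True
                break
    return reduced

# Toy predicate: the "warning" fires whenever both the allocation and
# the dereference are present, so the unrelated lines get stripped.
code = ["int *p = malloc(4);", "int x = 0;", "x++;", "*p = x;"]
minimal = reduce_lines(
    code,
    lambda ls: "int *p = malloc(4);" in ls and "*p = x;" in ls,
)
# minimal == ["int *p = malloc(4);", "*p = x;"]
```

More sophisticated variants of this idea (e.g. delta debugging) remove chunks of lines at once to cut down the number of tool re-runs.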