2020 IEEE European Symposium on Security and Privacy (EuroS&P)
DOI: 10.1109/eurosp48549.2020.00018
Evaluating Explanation Methods for Deep Learning in Security

Cited by 78 publications (127 citation statements) · References 23 publications
Citation statements: 2 supporting, 125 mentioning, 0 contrasting · Ordered by relevance
“…An essential precondition is having fine-grained analysis results, e.g., highlighting problematic code tokens (Russell et al. [S2]), rather than declaring an entire code sample vulnerable. For example, layer-wise relevance propagation, an explanation technique that propagates the prediction of a neural network layer-wise back to its inputs, could be used to report which tokens influenced the model's decision (Warnecke et al. 2020). Such methods would allow highlighting the most problematic code locations to a user, would help guide further inspection, and should be explored for vulnerability analysis (Zou et al. 2019 [S6], [S12]).…”
Section: Explainability of Analysis Results (mentioning)
confidence: 99%
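A minimal sketch of the idea behind layer-wise relevance propagation (the LRP-epsilon rule), assuming only NumPy and a small fully connected ReLU network; the weights `Ws`, biases `bs`, and the token-embedding input are hypothetical stand-ins, not the models or implementation from the cited works. It shows how an output score can be redistributed backward onto the input features, yielding one relevance score per token feature:

```python
import numpy as np

def lrp_epsilon(weights, biases, x, eps=1e-6):
    """Redistribute the network output onto the input features (LRP-epsilon)."""
    # Forward pass: ReLU on hidden layers, linear output score; store activations.
    activations = [x]
    for W, b in zip(weights[:-1], biases[:-1]):
        x = np.maximum(0.0, W @ x + b)
        activations.append(x)
    activations.append(weights[-1] @ x + biases[-1])

    # Backward pass: propagate relevance layer by layer back to the input.
    R = activations[-1]
    for W, b, a in zip(reversed(weights), reversed(biases),
                       reversed(activations[:-1])):
        z = W @ a + b
        z += eps * np.where(z >= 0, 1.0, -1.0)  # epsilon stabilizer avoids /0
        s = R / z                               # relevance per unit pre-activation
        R = a * (W.T @ s)                       # redistribute onto the lower layer
    return R  # one relevance score per input feature (e.g., per token embedding dim)

# Hypothetical two-layer network over a 16-dimensional token embedding.
rng = np.random.default_rng(0)
Ws = [rng.normal(size=(8, 16)), rng.normal(size=(1, 8))]
bs = [np.zeros(8), np.zeros(1)]
print(lrp_epsilon(Ws, bs, rng.normal(size=16)))
```

In a token-highlighting setting, the per-dimension scores would be summed over each token's embedding to rank tokens by their influence on the prediction.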
“…Fan et al. [23] assess the quality of five interpretation techniques in Android malware analysis applications, evaluating the stability, robustness, and effectiveness of the interpretations. Warnecke et al. [81] study similar dimensions of interpretation in the security domain. Both find that different interpretation techniques can generate different interpretation results for the same prediction.…”
Section: Reproducibility of Machine Learning (mentioning)
confidence: 94%
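As a small illustration of that disagreement, the following sketch measures how much two explanation methods agree by the overlap of their top-k most relevant features, a common proxy for the stability criteria such studies evaluate; the relevance vectors here are random stand-ins, not outputs of the actual methods compared by Fan et al. or Warnecke et al.:

```python
import numpy as np

def topk_agreement(r1, r2, k=10):
    """Fraction of the k most relevant features shared by two explanations."""
    top1 = set(np.argsort(-np.abs(r1))[:k])
    top2 = set(np.argsort(-np.abs(r2))[:k])
    return len(top1 & top2) / k  # 1.0 = identical top-k feature sets

# Hypothetical relevance vectors from two methods over the same 100 features.
rng = np.random.default_rng(0)
print(topk_agreement(rng.normal(size=100), rng.normal(size=100)))
```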
“…We use a linear Support Vector Machine (SVM) with bag-of-words features based on n-grams as a baseline for VulDeePecker (see Appendix C for details). To see what VulDeePecker has learned, we follow the work of Warnecke et al. [129] and use the Layer-wise Relevance Propagation (LRP) method [12] to explain the predictions and assign each token a relevance score that indicates its importance for the classification. Results.…”
Section: Vulnerability Discovery (mentioning)
confidence: 99%
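A minimal sketch of such a bag-of-words n-gram SVM baseline, assuming scikit-learn; the token sequences and labels below are hypothetical stand-ins, not the VulDeePecker code gadgets or the exact configuration used in the cited work:

```python
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.pipeline import make_pipeline
from sklearn.svm import LinearSVC

# Hypothetical whitespace-tokenized code snippets with binary labels
# (1 = flagged vulnerable, 0 = benign).
samples = ["strcpy ( buf , src ) ;", "memcpy ( dst , src , len ) ;"]
labels = [1, 0]

model = make_pipeline(
    CountVectorizer(analyzer="word", ngram_range=(1, 3)),  # bag of 1- to 3-grams
    LinearSVC(),                                           # linear SVM classifier
)
model.fit(samples, labels)
print(model.predict(["strcpy ( buf , src ) ;"]))
```

Because the model is linear over interpretable n-gram counts, its per-feature weights serve as a natural point of comparison for the token relevances that LRP extracts from the neural model.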