2020 IEEE Intl Conf on Parallel & Distributed Processing with Applications, Big Data & Cloud Computing, Sustainable Computing 2020
DOI: 10.1109/ispa-bdcloud-socialcom-sustaincom51426.2020.00068
AVDHRAM: Automated Vulnerability Detection based on Hierarchical Representation and Attention Mechanism

Cited by 5 publications (2 citation statements)
References 12 publications
“…This method achieves a lower false-negative rate than other vulnerability detection methods; when VulDeePecker was applied to three software products (Xen, Seamonkey, and Libav), it detected four previously unreported vulnerabilities. An et al. proposed an automatic vulnerability detection framework (HAN) [21] based on hierarchical representation and an attention mechanism, in which the framework represents source code with slices at five different granularities, chosen according to the semantics of the source code, instead of functions, files, or…”
Section: Pattern-based
confidence: 99%
“…To optimally represent the characteristics of a node in the entire code attribute graph, it is necessary to map the information of its neighbor nodes onto the node itself. Since each neighbor node contributes with a different weight, we use a Graph Attention Network with a multi-head attention mechanism; the aggregating effect of the attention mechanism on node information has been demonstrated in many studies [27], [34], [45], [46], [47], [48]. In addition, the attention mechanism can cope with latent noise features.…”
Section: Graph Feature Extraction
confidence: 99%
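The neighbor-aggregation step described in this excerpt can be sketched roughly as follows. This is a minimal NumPy illustration of a single multi-head graph-attention layer in the standard GAT formulation (attention logits from a learned vector over concatenated projected features, softmax-normalized over each node's neighbors, then a weighted sum); all names, tensor shapes, and the LeakyReLU slope of 0.2 are illustrative assumptions, not the cited paper's actual implementation.

```python
import numpy as np

def softmax(x):
    # Numerically stable softmax over a 1-D array.
    e = np.exp(x - x.max())
    return e / e.sum()

def gat_layer(H, adj, W_heads, a_heads):
    """One multi-head graph-attention layer (sketch).

    H       : (N, F)  node feature matrix
    adj     : (N, N)  adjacency matrix, nonzero = edge (self-loops included)
    W_heads : list of (F, F') per-head projection matrices
    a_heads : list of (2*F',) per-head attention vectors
    Returns the concatenation of all head outputs, shape (N, F' * num_heads).
    """
    outputs = []
    for W, a in zip(W_heads, a_heads):
        Z = H @ W                              # project node features
        out = np.zeros_like(Z)
        for i in range(Z.shape[0]):
            nbrs = np.nonzero(adj[i])[0]
            # Unnormalized logits: e_ij = LeakyReLU(a^T [z_i || z_j])
            logits = np.array([np.concatenate([Z[i], Z[j]]) @ a for j in nbrs])
            logits = np.where(logits > 0, logits, 0.2 * logits)  # LeakyReLU
            alpha = softmax(logits)            # attention weights over neighbors
            out[i] = alpha @ Z[nbrs]           # weighted aggregation of neighbors
        outputs.append(out)
    return np.concatenate(outputs, axis=1)
```

A useful sanity check on this sketch: because the attention weights of each node sum to 1, a graph whose nodes all carry identical features must come out of the layer unchanged up to the linear projection (`out == H @ W` for a single head).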