2022
DOI: 10.1007/s10664-022-10216-4
SPVF: security property assisted vulnerability fixing via attention-based models

Cited by 13 publications (4 citation statements)
References 32 publications
“…Token attention values serve as a feature type in the transformer-based model, and are helpful in identifying key tokens or contents contributing to natural language processing tasks [79]. The attention mechanism can adaptively learn the importance of even distant parts of the input code sequence for a better understanding of the code's contextual information and effectively fulfil software vulnerability detection tasks [54][55][56][57][58]. Some researchers found that the attention values can serve as a proxy for the importance of tokens [53].…”
Section: Taxonomy of Related Work
confidence: 99%
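The quoted passage notes that attention values can act as a proxy for token importance. A minimal, self-contained sketch of that idea (using plain NumPy scaled dot-product self-attention over random embeddings, not any cited model) is:

```python
import numpy as np

def softmax(x, axis=-1):
    e = np.exp(x - x.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

def token_importance(embeddings):
    """Scaled dot-product self-attention over a token sequence; the mean
    attention each token *receives* serves as a crude importance proxy."""
    d = embeddings.shape[-1]
    scores = embeddings @ embeddings.T / np.sqrt(d)  # (n, n) pairwise scores
    weights = softmax(scores, axis=-1)               # each row sums to 1
    return weights.mean(axis=0)                      # attention received per token

# Toy example: tokens from a code line (embeddings are random stand-ins).
rng = np.random.default_rng(0)
tokens = ["strcpy", "(", "buf", ",", "input", ")"]
emb = rng.normal(size=(len(tokens), 8))
imp = token_importance(emb)
ranked = [tokens[i] for i in np.argsort(imp)[::-1]]  # most- to least-attended
```

In a real transformer the query/key projections are learned, so high-attention tokens correlate with task-relevant code constructs rather than random noise as here.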
“…Zhou et al [185] propose a novel approach, SPVF, for automatically fixing vulnerabilities based on the attention-mechanism model. SPVF first extracts the security properties from descriptions of the vulnerabilities (e.g., CWE category).…”
Section: Domain Repair
confidence: 99%
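The step described above, extracting a security property such as a CWE category from a free-text vulnerability description, can be sketched as follows. This is a hypothetical illustration, not SPVF's actual extraction code; the pattern and helper name are assumptions.

```python
import re
from typing import Optional

# Hypothetical helper: pull the first CWE identifier out of a free-text
# vulnerability description and normalise it to the form "CWE-<id>".
CWE_PATTERN = re.compile(r"CWE-(\d+)", re.IGNORECASE)

def extract_cwe(description: str) -> Optional[str]:
    """Return the first CWE category found in the description, or None."""
    m = CWE_PATTERN.search(description)
    return f"CWE-{m.group(1)}" if m else None

desc = "Buffer copy without checking size of input (cwe-120) in parse_header()."
prop = extract_cwe(desc)  # normalised security property, e.g. "CWE-120"
```

In practice such a property would condition the downstream fixing model; here it simply demonstrates the description-to-property mapping the citation mentions.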
“…While there has been considerable attention given to semantic bugs (Liu et al, 2021; Yuan et al, 2022; Yi Li, 2022; Nan Jiang, 2021; Lutellier et al, 2020) and vulnerability issues (Fu et al, 2022; Chen et al, 2021; Chi et al, 2022; Zhou et al, 2022) in the field of program repair, there is an increasing interest in learning-based approaches for fixing programming assignments submitted by students (Gupta et al, 2017; Ahmed et al, 2018a; Yasunaga and Liang, 2020; Orvalho et al, 2022; Wang et al, 2018; Yi et al, 2017; Gulwani et al, 2018). Unlike experienced software engineers, students are more prone to making syntactic errors in their code.…”
Section: Automated Program Repair
confidence: 99%