2022 IEEE International Conference on Software Analysis, Evolution and Reengineering (SANER)
DOI: 10.1109/saner53432.2022.00013

Source Code Summarization with Structural Relative Position Guided Transformer

Cited by 20 publications (13 citation statements)
References 38 publications

“…SCRIPT [11] introduces the structural relative positions between nodes of the AST to better capture the structural relative dependencies.…”
Section: Experiments and Results
Citation type: mentioning (confidence: 99%)
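The excerpt above refers to structural relative positions between AST nodes. As a rough illustration only, and not the authors' implementation, the sketch below approximates one plausible notion of structural relative position: the shortest-path distance between AST nodes when the tree is treated as an undirected graph. It uses Python's built-in `ast` module; the function names and the distance definition are assumptions made for this example.

```python
# A minimal sketch of one plausible "structural relative position":
# the shortest-path distance between AST nodes over parent-child edges.
# Illustrative only; SCRIPT's exact formulation may differ.
import ast
from collections import deque

def ast_nodes_and_edges(source: str):
    """Parse source code and return AST nodes plus parent-child edges."""
    tree = ast.parse(source)
    nodes, edges = [], []
    for parent in ast.walk(tree):
        nodes.append(parent)
        for child in ast.iter_child_nodes(parent):
            edges.append((parent, child))
    return nodes, edges

def structural_relative_distance(nodes, edges):
    """All-pairs shortest-path distance over the undirected AST, via BFS."""
    index = {id(n): i for i, n in enumerate(nodes)}
    adj = {i: [] for i in range(len(nodes))}
    for p, c in edges:
        pi, ci = index[id(p)], index[id(c)]
        adj[pi].append(ci)
        adj[ci].append(pi)
    dist = [[None] * len(nodes) for _ in nodes]
    for start in range(len(nodes)):
        dist[start][start] = 0
        queue = deque([start])
        while queue:
            u = queue.popleft()
            for v in adj[u]:
                if dist[start][v] is None:
                    dist[start][v] = dist[start][u] + 1
                    queue.append(v)
    return dist

nodes, edges = ast_nodes_and_edges("def add(a, b):\n    return a + b\n")
dist = structural_relative_distance(nodes, edges)
print(len(nodes), "AST nodes; distance(root, last node) =", dist[0][-1])
```

Such pairwise tree distances (or related quantities like depth differences) can then be fed to an attention mechanism as relative-position features rather than absolute token indices.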
“…Ahmad et al. [8] first proposed a transformer-based method for the code summarization task, which achieved excellent performance and led the code summarization area into the transformer-based model stage. Because of the popularity and performance of transformers, almost all recent works [9, 10, 11, 12] are built on the transformer architecture and achieve high scores on each evaluation metric. However, considering only sequence information without the structure of code leads to an incomplete representation of code.…”
Section: Introduction
Citation type: mentioning (confidence: 99%)
“…LLMs (e.g., GPT-3 [26], CodeX [14], ChatGPT [42]), have shown significant improvements in software engineering tasks, such as requirements classification [43], [44], [45], FQN inference [46], [47], and code summarization [48], [49], [50]. They can capture code's structural knowledge (e.g., AST [51], [52]) and semantic knowledge (e.g., code weakness [53], [54] and API relation [23]).…”
Section: Related Work
Citation type: mentioning (confidence: 99%)
“…It is well known that incorporating structural inductive priors - which is usually implemented via various additive relative masking mechanisms in regular attention architectures - is difficult for Performers. We refer to these methods as Relative Positional Encodings (or RPEs) (Shaw et al., 2018; Raffel et al., 2020; Wu et al., 2021; Gong et al., 2022; Luo et al., 2022b). RPEs play a critical role in improving the performance of Transformers in long-range modeling, e.g.…”
Section: Introduction
Citation type: mentioning (confidence: 99%)
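The last excerpt describes relative positional encodings (RPEs) realized as additive biases inside regular attention. The sketch below is a minimal, hedged illustration of that general idea in NumPy, assuming a clipped relative-offset bias table in the spirit of Shaw et al. (2018); the shapes, the clipping scheme, and all names are illustrative assumptions, not the code of any cited paper.

```python
# A minimal sketch of additive relative positional bias in self-attention.
# Illustrative assumption: one learned bias per clipped relative offset.
import numpy as np

def softmax(x, axis=-1):
    x = x - x.max(axis=axis, keepdims=True)
    e = np.exp(x)
    return e / e.sum(axis=axis, keepdims=True)

def attention_with_relative_bias(q, k, v, rel_bias_table, max_rel=4):
    """q, k, v: (seq_len, d). rel_bias_table: (2 * max_rel + 1,) biases."""
    seq_len, d = q.shape
    scores = q @ k.T / np.sqrt(d)                      # (seq_len, seq_len)
    # Relative offsets j - i, clipped to [-max_rel, max_rel] and shifted
    # so they index into the bias table.
    offsets = np.arange(seq_len)[None, :] - np.arange(seq_len)[:, None]
    offsets = np.clip(offsets, -max_rel, max_rel) + max_rel
    scores = scores + rel_bias_table[offsets]          # additive relative bias
    return softmax(scores, axis=-1) @ v

rng = np.random.default_rng(0)
seq_len, d = 6, 8
q, k, v = (rng.standard_normal((seq_len, d)) for _ in range(3))
rel_bias_table = rng.standard_normal(2 * 4 + 1) * 0.1
out = attention_with_relative_bias(q, k, v, rel_bias_table)
print(out.shape)  # (6, 8)
```

Because the bias is added to the attention logits before the softmax, it modulates how strongly each query attends to keys at a given relative offset, which is the property the excerpt notes is hard to reproduce in kernelized attention variants such as Performers.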