2021 IEEE/ACM 29th International Conference on Program Comprehension (ICPC)
DOI: 10.1109/icpc52881.2021.00026
Improving Code Summarization with Block-wise Abstract Syntax Tree Splitting

Cited by 47 publications (18 citation statements)
References 38 publications
“…An important but underexplored direction is multi-lingual commonsense reasoning and generation. Studies have shown that the performance of cross-lingual PLMs is poor when evaluated on non-English commonsense reasoning benchmarks (Lin et al 2021b). These models perform poorly when evaluated on a test set that was translated to English, leading to staggering transfer reasoning capabilities to other languages and restricting the research scope to only certain languages (Ponti et al 2020).…”
Section: Discussion
confidence: 99%
“…Jiang et al [23] represent ASTs as the set of paths and then introduce node position embedding to obtain the position of the node in the AST. Besides, neural networks that take trees as input (e.g., tree-LSTMs [33,53,54], RvNNs [61] and GNNs/GCNs [9,29,59]) utilize an AST directly instead of flattening it.…”
Section: Structural Information of Source Code
confidence: 99%
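The path-based AST representation mentioned in the excerpt above can be sketched minimally. The function name and the sibling-position encoding below are illustrative assumptions, not the cited papers' exact formulation:

```python
import ast

def ast_paths(source):
    """Collect root-to-leaf node-type paths from a Python AST.

    A hypothetical sketch of a path-based AST representation: each
    leaf is reached through a sequence of node types, and each node
    records its position among its siblings (the ``#pos`` suffix),
    which a position embedding could consume.
    """
    tree = ast.parse(source)
    paths = []

    def walk(node, prefix):
        children = list(ast.iter_child_nodes(node))
        label = type(node).__name__
        if not children:
            paths.append(prefix + [label])
            return
        for pos, child in enumerate(children):
            # (label, pos) pairs encode each node's sibling position.
            walk(child, prefix + [f"{label}#{pos}"])

    walk(tree, [])
    return paths

paths = ast_paths("def f(x):\n    return x + 1")
```

Each returned path traces one leaf (e.g. an argument or a constant) back to the module root, so the tree is flattened into a set of sequences without discarding where each node sits.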
“…Although a flow graph is more likely to be seen as a graph, there are few works that treat flow graphs as trees. For example, BAST [103] splits the code of a method according to the blocks in the dominator tree of CFG and generates the corresponding AST for the split code. The split ASTs' representations are used in the pre-trained stage by predicting the next split AST in the dominator tree.…”
Section: Flow-graph-based Structures
confidence: 99%
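The splitting described above works on the dominator tree of the CFG. As a rough illustration only, the sketch below approximates the idea by splitting a method at its top-level statements and parsing each split into its own AST; the names and the splitting granularity are assumptions, not the paper's implementation:

```python
import ast
import textwrap

def blockwise_asts(method_source):
    """Split a method into per-statement blocks and parse each block
    into its own AST -- a simplified stand-in for splitting along the
    CFG dominator tree (which this sketch does not compute)."""
    func = ast.parse(method_source).body[0]
    split_asts = []
    for stmt in func.body:
        snippet = ast.get_source_segment(method_source, stmt)
        # Wrap each snippet in a dummy function so statements like
        # `return` remain parseable on their own.
        wrapped = "def _block():\n" + textwrap.indent(snippet, "    ")
        split_asts.append(ast.parse(wrapped))
    return split_asts

blocks = blockwise_asts(
    "def f(x):\n"
    "    y = x * 2\n"
    "    if y > 10:\n"
    "        y -= 1\n"
    "    return y\n"
)
# Three splits: the assignment, the if-block, and the return.
```

Each split AST is small enough to encode on its own, which is the point of block-wise splitting: long methods no longer produce one very deep tree.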
“…Pre-embeds the interprocedural value-flow graph, considers reachability as a matrix-multiplication problem, and uses it to approximate the high-order proximity embedding.
BASTS [103] — Tree-LSTM: uses Tree-LSTM and Transformer architectures to combine the representations of split ASTs and source code.
CoCoSum [164] — Transformer, Multi-Relational GNN…”
Section: Clone Detection
confidence: 99%
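The Tree-LSTM mentioned for BASTS composes each AST node from its children's states. A minimal NumPy sketch of a child-sum Tree-LSTM cell (in the style of Tai et al.; the parameter initialisation and hidden size here are illustrative, with no training):

```python
import numpy as np

rng = np.random.default_rng(0)
D = 8  # hidden size (illustrative)

# Randomly initialised parameters for one child-sum Tree-LSTM cell:
# input (i), forget (f), output (o), and update (u) gates.
W = {g: rng.normal(scale=0.1, size=(D, D)) for g in "ifou"}
U = {g: rng.normal(scale=0.1, size=(D, D)) for g in "ifou"}
b = {g: np.zeros(D) for g in "ifou"}

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def tree_lstm_cell(x, children):
    """Compose one tree node from its children's (h, c) states."""
    h_sum = sum((h for h, _ in children), np.zeros(D))
    i = sigmoid(W["i"] @ x + U["i"] @ h_sum + b["i"])
    o = sigmoid(W["o"] @ x + U["o"] @ h_sum + b["o"])
    u = np.tanh(W["u"] @ x + U["u"] @ h_sum + b["u"])
    c = i * u
    # One forget gate per child, so each subtree's memory is
    # kept or discarded independently.
    for h_k, c_k in children:
        f_k = sigmoid(W["f"] @ x + U["f"] @ h_k + b["f"])
        c = c + f_k * c_k
    h = o * np.tanh(c)
    return h, c

# Encode a tiny two-leaf tree bottom-up: leaves have no children.
leaf1 = tree_lstm_cell(rng.normal(size=D), [])
leaf2 = tree_lstm_cell(rng.normal(size=D), [])
root_h, root_c = tree_lstm_cell(rng.normal(size=D), [leaf1, leaf2])
```

The root's hidden state `root_h` is the vector a downstream Transformer could consume as one split-AST representation; how BASTS combines these with source-code tokens is beyond this sketch.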