2023
DOI: 10.1007/s10664-023-10384-x
EnCoSum: enhanced semantic features for multi-scale multi-modal source code summarization

Yuexiu Gao, Hongyu Zhang, Chen Lyu
Cited by 3 publications (1 citation statement)
References 50 publications
“…• CodeT5 [16] is a pre-trained encoder-decoder model based on T5 [32] for programming and natural languages, directly fine-tuned on summarization datasets. • CodeBERT [10] is a pre-trained encoder-only model for programming and natural languages based on RoBERTa [33].…”
Section: B. Baselines
confidence: 99%
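The baselines named in the citation statement are publicly released pre-trained models. As a hedged illustration only, and not a description of EnCoSum or of the citing paper's experimental setup, the sketch below shows how a CodeT5 checkpoint could be used for code summarization with the Hugging Face Transformers library; the checkpoint name Salesforce/codet5-base-multi-sum and the example snippet are assumptions, not taken from the paper.

# Minimal sketch (assumed setup): summarizing a code snippet with a
# CodeT5 sequence-to-sequence checkpoint via Hugging Face Transformers.
from transformers import AutoTokenizer, AutoModelForSeq2SeqLM

checkpoint = "Salesforce/codet5-base-multi-sum"  # assumed checkpoint name
tokenizer = AutoTokenizer.from_pretrained(checkpoint)
model = AutoModelForSeq2SeqLM.from_pretrained(checkpoint)

code = "def add(a, b):\n    return a + b"
inputs = tokenizer(code, return_tensors="pt", truncation=True)
summary_ids = model.generate(**inputs, max_length=32, num_beams=4)
print(tokenizer.decode(summary_ids[0], skip_special_tokens=True))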