Proceedings of the Web Conference 2020
DOI: 10.1145/3366423.3380295

Leveraging Code Generation to Improve Code Retrieval and Summarization via Dual Learning

Abstract: Code summarization generates a brief natural language description given a source code snippet, while code retrieval fetches relevant source code given a natural language query. Since both tasks aim to model the association between natural language and programming language, recent studies have combined these two tasks to improve their performance. However, researchers have not yet been able to effectively leverage the intrinsic connection between the two tasks, as they train these tasks in a separate or pipeline manner…
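
The abstract describes training code summarization and code retrieval jointly so that both tasks share what they learn about the code/language association. As a rough illustration only (not the authors' CO3 model), the sketch below trains a shared code encoder under two objectives at once: a cross-entropy summarization loss and an in-batch contrastive retrieval loss. All module names, sizes, and the equal 1:1 loss weighting are assumptions made for the example.

```python
# Minimal multi-task sketch: one code encoder serves both summarization and
# retrieval, so gradients from both tasks shape the same representation.
# Illustrative assumption only; this is not the paper's CO3 implementation.
import torch
import torch.nn as nn
import torch.nn.functional as F

VOCAB = 1000   # assumed shared token vocabulary size
DIM = 128      # assumed hidden size


class SharedEncoder(nn.Module):
    """Encodes a token sequence (code or natural language) into one vector."""
    def __init__(self):
        super().__init__()
        self.embed = nn.Embedding(VOCAB, DIM)
        self.rnn = nn.GRU(DIM, DIM, batch_first=True)

    def forward(self, tokens):                 # tokens: (B, T)
        _, h = self.rnn(self.embed(tokens))    # h: (1, B, DIM)
        return h.squeeze(0)                    # (B, DIM)


class JointModel(nn.Module):
    def __init__(self):
        super().__init__()
        self.code_enc = SharedEncoder()        # shared across both tasks
        self.query_enc = SharedEncoder()
        # Summarization head: decode summary tokens from the code state.
        self.dec_embed = nn.Embedding(VOCAB, DIM)
        self.dec_rnn = nn.GRU(DIM, DIM, batch_first=True)
        self.out = nn.Linear(DIM, VOCAB)

    def summarize_loss(self, code, summary):
        h = self.code_enc(code).unsqueeze(0)           # init decoder with code state
        dec_out, _ = self.dec_rnn(self.dec_embed(summary[:, :-1]), h)
        logits = self.out(dec_out)                     # (B, T-1, VOCAB)
        return F.cross_entropy(logits.reshape(-1, VOCAB),
                               summary[:, 1:].reshape(-1))

    def retrieval_loss(self, code, query):
        # In-batch contrastive loss: the i-th query should match the i-th code.
        c = F.normalize(self.code_enc(code), dim=-1)
        q = F.normalize(self.query_enc(query), dim=-1)
        sim = q @ c.t() / 0.07                          # (B, B) similarity matrix
        target = torch.arange(code.size(0))
        return F.cross_entropy(sim, target)


model = JointModel()
opt = torch.optim.Adam(model.parameters(), lr=1e-3)

# Toy batch of token ids standing in for code snippets, summaries, and queries.
code = torch.randint(0, VOCAB, (4, 20))
summary = torch.randint(0, VOCAB, (4, 10))
query = torch.randint(0, VOCAB, (4, 8))

# Joint objective: both tasks update the shared code encoder. The 1:1
# weighting is an assumption; the paper studies how the tasks should interact.
loss = model.summarize_loss(code, summary) + model.retrieval_loss(code, query)
opt.zero_grad()
loss.backward()
opt.step()
print(float(loss))
```

The point of the sketch is that neither task is trained in isolation or as a pipeline: a single backward pass carries signal from both objectives into the shared encoder.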

Cited by 68 publications (43 citation statements)
References 27 publications
“…Hu et al. (2018b) introduced API knowledge from related tasks, while Cai et al. (2020) introduced type information to assist training, which also yielded promising results. Additionally, reinforcement learning (Wan et al., 2018) and dual learning (Ye et al., 2020) have also been shown to be effective in boosting model performance.…”
Section: Related Work (mentioning)
Confidence: 99%
“…Their proposed framework BVAE has two Variational AutoEncoders (VAEs): C-VAE for source code and L-VAE for natural language. Ye et al. [44] exploited the probabilistic correlation between the code comment generation task and the code generation task via dual learning. Wei et al. [22] also utilized the correlation between the code comment generation task and the code generation task and proposed a dual training framework.…”
Section: Related Work (mentioning)
Confidence: 99%
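
The BVAE framework quoted above pairs a C-VAE over source code with an L-VAE over natural language. Purely as an illustration of what each such component is, the sketch below defines a generic sequence VAE and instantiates it once per modality; it is an assumption for this page and omits how BVAE actually couples the two latent spaces.

```python
# Generic sequence-VAE sketch: one instance per modality, echoing the quoted
# C-VAE / L-VAE split. Illustrative assumption, not the BVAE paper's design.
import torch
import torch.nn as nn
import torch.nn.functional as F

VOCAB, DIM, LATENT = 1000, 128, 32


class SeqVAE(nn.Module):
    def __init__(self):
        super().__init__()
        self.embed = nn.Embedding(VOCAB, DIM)
        self.enc = nn.GRU(DIM, DIM, batch_first=True)
        self.mu = nn.Linear(DIM, LATENT)
        self.logvar = nn.Linear(DIM, LATENT)
        self.z_to_h = nn.Linear(LATENT, DIM)
        self.dec = nn.GRU(DIM, DIM, batch_first=True)
        self.out = nn.Linear(DIM, VOCAB)

    def forward(self, tokens):                        # tokens: (B, T)
        _, h = self.enc(self.embed(tokens))
        h = h.squeeze(0)                              # (B, DIM)
        mu, logvar = self.mu(h), self.logvar(h)
        # Reparameterization trick: sample z while keeping gradients.
        z = mu + torch.randn_like(mu) * torch.exp(0.5 * logvar)
        h0 = torch.tanh(self.z_to_h(z)).unsqueeze(0)
        dec_out, _ = self.dec(self.embed(tokens[:, :-1]), h0)
        logits = self.out(dec_out)
        rec = F.cross_entropy(logits.reshape(-1, VOCAB), tokens[:, 1:].reshape(-1))
        kl = -0.5 * torch.mean(1 + logvar - mu.pow(2) - logvar.exp())
        return rec + kl                               # ELBO-style loss


# One VAE per modality, as in the quoted description.
c_vae = SeqVAE()   # "C-VAE": source code tokens
l_vae = SeqVAE()   # "L-VAE": natural language tokens
code = torch.randint(0, VOCAB, (4, 20))
text = torch.randint(0, VOCAB, (4, 12))
loss = c_vae(code) + l_vae(text)
loss.backward()
print(float(loss))
```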
“…Ye W. et al. [60] presented an end-to-end model named CO3 for code retrieval and code summarization. CO3 leverages code generation to better bridge programming language and natural language via dual learning and multi-task learning.…”
Section: Related Work Effect (mentioning)
Confidence: 99%