2021
DOI: 10.48550/arxiv.2105.08645
Preprint

CoTexT: Multi-task Learning with Code-Text Transformer

Cited by 9 publications (17 citation statements)
References 11 publications
“…• RQ3: How effective is CDCS applied on other pre-trained programming language models? Besides CodeBERT, there are other pre-trained models that also achieve outstanding results in software engineering tasks [1,22,24]. We wonder whether other pre-trained models can have the same effectiveness on code search when equipped with meta learning.…”
Section: Methods (mentioning)
confidence: 99%
“…In recent years, pre-trained language models for source code have received much attention [1,10,22,24]. CodeBERT [10], built on top of the popular model of BERT [8], is one of the earliest attempts that adapt pre-trained models for programming languages.…”
Section: Pre-trained Language Models for Code (mentioning)
confidence: 99%
“…More recent approaches to code documentation utilize pretrained transformers. Currently, CoTexT (Phan et al, 2021) outperforms CodeBERT, PLBART, and ProphetNet-X (Qi et al, 2021) in this task. The CodeSearchNet corpus is further described in §3.4 and the models are described later on in the code generation subsection §3.8.…”
Section: Code Documentation Generation and Summarization (mentioning)
confidence: 99%
“…Other pretrained transformers used on source code include CodeT5 (Wang et al, 2021b), CodeTrans (Elnaggar et al, 2021), PyMT5 (Clement et al, 2020), CuBERT (Kanade et al, 2020), PLBART, ProphetNet-X (Qi et al, 2021), CoTexT (Phan et al, 2021), T5-Code (Mastropaolo et al, 2021), GraphCodeBERT, and AlphaCode (Li et al, 2022). Pretrained GPT-style models for source code generation include CodeGPT, and GPT-Codex (Chen et al, 2021a).…”
Section: Pretrained Transformer Models (mentioning)
confidence: 99%