2022
DOI: 10.48550/arXiv.2207.11680
Preprint

No More Fine-Tuning? An Experimental Evaluation of Prompt Tuning in Code Intelligence

Chaozheng Wang, Yuanhang Yang, Cuiyun Gao, et al.

Abstract: Pre-trained models have been shown to be effective in many code intelligence tasks. These models are pre-trained on a large-scale unlabeled corpus and then fine-tuned on downstream tasks. However, because the inputs to pre-training and to downstream tasks take different forms, it is hard to fully exploit the knowledge of pre-trained models. Besides, the performance of fine-tuning strongly relies on the amount of downstream data, while in practice scenarios with scarce data are common. Recent studies in the natural lang…
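As background on the technique the paper evaluates, below is a minimal illustrative sketch of hard prompt tuning for a binary code classification task (e.g., defect detection): the downstream input is wrapped in a cloze-style template so that it matches the masked-language-model pre-training objective, and a verbalizer maps label words to classes. The checkpoint name (microsoft/codebert-base-mlm), the template, and the label words here are assumptions for illustration only, not the paper's exact setup.

```python
# Illustrative sketch of hard prompt tuning for code classification.
# Model name, template, and verbalizer are assumptions, not the paper's setup.
import torch
from transformers import AutoTokenizer, AutoModelForMaskedLM

MODEL = "microsoft/codebert-base-mlm"  # assumed RoBERTa-style MLM checkpoint
tokenizer = AutoTokenizer.from_pretrained(MODEL)
model = AutoModelForMaskedLM.from_pretrained(MODEL)
model.eval()

def classify_with_prompt(code_snippet: str) -> str:
    # Wrap the downstream input in a cloze-style template so it matches
    # the masked-language-model pre-training objective.
    prompt = f"{code_snippet} The code is {tokenizer.mask_token} ."
    inputs = tokenizer(prompt, return_tensors="pt", truncation=True, max_length=512)

    with torch.no_grad():
        logits = model(**inputs).logits  # shape: [1, seq_len, vocab_size]

    # Locate the masked position and score only the verbalizer words.
    mask_pos = (inputs["input_ids"][0] == tokenizer.mask_token_id).nonzero(as_tuple=True)[0]
    mask_logits = logits[0, mask_pos, :].squeeze(0)

    # Verbalizer: map label words to class labels (first sub-token only, for simplicity).
    verbalizer = {"defective": " buggy", "clean": " correct"}
    scores = {}
    for label, word in verbalizer.items():
        word_id = tokenizer(word, add_special_tokens=False)["input_ids"][0]
        scores[label] = mask_logits[word_id].item()
    return max(scores, key=scores.get)

print(classify_with_prompt("int div(int a, int b) { return a / b; }"))
```

In contrast to fine-tuning, which trains a new task-specific head on top of the encoder, this formulation reuses the pre-trained MLM head, which is one reason prompt tuning is reported to help when downstream data are scarce.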

Cited by 1 publication (1 citation statement)
References: 44 publications
“…Schuster et al. [38] showed that code completion models are susceptible to poisoning attacks that add carefully-crafted files to a model's training data. Several empirical studies have investigated the performance of programming language models [12, 28, 47, 53]. For example, Zeng et al. [53] suggest that developing an almighty pre-trained code model across task types is challenging, and more rigorous evaluations are required.…”
Section: Robustness of NMT Models for Source Code
confidence: 99%