2022
DOI: 10.48550/arxiv.2207.10397
Preprint

CodeT: Code Generation with Generated Tests

Abstract: Given a programming problem, pre-trained language models such as Codex have demonstrated the ability to generate multiple different code solutions via sampling. However, selecting a correct or best solution from those samples remains a challenge. While an easy way to verify the correctness of a code solution is to execute test cases, producing high-quality test cases is prohibitively expensive. In this paper, we explore the use of pre-trained language models to automatically generate test cases, c…
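The mechanism the truncated abstract points to, together with the title, is to rank sampled solutions by how they agree with model-generated tests. Below is a minimal sketch of that selection step, assuming candidate solutions and tests arrive as plain Python source strings; the helper names, the scoring shown, and the unsandboxed exec are illustrative simplifications, not the paper's implementation:

```python
from collections import defaultdict

def run_test(solution_code: str, test_code: str) -> bool:
    # Execute one generated assert against one candidate solution.
    # WARNING: exec on untrusted model output is unsafe; a real system
    # would sandbox this step. Illustrative sketch only.
    env = {}
    try:
        exec(solution_code, env)  # define the candidate function
        exec(test_code, env)      # run the generated assert against it
        return True
    except Exception:
        return False

def select_solution(solutions, tests):
    # Group solutions by the exact set of tests they pass, then score
    # each group by (# solutions in group) * (# tests the group passes),
    # so a solution is preferred when many samples and many tests agree.
    groups = defaultdict(list)
    for sol in solutions:
        passed = frozenset(i for i, t in enumerate(tests) if run_test(sol, t))
        groups[passed].append(sol)
    _, best = max(groups.items(), key=lambda kv: len(kv[1]) * len(kv[0]))
    return best[0]  # any member of the highest-scoring group

solutions = [
    "def add(a, b):\n    return a + b",
    "def add(a, b):\n    return a - b",
]
tests = ["assert add(1, 2) == 3", "assert add(0, 0) == 0"]
print(select_solution(solutions, tests))  # selects the correct implementation
```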

Cited by 15 publications (13 citation statements) | References 19 publications
“…However, in reality, human-specified test cases are not always available. Recently, Chen et al. (2022) observe that a Transformer pre-trained on code generation can also generate useful test cases by adding an assert keyword at the end of the prompt. We follow the prompt design in Chen et al. (2022) to automatically generate test cases and run our PG-TD algorithm using the automatically generated test cases.…”
Section: Effectiveness of Caching
Confidence: 99%
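The prompt design referenced in this statement is simple to reproduce: the problem's signature and docstring are followed by an assert line that the model is left to complete. A hedged sketch — the comment wording and the `complete` API are assumptions for illustration, not the cited paper's verbatim prompt or a real library call:

```python
def build_test_prompt(signature: str, docstring: str, func_name: str) -> str:
    # End the prompt with `assert <func_name>(` so that a code LM
    # completes it into a test case, per the design in Chen et al. (2022).
    return (
        f"{signature}\n"
        f'    """{docstring}"""\n'
        "    pass\n"
        "\n"
        f"# check the correctness of {func_name}\n"
        f"assert {func_name}("
    )

prompt = build_test_prompt("def add(a, b):", "Return the sum of a and b.", "add")
# A Codex-style model continues the final line into a full test such as
# "assert add(1, 2) == 3"; `complete(prompt)` here is a hypothetical wrapper:
# test_case = "assert add(" + complete(prompt)
```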
“…Recently, Chen et al. (2022) observe that a Transformer pre-trained on code generation can also generate useful test cases by adding an assert keyword at the end of the prompt. We follow the prompt design in Chen et al. (2022) to automatically generate test cases and run our PG-TD algorithm using the automatically generated test cases. Empirically, we confirm that, compared with beam search, PG-TD still achieves higher strict accuracy when automatically generated test cases are used to verify the generated programs.…”
Section: Effectiveness of Caching
Confidence: 99%
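"Strict accuracy" here counts a program as correct only if it passes every test case. A small sketch of that verification step, again using unsandboxed exec purely for illustration:

```python
def passes_all(program: str, tests: list[str]) -> bool:
    # A program is strictly correct only if every test passes.
    env = {}
    try:
        exec(program, env)  # unsandboxed exec: sketch only
        for t in tests:
            exec(t, env)    # a failing assert raises AssertionError
        return True
    except Exception:
        return False

def strict_accuracy(programs: list[str], tests: list[str]) -> float:
    # Fraction of generated programs that pass every generated test.
    return sum(passes_all(p, tests) for p in programs) / len(programs)
```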
“…In-context learning is a novel paradigm that conditions the model on task descriptions and demonstrations to generate answers for the same tasks [25]. It has been applied to various domains, including testing [55], code generation [56], and GUI automation [57]. These works use a coarse-grained, direct-inquiry style of prompt design.…”
Section: Related Work
Confidence: 99%
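For context, the in-context-learning setup this statement describes amounts to concatenating a task description, a few input/output demonstrations, and the new query into a single prompt. A minimal sketch, with invented demonstration content:

```python
def build_icl_prompt(task_description: str,
                     demonstrations: list[tuple[str, str]],
                     query: str) -> str:
    # Condition the model on a task description plus worked
    # demonstrations, then append the new input for it to answer.
    parts = [task_description, ""]
    for inp, out in demonstrations:
        parts += [f"Input: {inp}", f"Output: {out}", ""]
    parts += [f"Input: {query}", "Output:"]
    return "\n".join(parts)

prompt = build_icl_prompt(
    "Convert the temperature from Celsius to Fahrenheit.",
    [("0", "32"), ("100", "212")],
    "37",
)
print(prompt)  # the model's completion after "Output:" is the answer
```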