2020
DOI: 10.1145/3428295

Just-in-time learning for bottom-up enumerative synthesis

Abstract: A key challenge in program synthesis is the astronomical size of the search space the synthesizer has to explore. In response to this challenge, recent work proposed to guide synthesis using learned probabilistic models. Obtaining such a model, however, might be infeasible for a problem domain where no high-quality training data is available. In this work we introduce an alternative approach to guided program synthesis: instead of training a model ahead of time, we show how to bootstrap one just in time, during…



Cited by 31 publications (25 citation statements)
References 52 publications
“…To provide a fair comparison across all methods, each test split is run using a single core and a single CPU thread with a timeout of 5 s. To account for variability across machines, we chose to run a test split on a machine chosen randomly from a collection of 7 machines of similar configuration (Google Cloud instances with 120 GB RAM each). We report standard error across the 30 test runs.…”
Section: Results (mentioning; confidence: 99%)
“…STUN [1] extends the CEGIS [28] approach by providing domain-specific explicit unification operators for combining partial solutions. Recently, BESTER [24] and later PROBE [4] perform bottom-up enumeration of programs in a loop by enumerating all programs that satisfy IO examples partially. This is followed by heuristics-based selection of promising programs.…”
Section: Related Work (mentioning; confidence: 99%)
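The bottom-up enumeration loop described in the quote above, where all programs that partially satisfy the IO examples are recorded as candidates, can be illustrated with a minimal sketch. This is not the PROBE or BESTER implementation; the tiny string-concatenation grammar, the names, and the size-based enumeration order are assumptions for illustration, and standard optimizations such as observational-equivalence pruning are omitted.

```python
# Minimal sketch (hypothetical grammar, not PROBE's implementation):
# enumerate programs bottom-up by size, check each against the IO
# examples, and record programs that satisfy some but not all examples
# as partial solutions, as in the loop described above.

def bottom_up_synthesize(examples, max_size=4):
    """examples: list of (input_string, expected_output) pairs.
    Returns the expression string of the first full solution, or None."""
    literals = ["-", " "]
    # Size-1 programs: the input variable x and string literals.
    programs = {1: [("x", lambda x: x)]
                   + [(repr(l), (lambda l: lambda x: l)(l)) for l in literals]}
    partial_solutions = []  # (expr, number of examples matched)

    for size in range(2, max_size + 1):
        programs[size] = []
        # Build size-n programs by combining smaller ones with concat.
        for lsize in range(1, size):
            rsize = size - lsize
            for (le, lf) in programs.get(lsize, []):
                for (re, rf) in programs.get(rsize, []):
                    expr = f"concat({le}, {re})"
                    fn = (lambda lf, rf: lambda x: lf(x) + rf(x))(lf, rf)
                    matched = sum(fn(i) == o for i, o in examples)
                    if matched == len(examples):
                        return expr  # full solution found
                    if matched > 0:
                        # Partially correct: kept as guidance material.
                        partial_solutions.append((expr, matched))
                    programs[size].append((expr, fn))
    return None
```

For example, given the duplication task `[("ab", "ab-ab"), ("c", "c-c")]`, the sketch finds `concat(x, concat('-', x))` at size 3. In PROBE, the partial solutions collected along the way are what drive the just-in-time learning; here they are merely recorded.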
“…CONCORD uses deduction to constrain the search space and to guide the search, synthesising problems over lists using a functional programming language. Probe [29] (2020) is a synthesizer that trains a probabilistic model by learning from partial solutions encountered along the way, instead of training ahead of time. The authors also implement a new program enumeration algorithm, which extends the efficient bottom-up search approach.…”
Section: Related Work (mentioning; confidence: 99%)
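The "just-in-time" learning step described above, training a probabilistic model from partial solutions encountered during the search, can be sketched as a simple reweighting of grammar productions. This is a hypothetical update rule for illustration, not PROBE's actual update; the rule names, the blending parameter `alpha`, and the `fit` measure are all assumptions.

```python
# Hypothetical sketch of just-in-time weight updates: when a partial
# solution is found, the production rules it uses gain probability mass,
# biasing subsequent enumeration toward similar programs. Not PROBE's
# exact update rule.
from collections import Counter

def update_weights(weights, partial_solution_rules, fit, alpha=0.5):
    """Blend current rule weights toward the empirical rule frequencies
    of a partial solution. fit in (0, 1] is the fraction of IO examples
    the partial solution satisfies; better partial solutions pull harder."""
    used = Counter(partial_solution_rules)
    total = sum(used.values())
    new = {}
    for rule, w in weights.items():
        observed = used[rule] / total if total else 0.0
        new[rule] = (1 - alpha * fit) * w + alpha * fit * observed
    return new

# Uniform start over a toy grammar; a partial solution concat(x, '-')
# uses concat, x, and the dash literal once each.
weights = {"concat": 0.25, "x": 0.25, "lit_dash": 0.25, "lit_space": 0.25}
new_w = update_weights(weights, ["concat", "x", "lit_dash"], fit=0.5)
```

After the update, the unused `lit_space` rule loses mass while the rules appearing in the partial solution gain it, and the weights still sum to 1; in a cost-guided enumerator, cheaper (higher-weight) rules are then explored earlier.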
“…The authors also implement a new program enumeration algorithm, which extends the efficient bottom-up search approach. Probe has been evaluated on 140 SyGuS benchmark problems of string manipulation, bit-vector manipulation, and circuit transformation, outperforming state-of-the-art contemporary SyGuS synthesizers (such as EuPhony [18] and CVC4 [19]) in problems where larger training datasets are not available [20]; TerpreT, a model composed of a specification of a program representation (declarations of random variables) and an interpreter that describes how programs map inputs to outputs (a model connecting unknowns to observations) [21]; and DeepCoder, a program synthesis technique from input-output examples guided by a neural network model [22]. Mandal et al. [23] proposed a novel approach (named NetSyn) to learn the fitness function using neural networks for better guiding the process of searching for a solution.…”
Section: Related Work (mentioning; confidence: 99%)