2020 25th Asia and South Pacific Design Automation Conference (ASP-DAC)
DOI: 10.1109/asp-dac47756.2020.9045559
DRiLLS: Deep Reinforcement Learning for Logic Synthesis

Cited by 77 publications (61 citation statements). References 9 publications.
“…The authors could generate the best designs for three large-scale circuits, beating state-of-the-art logic synthesis tools. In [16], a deep reinforcement learning approach for exact logic synthesis is presented. The authors used the A2C reinforcement learning algorithm to determine the order in which optimization commands (chosen from a few candidates) are applied to a given circuit to achieve better QoR.…”
Section: Related Work
confidence: 99%
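The idea described in this excerpt — an agent learning the order in which to apply optimization commands — can be illustrated with a toy tabular advantage actor-critic (A2C) loop. The command names, area multipliers, horizon, and reward below are hypothetical stand-ins for illustration only, not DRiLLS' actual state space, action set, or reward function:

```python
import math
import random

random.seed(0)

# Hypothetical toy model: three ABC-style passes, each scaling circuit area
# by a fixed multiplier. Not the actual commands or cost model from the paper.
COMMANDS = ["rewrite", "refactor", "balance"]
EFFECT = {"rewrite": 0.90, "refactor": 0.95, "balance": 1.02}
HORIZON = 5          # fixed-length synthesis flow
ALPHA_PI = 0.05      # actor learning rate
ALPHA_V = 0.05       # critic learning rate
GAMMA = 0.99         # discount factor

def softmax(prefs):
    m = max(prefs)
    exps = [math.exp(p - m) for p in prefs]
    total = sum(exps)
    return [e / total for e in exps]

# Tabular actor (action preferences per step) and critic (value per step).
actor = [[0.0] * len(COMMANDS) for _ in range(HORIZON)]
critic = [0.0] * (HORIZON + 1)   # critic[HORIZON] = 0 (terminal state)

def train_episode():
    area = 100.0
    for t in range(HORIZON):
        probs = softmax(actor[t])
        a = random.choices(range(len(COMMANDS)), weights=probs)[0]
        new_area = area * EFFECT[COMMANDS[a]]
        reward = area - new_area                       # QoR gain this step
        advantage = reward + GAMMA * critic[t + 1] - critic[t]
        critic[t] += ALPHA_V * advantage               # critic (TD) update
        for b in range(len(COMMANDS)):                 # actor policy-gradient update
            grad = (1.0 if b == a else 0.0) - probs[b]
            actor[t][b] += ALPHA_PI * advantage * grad
        area = new_area

for _ in range(5000):
    train_episode()

# Greedy rollout with the learned policy: the agent should have discovered
# that the area-reducing command dominates at every step.
greedy_area = 100.0
for t in range(HORIZON):
    a = max(range(len(COMMANDS)), key=lambda i: actor[t][i])
    greedy_area *= EFFECT[COMMANDS[a]]
print(round(greedy_area, 2))
```

In this toy setup the per-step rewards telescope to the total area reduction, so the optimal policy is simply the most area-reducing pass at every step; a real circuit's QoR landscape is far less separable, which is what motivates the learned approach.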
“…The authors used the A2C reinforcement learning algorithm to determine the order in which optimization commands (chosen from a few candidates) are applied to a given circuit to achieve better QoR. Similar to [15], the goal in [16] is to remove human guidance and expertise from the logic synthesis process. In this paper, we present Deep-PowerX, which provides low-power, area-efficient approximate logic solutions while leveraging state-of-the-art deep learning algorithms to deliver significant improvements in QoR (power, area, delay, and runtime).…”
Section: Related Work
confidence: 99%
“…Last but not least, while the initialization of design space exploration is important for final convergence, it is difficult to initialize the search effectively for unseen designs. For example, various machine learning (ML) techniques have been used to automatically configure industrial FPGA toolflows [6,9,10,14,15] and ASIC toolflows [4,5,7,12]. These works focus on end-to-end tool parameter space exploration, guided by ML models trained on either offline [7] or online datasets [6,9].…”
Section: Introduction
confidence: 99%
“…These works focus on end-to-end tool parameter space exploration, guided by ML models trained on either offline [7] or online datasets [6,9]. Moreover, exploring the sequence of synthesis transformations (also called the synthesis flow) in EDA has been studied in an iterative training-exploration fashion through convolutional neural networks (CNNs) [4] and reinforcement learning [5]. While design quality is very sensitive to the sequence of transformations [4], these approaches are able to learn a sequential decision-making strategy that achieves better quality of results [4,5].…”
Section: Introduction
confidence: 99%
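The sensitivity of design quality to the order of transformations mentioned in this excerpt can be shown with a small exhaustive search over toy synthesis flows. The pass names and the order-dependent cost model below are invented for illustration; real synthesis tools expose far larger, less predictable flow spaces:

```python
from itertools import permutations

# Hypothetical pass names; the multipliers below are a made-up cost model
# in which a pass's effectiveness depends on which passes ran before it.
PASSES = ("rewrite", "refactor", "balance")

def run_flow(flow):
    """Toy QoR model: return final area after applying passes in order."""
    area, seen = 100.0, set()
    for p in flow:
        if p == "rewrite":
            area *= 0.92
        elif p == "refactor":
            # refactor is far more effective after rewrite has run
            area *= 0.88 if "rewrite" in seen else 0.97
        elif p == "balance":
            # balance only helps once refactor has restructured the netlist
            area *= 0.95 if "refactor" in seen else 1.00
        seen.add(p)
    return area

# Exhaustively score every ordering of the three passes.
scored = sorted(permutations(PASSES), key=run_flow)
best, worst = scored[0], scored[-1]
print(best, round(run_flow(best), 2))
print(worst, round(run_flow(worst), 2))
```

Even in this three-pass toy, the best and worst orderings of the *same* passes yield different final areas, which is why the cited works treat flow exploration as a sequential decision problem rather than a parameter sweep.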