2023
DOI: 10.1145/3575798
CODEBench: A Neural Architecture and Hardware Accelerator Co-Design Framework

Abstract: Recently, automated co-design of machine learning (ML) models and accelerator architectures has attracted significant attention from both industry and academia. However, most co-design frameworks either explore a limited search space or employ suboptimal exploration techniques for simultaneously investigating design decisions for the ML model and the accelerator. Furthermore, training the ML model and simulating the accelerator's performance are computationally expensive. To address these limitations, this work …
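To make the co-design idea concrete, here is a minimal sketch of a joint model–accelerator search loop in Python. Every name, search-space entry, and proxy function below is an illustrative assumption, not CODEBench's actual interface or search space (which the full paper defines).

```python
import random

# Hypothetical joint design spaces (illustrative only; not CODEBench's
# actual search space).
MODEL_SPACE = {"depth": [8, 12, 16], "width": [64, 128, 256]}
ACCEL_SPACE = {"pe_array": [(8, 8), (16, 16)], "buffer_kb": [128, 256, 512]}

def sample(space):
    return {k: random.choice(v) for k, v in space.items()}

def proxy_accuracy(model):
    # Stand-in for expensive training; a co-design framework would use
    # a predictor or proxy metric instead of full training.
    return 0.7 + 0.01 * model["depth"] + 0.0001 * model["width"]

def proxy_latency(model, accel):
    # Stand-in for cycle-accurate simulation of the accelerator.
    macs = model["depth"] * model["width"] ** 2
    throughput = accel["pe_array"][0] * accel["pe_array"][1]
    return macs / throughput

def co_design_search(trials=100, latency_budget=1e4):
    best = None
    for _ in range(trials):
        model, accel = sample(MODEL_SPACE), sample(ACCEL_SPACE)
        if proxy_latency(model, accel) > latency_budget:
            continue  # prune designs that violate the hardware constraint
        acc = proxy_accuracy(model)
        if best is None or acc > best[0]:
            best = (acc, model, accel)
    return best

print(co_design_search())
```

The key point the abstract makes is that the model and accelerator decisions are sampled and evaluated jointly, with cheap proxies replacing full training and simulation.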

Cited by 8 publications (4 citation statements)
References 51 publications
“…The same authors further extended their work and created FBNetV2 [55], a new approach able to optimize both spatial (height and width) and channel dimensions of layers in CNNs while being more memory-efficient and incurring lower computational costs compared to its predecessor. Other methods worth mentioning here are ProxylessNAS [56] and FLASH [57]…”
Section: Related Work
confidence: 99%
“…The LP mode for AccelTran-Edge is also considered. Such a co-design approach [52] could also efficiently test different buffer sizes, along with the corresponding ratios that may be optimal for each transformer model. We defer this automated co-design method to future work…”
Section: Design Space Exploration
confidence: 99%
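The statement above envisions sweeping buffer sizes and the corresponding split ratios as a co-design step. Below is a minimal sketch of such a sweep, where simulate_latency is a hypothetical stand-in for a cycle-accurate simulator; it is not AccelTran's actual tooling or cost model.

```python
from itertools import product

def simulate_latency(buffer_kb, act_ratio):
    # Toy stand-in for a cycle-accurate simulator: penalize buffers that
    # are too small and splits that starve activations or weights.
    act_kb = buffer_kb * act_ratio
    wgt_kb = buffer_kb * (1 - act_ratio)
    return 1e6 / min(act_kb, wgt_kb) + buffer_kb  # latency + area proxy

# Exhaustive sweep over (total buffer size, activation share of buffer);
# an automated co-design method would search this jointly with the model.
configs = product([256, 512, 1024, 2048], [0.25, 0.5, 0.75])
best = min(configs, key=lambda c: simulate_latency(*c))
print("best (buffer_kb, act_ratio):", best)
```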
“…The second line of work uses BO to speed up the design of deep neural network (DNN) accelerators [240], [246]–[249]. DNNs have gained significant attention due to their successes in various areas at the expense of high computation and memory cost.…”
Section: VAE Converts C Variables To D [99]
confidence: 99%
“…The design of a DNN accelerator typically involves hardware (e.g., custom circuits for matrix multiplication) and software (e.g., efficient data-parallel processing algorithm) co-design. The co-design problem is framed as an optimization in the joint space of hardware architectures and software components, and BO has been demonstrated to achieve satisfactory results in this context [240], [246]–[249].…”
Section: VAE Converts C Variables To D [99]
confidence: 99%
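To illustrate what Bayesian optimization over such a joint hardware/software space looks like, here is a hedged sketch using scikit-optimize's gp_minimize. The design space and the toy cost function are assumptions made for illustration; they are not the formulations used in [240], [246]–[249].

```python
from skopt import gp_minimize
from skopt.space import Categorical, Integer

# Joint space of software (model) and hardware (accelerator) choices.
# All names and ranges are illustrative assumptions.
space = [
    Integer(8, 32, name="layers"),               # DNN depth
    Categorical([64, 128, 256], name="width"),   # layer width
    Categorical([64, 128, 256], name="pes"),     # processing elements
    Categorical([128, 256, 512], name="buffer_kb"),
]

def cost(params):
    layers, width, pes, buffer_kb = params
    # Toy surrogate combining accuracy loss and latency; a real co-design
    # flow would train/estimate the DNN and simulate the accelerator here.
    err = 1.0 / (layers * width) ** 0.5
    latency = layers * width**2 / (pes * min(buffer_kb, width))
    return err + 1e-4 * latency

result = gp_minimize(cost, space, n_calls=25, random_state=0)
print("best design:", result.x, "cost:", result.fun)
```

BO fits a probabilistic surrogate over evaluated designs and picks the next configuration to simulate by acquisition value, which is why it needs far fewer expensive evaluations than exhaustive search in this joint space.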