2020
DOI: 10.1145/3418498

Core Placement Optimization for Multi-chip Many-core Neural Network Systems with Reinforcement Learning

Abstract: Multi-chip many-core neural network systems can provide high parallelism, benefiting from decentralized execution, and they can be scaled to very large systems at reasonable fabrication cost. As multi-chip many-core systems scale up, communication-latency-related effects account for a growing share of system performance. While previous work mainly focuses on core placement within a single chip, two principal issues remain unresolved: the communication-related problems caus…

Cited by 16 publications (12 citation statements)
References 72 publications

“…Targeting the heat interaction of processor cores and NoC routers, Lu et al. [147] apply Q-learning to assign tasks to specific cores based on the current temperatures of cores and routers, such that the maximum future temperature is minimized. Targeting the non-uniform and hierarchical on/off-chip communication capability of multi-chip many-core systems, core placement optimization [242] leverages the deep deterministic policy gradient (DDPG) [140] to map computation onto physical cores, and it can operate agnostically to domain-specific information.…”
Section: Resource
Confidence: 99%
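
To ground the first approach quoted above, here is a minimal tabular Q-learning sketch for temperature-aware task assignment in the spirit of Lu et al. [147]; a DDPG placement agent as in [242] would follow the same interaction loop with a continuous actor-critic in place of the Q-table. All constants and hooks (NUM_CORES, TEMP_LEVELS, simulate_step) are illustrative assumptions, not the cited implementation.

```python
import random
from collections import defaultdict

NUM_CORES = 16          # assumed mesh of 16 cores
TEMP_LEVELS = 4         # coarse discretization of per-core temperature
ALPHA, GAMMA, EPS = 0.1, 0.9, 0.1

Q = defaultdict(float)  # Q[(state, core)] -> expected return

def discretize(temps):
    """Map raw core temperatures (deg C) to a small discrete state."""
    return tuple(min(int(t // 25), TEMP_LEVELS - 1) for t in temps)

def choose_core(state):
    """Epsilon-greedy selection of the core for the next task."""
    if random.random() < EPS:
        return random.randrange(NUM_CORES)
    return max(range(NUM_CORES), key=lambda c: Q[(state, c)])

def update(state, core, reward, next_state):
    """One-step Q-learning backup."""
    best_next = max(Q[(next_state, c)] for c in range(NUM_CORES))
    Q[(state, core)] += ALPHA * (reward + GAMMA * best_next - Q[(state, core)])

def assign_task(temps, simulate_step):
    """Assign one task; the reward penalizes the hottest core afterwards,
    so the agent learns assignments that minimize future peak temperature."""
    state = discretize(temps)
    core = choose_core(state)
    next_temps = simulate_step(core)   # assumed thermal-simulator hook
    reward = -max(next_temps)
    update(state, core, reward, discretize(next_temps))
    return core
```
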
“…It has also been used for designing memory systems, such as prefetching [52] and memory controllers [53]. Additionally, it has been applied to DNN compilation and mapping optimization [54,55,56]. In this work, we use RL for the co-exploration of data and computation mapping in NMP systems.…”
Section: Reinforcement Learning (RL)
Confidence: 99%
“…To overcome the memory-wall problem, the decentralized many-core architecture has emerged in recent years for executing neural network workloads, offering massive processing parallelism, memory locality, and multi-chip scalability (Painkras et al., 2013; Akopyan et al., 2015; Han et al., 2016; Jouppi et al., 2017; Parashar et al., 2017; Shin et al., 2017; Davies et al., 2018; Chen et al., 2019; Pei et al., 2019; Shao et al., 2019; Deng et al., 2020; Zimmer et al., 2020). Each functional core contains independent computation and memory resources in close proximity, and cores communicate through a flexible routing fabric (Wu et al., 2020). Because the hardware resources in each core are limited, a large neural network model must be partitioned and mapped onto many cores during deployment.…”
Section: Introduction
Confidence: 99%
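
As a toy illustration of why partitioning is unavoidable under per-core memory limits, the sketch below computes how many equal slices a layer's weights need so that each slice fits in one core; the 144 KB budget and the fp16 layer size are assumed numbers, not figures from the cited chips.

```python
CORE_MEM_KB = 144   # assumed per-core SRAM budget, for illustration only

def num_slices_needed(weight_kb: float) -> int:
    """Smallest number of equal slices such that each fits in one core."""
    return -(-int(weight_kb) // CORE_MEM_KB)   # ceiling division

# Example: a 512x512 fully connected layer in fp16 holds 512*512*2 bytes
# = 512 KB of weights, exceeding one core, so it spans at least 4 cores.
fc_weights_kb = 512 * 512 * 2 / 1024
print(num_slices_needed(fc_weights_kb))        # -> 4
```
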
“…In the logical mapping stage, the requirements for computation and memory resources are important factors when allocating cores. The parameters and their associated computations are divided into small slices through tensor dimension partitioning, and each slice is allocated to a single core with limited hardware resources (Shao et al., 2019; Deng et al., 2020; Wu et al., 2020). For a convolutional layer, most previous work adopts a 2D partition that splits the input-channel (C_in) and output-channel (C_out) dimensions.…”
Section: Introduction
Confidence: 99%
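
The following is a minimal sketch of the 2D channel partition described above: a convolutional layer's weight tensor is split along the input-channel and output-channel dimensions into per-core slices. NumPy and the grid sizes are illustrative assumptions, not the cited systems' implementation.

```python
import numpy as np

def partition_conv_2d(weights, grid_cin, grid_cout):
    """Split conv weights [C_out, C_in, K, K] into grid_cout x grid_cin slices."""
    c_out, c_in = weights.shape[0], weights.shape[1]
    assert c_out % grid_cout == 0 and c_in % grid_cin == 0
    so, si = c_out // grid_cout, c_in // grid_cin
    slices = {}
    for i in range(grid_cout):
        for j in range(grid_cin):
            # Each slice holds a contiguous block of output and input channels
            # small enough to fit in a single core's local memory.
            slices[(i, j)] = weights[i*so:(i+1)*so, j*si:(j+1)*si]
    return slices

# Example: a 64x64x3x3 layer split over a 4x4 grid -> 16 slices of 16x16x3x3.
w = np.zeros((64, 64, 3, 3), dtype=np.float16)
parts = partition_conv_2d(w, grid_cin=4, grid_cout=4)
print(len(parts), parts[(0, 0)].shape)   # 16 (16, 16, 3, 3)
```
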