2022
DOI: 10.1109/tnnls.2021.3071762
A Survey of Deep Learning on CPUs: Opportunities and Co-Optimizations

Cited by 39 publications (11 citation statements)
References 73 publications

“…In academia, while many purpose-built accelerators [6,9,24,46] have been proposed, including recent efforts targeting sparsity [18], they pay insufficient attention to software integration or rapid workload evolution, leading to quick obsolescence, as alluded to by [26]. Mittal et al. [39] present a survey of deep learning on CPUs, where the software lift is somewhat ameliorated. Cambricon [38] is of interest, since it exposes linear algebra concepts as architecturally supported primitives, similar to how RED exposes new primitives at the ISA level, but it lacks a performance contract between the implementation and the programmer.…”
Section: Related Work
confidence: 99%
“…6), it seems that SGD is not the only tool that can help avoid saddle points. Uncertainty in GPUs may also act as a tool for avoiding saddle points, although this has gone unnoticed until now because neural networks are rarely studied on CPUs (Mittal et al., 2021). Training was performed under condition 3 (cuDNN = on; initial value = fixed) using a CPU (i7 6700K) or a GPU (Titan RTX).…”
Section: Uncertainty and Deep Neural Network
confidence: 99%
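The run-to-run GPU nondeterminism this excerpt alludes to is easy to observe directly. Below is a minimal sketch, assuming a PyTorch setup; the cited study does not publish its code, so the `train_once` helper, the tiny CNN, the seed, and the step count are all illustrative choices, not the authors' experiment. With a fixed seed, repeated CPU runs of the same loop produce bitwise-identical losses, while GPU runs with cuDNN enabled may diverge slightly because some cuDNN kernels use nondeterministic parallel reductions.

```python
# Hypothetical illustration (not code from the cited study): compare
# run-to-run reproducibility of a fixed-seed training loop on CPU vs. GPU.
import torch
import torch.nn as nn
import torch.nn.functional as F

def train_once(device: str, seed: int = 0, steps: int = 20) -> torch.Tensor:
    """Train a tiny CNN from a fixed initial state and return the final loss."""
    torch.manual_seed(seed)                # fixed initial values (seeds CPU and CUDA RNGs)
    torch.backends.cudnn.enabled = True    # the excerpt's "cuDNN = on" condition
    model = nn.Sequential(
        nn.Conv2d(1, 8, kernel_size=3, padding=1), nn.ReLU(),
        nn.Flatten(), nn.Linear(8 * 28 * 28, 10),
    ).to(device)
    opt = torch.optim.SGD(model.parameters(), lr=0.1)
    x = torch.randn(32, 1, 28, 28, device=device)   # synthetic batch
    y = torch.randint(0, 10, (32,), device=device)
    loss = torch.zeros(())
    for _ in range(steps):
        opt.zero_grad()
        loss = F.cross_entropy(model(x), y)
        loss.backward()                    # conv backward may use nondeterministic cuDNN kernels
        opt.step()
    return loss.detach().cpu()

print("CPU runs identical:", torch.equal(train_once("cpu"), train_once("cpu")))
if torch.cuda.is_available():
    print("GPU runs identical:", torch.equal(train_once("cuda"), train_once("cuda")))
```

On CPU the two losses match exactly; on a GPU they often differ in the low-order bits, which is the run-to-run "uncertainty" the excerpt attributes to GPU training.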
“…Although GPUs are the main hardware platform in deep learning, many factors motivate running CNNs on resource-constrained systems, including mobile devices (computational and energy constraints) and CPU-based servers (computational constraints relative to popular GPUs) [18]. In mobile computing systems, CPUs may perform better than GPUs in terms of performance and power consumption.…”
Section: Introduction
confidence: 99%