Proceedings of the 2020 Great Lakes Symposium on VLSI (GLSVLSI 2020)
DOI: 10.1145/3386263.3407649
A Review of In-Memory Computing Architectures for Machine Learning Applications

Cited by 48 publications (10 citation statements) | References 15 publications
“…‘Create ML’ uses transfer learning [57], capable of applying an existing model (trained with a dataset relevant to one problem) to a completely new problem. The macOS operating system already has extensive machine learning models [57, 62] that were created by Apple. ‘Create ML’ uses their patterns to perform a new training using previously extracted features.…”
Section: Methods (mentioning)
confidence: 99%
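The transfer-learning pattern this statement describes can be sketched generically: a pretrained backbone is frozen and used only as a feature extractor, and a small classifier head is trained on the extracted features. The Python below is an illustrative sketch, not Apple's actual Create ML API; the random-projection "backbone", the synthetic data, and all parameter values are assumptions made for the example.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)

# Frozen "backbone": a fixed random projection standing in for a
# pretrained feature extractor whose weights are never updated.
PROJECTION = rng.standard_normal((32 * 32, 64))

def extract_features(x):
    """Embed raw inputs with the frozen backbone."""
    return np.tanh(x @ PROJECTION)

# Synthetic data for the "new problem" (hypothetical, for illustration).
X_raw = rng.standard_normal((200, 32 * 32))
y = (X_raw[:, 0] > 0).astype(int)

# Transfer-learning step: only the small classifier head is trained,
# and only on features produced by the frozen backbone.
head = LogisticRegression(max_iter=1000)
head.fit(extract_features(X_raw), y)
print("train accuracy:", head.score(extract_features(X_raw), y))
```

Training only the head is what makes the approach cheap enough for on-device use: the expensive representation learning was already paid for when the backbone was pretrained.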
“…Mandal et al. [45] introduce a custom network-on-chip and scheduling method, which reduces the communication latency by 20%-80%. More detailed surveys of the application of IMC to deep learning can be found in [49] and [4]. The successful application of NMC and IMC in deep learning suggests that it will also be useful for deep reinforcement learning in the future.…”
Section: Near- and In-Memory Computing (mentioning)
confidence: 99%
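To make the latency claim concrete, a first-order model of on-chip communication helps: under XY routing on a 2D mesh, transfer time is roughly router delay times hop count plus serialization on the link. The sketch below is only that generic model with made-up parameter values; it does not reproduce the custom interconnect or scheduler of Mandal et al. [45].

```python
# First-order latency model for moving data between two tiles of a
# 2D-mesh NoC under XY routing. All parameter values are illustrative
# assumptions, not figures from the cited work.

def mesh_hops(src, dst):
    """Manhattan hop count between (row, col) tile coordinates."""
    return abs(src[0] - dst[0]) + abs(src[1] - dst[1])

def transfer_latency_cycles(num_bytes, src, dst,
                            link_bytes_per_cycle=16, router_delay=3):
    """Per-hop router delay plus serialization of the payload on the link."""
    hops = mesh_hops(src, dst)
    serialization = num_bytes // link_bytes_per_cycle
    return hops * router_delay + serialization

# Example: 64 KiB of activations moved four hops across the mesh.
print(transfer_latency_cycles(64 * 1024, src=(0, 0), dst=(2, 2)))
```

In such a model, a smarter topology or schedule reduces either the hop count or the contention behind each link, which is where reductions of the kind reported above would come from.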
“…With an in-memory computation system, the bottleneck and extra power barrier to achieving high bandwidth data transfer between the external memory chip and the processor are significantly minimized using the non-von Neumann architecture [9]. The application of non-volatile memory device technologies such as resistive-switching random access memory (RRAM), phase-change memory (PCM), magnetic random-access memory (MRAM), and ferroelectric random-access memory (FeRAM) are studied for in-memory applications [10]. Here, we intend to study further the application of lower power oxygen vacancy-based RRAM for in-memory circuits used for edge-based training to build AI models for the autonomous system.…”
Section: Introduction (mentioning)
confidence: 99%
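The core operation these RRAM-based systems perform in place is an analog matrix-vector multiply: weights are stored as cell conductances, the input is applied as row voltages, and each column current sums the products by Ohm's law and Kirchhoff's current law. The numerical sketch below illustrates only that principle; the conductance range and mapping are assumptions, not device data from the cited works.

```python
import numpy as np

# Sketch of the analog matrix-vector multiply an RRAM crossbar performs
# in place: weights stored as conductances G, inputs applied as row
# voltages v, column currents read out as dot products.
# Device values below are illustrative assumptions.

rng = np.random.default_rng(1)

G_ON, G_OFF = 1e-4, 1e-6                  # assumed on/off conductances (S)
weights = rng.uniform(0.0, 1.0, (8, 4))   # normalized weights in [0, 1]
G = G_OFF + weights * (G_ON - G_OFF)      # weight -> conductance mapping

v = rng.uniform(0.0, 0.2, 8)              # row voltages encoding the input
i_col = G.T @ v                           # column currents: the analog MVM

# Peripheral readout: undo the conductance mapping to recover w^T v.
recovered = (i_col - G_OFF * v.sum()) / (G_ON - G_OFF)
print(np.allclose(recovered, weights.T @ v))  # True: same dot products
```

Because the multiply-accumulate happens inside the memory array itself, the weight matrix never crosses the memory-processor interface, which is precisely the data-transfer bottleneck the statement above describes.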