2020
DOI: 10.1126/sciadv.aay2378
One-step regression and classification with cross-point resistive memory arrays

One-Sentence Summary: Machine learning algorithms such as linear regression and logistic regression are trained in one step with cross-point resistive memory arrays.

Abstract: Machine learning has attracted considerable attention in recent years as a tool to process the big data generated by ubiquitous sensors in our daily life. High-speed, low-energy computing machines are in demand to enable real-time artificial intelligence at the edge, i.e., without the support of a remote frame server in the cloud. Such req…
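The "one-step" training named in the summary amounts, for linear regression, to solving the normal equations in a single operation rather than by iterative gradient descent; in the paper this is done in the analog domain by a crosspoint array with circuit feedback. A minimal digital sketch of the equivalent computation (data and variable names are illustrative, not from the paper):

```python
import numpy as np

# Hypothetical digital equivalent of the one-step analog solution:
# linear regression weights from the normal equations,
#   w = (X^T X)^{-1} X^T y,
# computed here with a single linear solve -- no training iterations.
rng = np.random.default_rng(0)
X = np.column_stack([np.ones(50), rng.uniform(0.0, 1.0, 50)])  # bias + one feature
true_w = np.array([2.0, -3.0])
y = X @ true_w                      # noiseless targets, for clarity
w = np.linalg.solve(X.T @ X, X.T @ y)  # one "step": closed-form fit
```

With noiseless targets the recovered weights match `true_w` exactly (up to floating-point precision); the analog array performs the same solve physically, trading precision for speed and energy.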


Cited by 82 publications (68 citation statements)
References 44 publications
“…Although the local resistive switching effect in memristive devices provides the unique compactness, fast and energy-efficient operation of passive memristive arrays (Xia and Yang, 2019), the active arrays integrated with peripheral and control electronics should always be a subject of explicit evaluation and benchmarking depending on the development/prototyping stage (Cai et al, 2019; Zhao et al, 2020). Recently, several reports on such benchmarking have shown potential advantages of memristive chips over conventional ones: 19.7 times, 6.5 times, and 2 orders of magnitude better energy efficiency compared to Google's tensor processing unit (TPU), a highly optimized application-specific integrated circuit (ASIC) system, and the state-of-the-art graphics-processing unit (GPU), respectively (Sun et al, 2020; Yao et al, 2020). The performance benchmark of a memristive neuromorphic computing system shows 110 times better energy efficiency and 30 times better performance density compared to the Tesla V100 GPU.…”
Section: CMOS Circuits: On-Chip Analog and Digital Systems
confidence: 99%
“…Solving the memory bottleneck requires a paradigm shift in architecture, where computation is executed in situ within the data by exploiting, e.g., the ability of memory arrays to implement matrix-vector multiplication (MVM) [10,11]. This novel architectural approach is referred to as in-memory computing, which provides the basis for several outstanding applications, such as pattern classification [12,13], analogue image processing [14], and the solution of linear systems [15,16] and of linear regression problems [17].…”
Section: Introduction
confidence: 99%
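The in-situ matrix-vector multiplication cited above maps onto a memory array through Ohm's and Kirchhoff's laws: each cell passes a current G[i,j]·V[j], and the currents summing along a line give I = G·V in a single read operation. A minimal sketch with illustrative conductance and voltage values (not taken from the paper):

```python
import numpy as np

# Sketch of crossbar MVM: cell currents follow Ohm's law, G[i,j] * V[j],
# and Kirchhoff's current law sums them along each output line,
# so the collected current vector is simply I = G @ V.
G = np.array([[1e-6, 2e-6],
              [3e-6, 4e-6]])   # device conductances (siemens)
V = np.array([0.1, 0.2])       # read voltages on the input lines (volts)
I = G @ V                      # output-line currents (amperes), one read step
```

The multiply-accumulate happens in the physics of the array itself, which is what removes the memory-bottleneck data movement the quoted passage describes.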
“…Now it is widely recognized that this problem could be well addressed by the idea of implementing vector-matrix multiplication (VMM) in a memristive crossbar or one-transistor-one-resistor (1T1R) memristive array, [6,7] which could be embedded as the key block in mobile and Internet-of-Things (IoT) hardware. This idea has been demonstrated to accelerate a set of algorithms and solve matrix equations, [8–10] such as the multi-layer perceptron, [11,12] CNN, [13,14] sparse coding, [12,15] and k-means, [16] with improved speed and energy efficiency. A very recent work demonstrated in-memory eigenvector calculation in one step to accelerate the PageRank algorithm for search engines.…”
Section: Introduction
confidence: 99%