2017
DOI: 10.1038/srep45233
RRAM-based parallel computing architecture using k-nearest neighbor classification for pattern recognition

Abstract: Resistive switching memory (RRAM) is considered one of the most promising devices for parallel computing solutions that may overcome the von Neumann bottleneck of today’s electronic systems. However, existing RRAM-based parallel computing architectures suffer from practical problems such as device variations and extra computing circuits. In this work, we propose a novel parallel computing architecture for pattern recognition by implementing k-nearest neighbor classification on metal-oxide RRAM crossbar …
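To make the idea in the abstract concrete, here is a minimal software sketch (not the paper's circuit; the use of NumPy, the normalization step, and all parameter values are assumptions) of how kNN classification maps onto a crossbar-style read: stored patterns act as a conductance matrix, a single analog read yields all similarity scores as output currents, and the k largest vote on the class.

```python
import numpy as np

def crossbar_knn_predict(G, labels, x, k=3):
    """G: (n_patterns, n_features) conductance-like matrix (stored patterns),
    labels: class label per stored pattern, x: input applied as voltages."""
    # One analog "read" of the crossbar: output currents ~ dot products.
    currents = G @ x
    # Larger current => more similar stored pattern (for normalized
    # patterns this tracks smaller Euclidean distance).
    nearest = np.argsort(currents)[-k:]
    # Majority vote among the k most similar stored patterns.
    votes = labels[nearest]
    classes, counts = np.unique(votes, return_counts=True)
    return classes[np.argmax(counts)]

# Usage with random, normalized stored patterns (all values assumed):
rng = np.random.default_rng(0)
G = rng.random((10, 64))
G /= np.linalg.norm(G, axis=1, keepdims=True)
labels = rng.integers(0, 2, size=10)
x = G[4] + 0.05 * rng.standard_normal(64)
print(crossbar_knn_predict(G, labels, x, k=3))
```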

Cited by 29 publications (16 citation statements); References 25 publications
Citation types: 0 supporting, 16 mentioning, 0 contrasting
“…The SVM tried to find a hyperplane to separate the members from nonmembers of a particular GO family by maximizing the margin defined in protein feature space [92]. The KNN predicted the class of a protein by the majority vote of its neighbors with a given distance metric [93], and the PNN was a neural network based on Bayesian decision theory [94]. These three methods were directly applied under the Python environment.…”
Section: Methods (mentioning)
confidence: 99%
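For orientation, a rough sketch of the workflow the quoted passage describes; the quoted paper only states that the methods were run in Python, so the use of scikit-learn, the synthetic dataset, and all parameter choices here are assumptions (PNN is omitted, as it has no standard scikit-learn implementation):

```python
# Illustrative only: library choice and data are assumptions,
# not taken from the quoted paper.
from sklearn.svm import SVC
from sklearn.neighbors import KNeighborsClassifier
from sklearn.datasets import make_classification
from sklearn.model_selection import train_test_split

X, y = make_classification(n_samples=200, n_features=20, random_state=0)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)

# SVM: find a separating hyperplane that maximizes the margin.
svm = SVC(kernel="linear").fit(X_tr, y_tr)
# KNN: class decided by majority vote of the k nearest neighbors
# under a chosen distance metric.
knn = KNeighborsClassifier(n_neighbors=5, metric="euclidean").fit(X_tr, y_tr)

print("SVM accuracy:", svm.score(X_te, y_te))
print("KNN accuracy:", knn.score(X_te, y_te))
```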
“…This is because the synapses in HTM are binary in nature, i.e., they exhibit the same properties if they are above the permanence threshold regardless of the synapse's growth level, and vice versa. In 2017, Jiang et al. proposed a memristor device that implements the k-nearest neighbour algorithm and exhibits the properties required for HTM [40]. Fig.…”
Section: Experimental Methodology (mentioning)
confidence: 99%
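A minimal sketch of the binary-synapse behaviour described above (the threshold value and array layout are assumptions, not taken from the cited work): a synapse counts toward a cell's overlap only if its permanence is at or above the threshold, and it counts the same regardless of how far above the threshold it is.

```python
import numpy as np

PERMANENCE_THRESHOLD = 0.5  # assumed value, not from the cited papers

def overlap(permanences, active_inputs):
    """Binary-synapse overlap: a synapse contributes 1 iff it is
    connected (permanence >= threshold) and its input bit is active."""
    connected = permanences >= PERMANENCE_THRESHOLD
    return int(np.sum(connected & (active_inputs == 1)))

perms = np.array([0.51, 0.95, 0.49, 0.80])
inputs = np.array([1, 1, 1, 0])
# 0.51 and 0.95 contribute identically; 0.49 contributes nothing.
print(overlap(perms, inputs))  # -> 2
```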
“…In such a scenario, these cells need to have their synaptic strength reduced to lower the likelihood of incorrect prediction (as in lines 19-23). After evaluating the cells' segments, their synaptic connections are updated, which occurs during the learning phase (lines 38-47).…”
Section: Temporal Memory (mentioning)
confidence: 99%
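A hedged sketch of the punishment step the quote describes (the data structures, field names, and decrement value are assumptions; this is not the cited paper's pseudocode, whose line numbers 19-23 and 38-47 refer to its own listing): segments that predicted activity which did not occur have the permanences of their previously active synapses decremented.

```python
PUNISH_DECREMENT = 0.01  # assumed constant

def punish_wrong_predictions(segments, prev_active_cells):
    """segments: list of dicts like
       {'synapses': {presynaptic_cell_id: permanence},
        'predicted': bool, 'cell_became_active': bool}."""
    for seg in segments:
        # A segment predicted activity that did not occur:
        if seg["predicted"] and not seg["cell_became_active"]:
            for presyn, perm in seg["synapses"].items():
                if presyn in prev_active_cells:
                    # Weaken the synapse to lower the likelihood of
                    # repeating the incorrect prediction.
                    seg["synapses"][presyn] = max(0.0, perm - PUNISH_DECREMENT)

# Usage:
segs = [{"synapses": {1: 0.30, 2: 0.20}, "predicted": True,
         "cell_became_active": False}]
punish_wrong_predictions(segs, prev_active_cells={1})
print(segs[0]["synapses"])  # {1: 0.29, 2: 0.2}
```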
“…While the difference of neighbor pixels is computed by applying the corresponding voltage pulses on the word (WL) and bit (BL) lines, the in-memory processing across cross-point cells is performed in only (N + M)/2 cycles, where N × M is the crossbar dimension. This is equivalent to the row-by-row write/read in classical RRAM crossbar designs [29]. At each cycle, only one cross-point cell per line and per column is selected.…”
Section: B. RRAM-Based In-Memory Computing Architecture (mentioning)
confidence: 99%
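To illustrate the cycle budget in the quoted passage, a small sketch under the assumption that the garbled expression originally read (N + M)/2 for an N × M crossbar (the symbols N and M are placeholders). The diagonal selection schedule below is only one way to satisfy "one cross-point cell per line and per column" in each cycle and is not taken from the cited design:

```python
def diagonal_schedule(n_rows, n_cols, cycle):
    """Cells (row, col) selected in one cycle: at most one per row
    and one per column (a shifted diagonal)."""
    return [(r, (r + cycle) % n_cols) for r in range(min(n_rows, n_cols))]

N, M = 4, 4  # assumed crossbar dimensions
for cycle in range((N + M) // 2):  # (N + M)/2 cycles, as in the quote
    cells = diagonal_schedule(N, M, cycle)
    rows = [r for r, _ in cells]
    cols = [c for _, c in cells]
    # No row or column is used twice within a cycle.
    assert len(set(rows)) == len(rows) and len(set(cols)) == len(cols)
    print(f"cycle {cycle}: {cells}")
```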