2020
DOI: 10.1109/led.2020.2970536

Demonstration of 3D Convolution Kernel Function Based on 8-Layer 3D Vertical Resistive Random Access Memory

Citations: Cited by 22 publications (9 citation statements)
References: 17 publications
“…When there are obstacles, MASK is used to obtain new training samples. The new training sample includes the unoccluded part of the real-time target and the part before it is occluded, and the complete target pixel is saved through the training sample, which reduces the negative impact on occlusion and ensures that there will be no drift during the target tracking process [11][12][13].…”
Section: Improved KCF Algorithm Fusion Depth Information (mentioning)
confidence: 99%
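
The quoted tracking approach builds a composite training sample from the visible part of the live target and the appearance stored before occlusion. A minimal sketch of that idea, assuming NumPy image patches and a boolean occlusion mask (the function name and array layout below are illustrative assumptions, not the cited authors' code):

```python
import numpy as np

def composite_training_sample(live_patch, pre_occlusion_patch, occlusion_mask):
    """Blend the unoccluded pixels of the real-time target with the
    appearance stored before occlusion, so the tracker keeps training
    on a complete target and does not drift toward the obstacle.

    live_patch, pre_occlusion_patch : float arrays of shape (H, W) or (H, W, C)
    occlusion_mask                  : bool array of shape (H, W), True where occluded
    """
    mask = occlusion_mask[..., None] if live_patch.ndim == 3 else occlusion_mask
    # Visible pixels come from the current frame; hidden pixels fall back
    # to the pre-occlusion template.
    return np.where(mask, pre_occlusion_patch, live_patch)
```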
“…As shown in Section 2, CNN is very successful in image recognition, and the convolutional computing can be demonstrated with memristive crossbar arrays. [149][150][151] Dong et al presented a specific circuit for the CNN with binary or multilevel memristive devices. [152] One kernel was represented by two rows of memristive devices and eight output currents were pooled and activated simultaneously for one value, which was regarded as the input of the FC layer.…”
Section: In Situ Training in ANN Accelerators (mentioning)
confidence: 99%
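
The mapping described above, one signed kernel spread over a differential pair of device rows, can be illustrated with a small NumPy model in which row currents follow Ohm's law and Kirchhoff current summation. The conductance scaling and read-out below are assumptions for illustration, not the circuit of Dong et al.:

```python
import numpy as np

def kernel_to_row_pair(kernel, g_max=1.0):
    """Map a signed kernel onto a differential pair of conductance rows,
    w = G_plus - G_minus, with values normalized to a maximum conductance."""
    w = kernel.ravel()
    scale = g_max / max(np.abs(w).max(), 1e-12)
    g_plus = np.clip(w, 0.0, None) * scale    # positive weights
    g_minus = np.clip(-w, 0.0, None) * scale  # negative weights
    return g_plus, g_minus, scale

def crossbar_dot(patch, g_plus, g_minus, scale):
    """One sliding-window output: input pixels drive the lines as voltages,
    each row integrates I = sum(V * G), and the difference of the two row
    currents recovers the signed dot product."""
    v = patch.ravel()
    return (v @ g_plus - v @ g_minus) / scale

# Sanity check against an ideal convolution tap.
kernel = np.array([[1.0, 0.0, -1.0]] * 3)   # simple edge kernel
patch = np.random.rand(3, 3)
gp, gm, s = kernel_to_row_pair(kernel)
assert np.isclose(crossbar_dot(patch, gp, gm, s), np.sum(patch * kernel))
```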
“…Normal passive memristor crossbar arrays usually suffer from a dilemma between array size and sneak path current, which prevents them from being used to achieve DNNs with low device cost. [ 97,98 ] On the other hand, 2D planar arrays have limited device density and simplified connections compared to 3D arrays, which means 2D arrays are not beneficial to implement the complex topology of DNNs. Therefore, a very recent work by Lin et al has expanded a hardware‐implemented CNN into a tailored eight‐layer 3D memristor array, as shown in Figure 3c,d, to provide a high degree of functional complexity with a relatively negligible array size effect.…”
Section: Memristive Convolutional Accelerator (mentioning)
confidence: 99%
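
As a rough illustration of why a vertical stack adds capacity, the toy model below treats an N-layer 3D array as N conductance planes sharing vertical pillar electrodes, so each pillar current accumulates contributions from every layer. The shapes and accumulation scheme are assumptions made for this sketch, not the measured behaviour of the eight-layer array reported by Lin et al.:

```python
import numpy as np

def pillar_currents(layer_voltages, layer_conductances):
    """Toy model of an N-layer vertical stack: each layer is a plane of
    conductances addressed by its own word lines, and all layers share the
    same vertical pillars, so a pillar's read current is the sum of the
    per-layer vector-matrix products.

    layer_voltages     : (layers, n_wordlines) input voltages per layer
    layer_conductances : (layers, n_wordlines, n_pillars) programmed conductances
    """
    per_layer = np.einsum('lw,lwp->lp', layer_voltages, layer_conductances)
    return per_layer.sum(axis=0)   # shape (n_pillars,)

# Eight layers, as in the cited device, with small example dimensions.
rng = np.random.default_rng(0)
v = rng.random((8, 9))        # 8 layers x 9 word lines
g = rng.random((8, 9, 4))     # conductance planes feeding 4 shared pillars
print(pillar_currents(v, g).shape)   # -> (4,)
```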