2022
DOI: 10.1088/2634-4386/ac62db

On-chip learning of a domain-wall-synapse-crossbar-array-based convolutional neural network

Abstract: Domain-wall-synapse-based crossbar arrays have been shown to be very efficient, in terms of speed and energy consumption, while implementing fully connected neural network (FCNN) algorithms for simple data-classification tasks, both in inference and on-chip-learning modes. But for more complex and realistic data-classification tasks, convolutional neural networks (CNNs) need to be trained through such crossbar arrays. In this paper, we carry out device-circuit-system co-design and co-simulation of on-chip learning …
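The abstract's central step, training a CNN through crossbar arrays, relies on lowering each convolution to the matrix-vector products a crossbar natively computes. Below is a minimal sketch of that lowering (im2col unrolling); the shapes, names, and single-column mapping are illustrative assumptions, not the paper's actual circuit mapping.

    import numpy as np

    # Sketch: lower a 2D convolution to crossbar-style matrix-vector
    # products via im2col unrolling. All names and shapes are assumed.
    def im2col(img, k):
        """Unroll every k x k patch of a 2D image into one row of a matrix."""
        h, w = img.shape
        return np.stack([img[i:i + k, j:j + k].ravel()
                         for i in range(h - k + 1)
                         for j in range(w - k + 1)])

    img = np.random.rand(6, 6)
    kernel = np.random.rand(3, 3)
    # On hardware, kernel.ravel() would be programmed as one column of
    # synapse conductances; each patch row is applied as input voltages.
    out = im2col(img, 3) @ kernel.ravel()
    feature_map = out.reshape(4, 4)  # one output feature map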



Cited by 10 publications (8 citation statements)
References 60 publications
“…Recently, in some other reports, it has been shown that synaptic noise during training can improve classification accuracy, and it has also happened here in the case of Fashion-MNIST. It is to be noted that, in all the cases presented here that involve two devices per synapse cell, the weight of the second device is updated by one bit when that of the first device reaches its maximum or minimum value …”
Section: Results
confidence: 99%
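As a hedged reading of that two-devices-per-synapse rule, the sketch below treats the first device as the fine-grained store and carries a one-bit update into the second device only when the first saturates. The level count, clamping behavior, and names are illustrative assumptions; the quoted statements do not specify these details.

    LEVELS = 32  # conductance levels per device (assumed)

    def update_synapse(dev1, dev2, delta):
        """Apply a signed update to the first device; move the second
        device by one bit only when the first hits its max or min."""
        dev1 += delta
        if dev1 >= LEVELS:            # first device reached its maximum
            dev1 = LEVELS - 1
            dev2 = min(dev2 + 1, LEVELS - 1)
        elif dev1 < 0:                # first device reached its minimum
            dev1 = 0
            dev2 = max(dev2 - 1, 0)
        return dev1, dev2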
“…It is to be noted that, in all the cases presented here that involve two devices per synapse cell, the weight of the second device is updated by one bit when that of the first device reaches its maximum or minimum value [70]. Estimating Energy and Speed for On-Chip Learning. We next estimate the energy and speed for on-chip learning of an FCNN on crossbar arrays based on our spintronic synapses.…”
Section: Introduction
confidence: 99%
“…The TI to be used in realistic SOT magnetic random-access memory (MRAM) must have a high spin Hall angle and high conductivity; BiSb satisfies both conditions [19]. The free layer stores the weights, which correspond to the DW conductance and can be tuned by either a write-current pulse or a magnetic field [4]. The synaptic characteristics of the DW are linear and symmetric, which makes the design of the peripheral circuit very simple [12].…”
Section: Current-Induced DW Synaptic Device
confidence: 99%
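To illustrate why linear, symmetric synaptic characteristics simplify the peripheral circuit, here is a toy conductance model in which every write pulse moves the weight by the same fixed step in either direction. The conductance window, state count, and pulse convention are assumptions for illustration only, not device-measured values.

    G_MIN, G_MAX = 1e-6, 10e-6   # conductance window in siemens (assumed)
    N_STATES = 64                # programmable states (assumed)
    STEP = (G_MAX - G_MIN) / (N_STATES - 1)

    def apply_pulses(g, n_pulses):
        """Each write-current pulse shifts conductance by one fixed STEP,
        identically for potentiation (+) and depression (-), so the write
        driver needs no per-state calibration."""
        return min(max(g + n_pulses * STEP, G_MIN), G_MAX)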
“…Among spintronic devices, magnetic domain wall (DW) structures require low driving current and offer high storage capacity [2]. They also find applications in energy-efficient artificial neural networks (ANNs), offering a crossbar solution with superior speed and energy efficiency for image classification, speech recognition, machine translation, and other problems [3,4]. Moreover, ANNs are computational models that have advanced significantly in recent years; as a subset of artificial intelligence (AI), they are used to solve classification and prediction problems [5].…”
Section: Introduction
confidence: 99%
“…Several methods have been adopted in the literature to address these issues. Multiple devices per synapse have been used to increase the precision and address the non-linearity of the conductance response [41-43]. The bit-slicing technique [44] is used to slice the input and weight matrices into several smaller bit slices.…”
confidence: 99%
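A minimal sketch of the bit-slicing idea referenced above: an integer weight matrix is split into 1-bit slices, each of which could occupy its own crossbar, and the partial products are recombined with powers of two. The slice width, names, and non-negative-weight restriction are illustrative assumptions.

    import numpy as np

    N_BITS = 4  # assumed weight precision

    def bit_slices(w_int):
        """Split a non-negative integer weight matrix into N_BITS binary
        slices, least-significant slice first."""
        return [(w_int >> b) & 1 for b in range(N_BITS)]

    def mvm_bitsliced(x, w_int):
        """Matrix-vector product computed one bit slice at a time, then
        recombined by weighting each partial result by its bit value."""
        return sum((x @ s) * (1 << b)
                   for b, s in enumerate(bit_slices(w_int)))

    # Usage check against the direct product:
    w = np.random.randint(0, 2 ** N_BITS, size=(3, 4))
    x = np.random.randint(0, 5, size=3)
    assert np.array_equal(mvm_bitsliced(x, w), x @ w)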