An On-Chip Trainable and Scalable In-Memory ANN Architecture for AI/ML Applications
2022
DOI: 10.1007/s00034-022-02237-7

Cited by 1 publication (2 citation statements)
References 38 publications
“…Therefore, high accuracy can be expected. The paper 10) proposed on-chip training implemented using SRAM. However, this approach may require additional complex circuitry to handle backpropagation and weight updates, leading to a substantial increase in circuit size.…”
Section: On-chip Training (Conventional)
confidence: 99%
“…To improve accuracy, several training schemes have been proposed, including on-chip training 10) and in situ training, [11][12][13] which seek to optimize models directly on the specialized hardware. However, despite this promise, these methods have struggled with memory device limitations such as endurance, 14) nonlinearity, and asymmetry 15) in switching.…”
Section: Introduction
confidence: 99%
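The nonlinearity and asymmetry mentioned in the excerpt above can be illustrated with a minimal behavioral sketch. This is not taken from the cited paper; it assumes a commonly used exponential-saturation model of analog memory conductance, in which potentiation and depression pulses follow different curves, so an up-pulse followed by a down-pulse does not return the device to its starting state. All constants (`G_MIN`, `G_MAX`, the nonlinearity factors) are illustrative assumptions.

```python
import math

# Assumed normalized conductance range and nonlinearity factors
# (hypothetical values, chosen only to illustrate asymmetric switching).
G_MIN, G_MAX = 0.0, 1.0
NL_P, NL_D = 3.0, 5.0  # potentiation vs. depression nonlinearity

def pulse(g, direction):
    """Apply one programming pulse; the step size shrinks as g nears a bound."""
    if direction > 0:
        # Potentiation: large steps near G_MIN, saturating toward G_MAX.
        step = (G_MAX - g) * (1.0 - math.exp(-1.0 / NL_P))
        return min(g + step, G_MAX)
    else:
        # Depression: large steps near G_MAX, saturating toward G_MIN.
        step = (g - G_MIN) * (1.0 - math.exp(-1.0 / NL_D))
        return max(g - step, G_MIN)

g0 = 0.5
g_up = pulse(g0, +1)    # one up-pulse
g_rt = pulse(g_up, -1)  # one down-pulse: does NOT cancel the up-pulse
print(g0, g_up, g_rt)
```

The round trip leaves a residual error (`g_rt != g0`), which is why in situ gradient updates on such devices accumulate weight error unless the training scheme compensates for it.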