2015
DOI: 10.1109/jssc.2014.2356197

A 1 TOPS/W Analog Deep Machine-Learning Engine With Floating-Gate Storage in 0.13 µm CMOS

Cited by 118 publications (57 citation statements) · References 22 publications
“…In addition to the univariate datasets' evaluation, we tested the approaches on 12 Multivariate Time Series (MTS) datasets (Baydogan, 2015). The multivariate evaluation shows another benefit of deep learning models, which is the ability to handle the curse of dimensionality (Bellman, 2010; Keogh and Mueen, 2017) by leveraging different degrees of smoothness in compositional functions (Poggio et al., 2017) as well as the parallel computations of GPUs (Lu et al., 2015).…”
Section: Introduction (mentioning)
confidence: 99%
“…The majority of such efforts rely on conventional technology [3], [6], e.g. complementary metal-oxide-semiconductor (CMOS) circuits to implement artificial neurons, and dynamic random access memory [7], static random access memory [8], or non-volatile floating-gate memory [9], [10] to implement artificial synapses. Emerging memory device technologies [11], while not yet mature for large-scale implementations, could offer further improvements in artificial neural network performance in the future [12].…”
Section: Introduction (mentioning)
confidence: 99%
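The synapse-as-memory idea in the statement above is straightforward to illustrate in software: each stored weight (whether held in SRAM, DRAM, or a floating-gate cell) multiplies an input activation, and a neuron sums the results through a nonlinearity. The following is a minimal digital sketch of that concept, not the paper's analog circuit; the names `synapse_array` and `neuron_layer` are hypothetical, with a NumPy array standing in for the physical weight-storage array.

```python
import numpy as np

# Hypothetical sketch: a weight matrix stands in for an array of
# floating-gate (or SRAM/DRAM) synapse cells. Each cell stores one weight;
# a "neuron" is the nonlinearly squashed dot product of the inputs with
# its column of synapses.
rng = np.random.default_rng(0)

n_inputs, n_neurons = 8, 4
synapse_array = rng.normal(scale=0.1, size=(n_inputs, n_neurons))  # stored weights

def neuron_layer(x, weights):
    """Multiply-accumulate across the synapse array, then apply a nonlinearity."""
    return np.tanh(x @ weights)

x = rng.normal(size=n_inputs)        # input activations
y = neuron_layer(x, synapse_array)   # one layer of "artificial neurons"
print(y)
```

In the analog implementations the citing paper contrasts, the multiply-accumulate happens physically (e.g. as summed currents weighted by stored charge) rather than as an explicit matrix product, which is the source of the energy-efficiency advantage.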
“…Deep learning was a killer application that made many-core graphics-processing-unit (GPU) based processing mainstream, ending the 50-year dominance of single- (or few-) core CPU systems. Clearly, deep learning would be a strong motivation for special-purpose hardware [7], and an oscillator-based solution would be welcome; we are not aware of efforts to this end.…”
Section: Biologically Inspired Network Models and Deep Learning Nets (mentioning)
confidence: 99%