2020
DOI: 10.1109/jproc.2020.3001748
Nonsilicon, Non-von Neumann Computing—Part II

Cited by 5 publications (5 citation statements)
References 8 publications
“…With the apparent saturation of the progress in digital computers, new types of computers based on nonsilicon physical systems are highly anticipated. Unlike current digital computers based on Turing machine procedures, these computers use time evolution of physical systems to perform tasks such as speech and image recognition, data mining, and optimization (1). On the basis of a computing paradigm of "let physics do computation," they include quantum computers (2), quantum annealers (3), neural networks (4), and reservoir computers (5), implemented with various physical systems such as superconducting qubits (6,7), trapped ions (8), and photonics (9)(10)(11)(12).…”
Section: Introduction
confidence: 99%
“…With these limitations in mind, the present system, and similar alternatives based on FPGAs, should be considered primarily of relevance whenever the reconfigurability of programmable logic together with the fine-grained data set memory subdivision confers a possible advantage, for example, towards the realization of distributed sorting and aggregation. Future work should systematically compare this and other FPGA-based solutions to CPUs and GPUs across different types of machine learning algorithms [9], [10], [12], [13], [17], [18], [22]–[29].…”
Section: Discussion
confidence: 99%
“…For example, Graphics Processing Units (GPUs) are, by design, highly efficient at the arithmetic operations required by several distance measures, such as the ℓ2-norm; as such, they are receiving considerable attention for use as co-processors accelerating vector similarity searching, operating alone or conjointly with the CPU. Their development has been considerably boosted by applications in convolutional neural networks and deep learning; however, even compute-optimized implementations retain several architecture choices stemming from the original graphics applications, which frequently translate into a high power consumption [3], [9]–[12].…”
Section: Introduction
confidence: 99%
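The excerpt above refers to GPUs accelerating vector similarity search through dense distance arithmetic such as the ℓ2-norm. As a minimal sketch of that workload, assuming a brute-force nearest-neighbour search with invented array names and shapes (it is not code from the cited paper), the NumPy snippet below evaluates all pairwise squared ℓ2 distances in one batched expression, exactly the kind of regular matrix arithmetic that parallel hardware executes efficiently.

# Hypothetical illustration: brute-force similarity search via squared L2 distances.
# All names, shapes, and values are assumptions, not taken from the cited work.
import numpy as np

def l2_nearest(queries: np.ndarray, database: np.ndarray, k: int = 5) -> np.ndarray:
    """Return indices of the k database vectors closest to each query (L2 distance)."""
    # ||q - d||^2 = ||q||^2 - 2 q.d + ||d||^2, evaluated for all pairs at once.
    q_sq = np.sum(queries ** 2, axis=1, keepdims=True)    # shape (n_queries, 1)
    d_sq = np.sum(database ** 2, axis=1)                   # shape (n_database,)
    dists = q_sq - 2.0 * (queries @ database.T) + d_sq     # shape (n_queries, n_database)
    return np.argsort(dists, axis=1)[:, :k]

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    db = rng.standard_normal((10_000, 128)).astype(np.float32)
    q = rng.standard_normal((4, 128)).astype(np.float32)
    print(l2_nearest(q, db, k=3))

The single matrix product queries @ database.T dominates the cost, which is why the excerpt singles out this arithmetic as a natural fit for GPU-style co-processors.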
“…Moreover, the same crossbar can be used for "compute-in-memory" processing of certain key computations of a DNN. Integrating storage and computations within the same structure allows memristor crossbars to supersede conventional digital accelerators where limited memory-processor bandwidth becomes the key bottleneck for performance scaling (Chen et al., 2016; Basu et al., 2018; Kim et al., 2020).…”
Section: Introduction
confidence: 99%
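The excerpt above mentions compute-in-memory processing on memristor crossbars. As a hedged sketch of the idea only, assuming an idealized crossbar (no wire resistance, device nonlinearity, or noise; all names and values invented for illustration), the snippet below models how stored conductances and applied row voltages yield column currents I_j = sum_i G_ij * V_i, i.e., one analog matrix-vector product per read-out step.

# Idealized model (an assumption, not the paper's device model): a crossbar whose
# cross-points store conductances G[i, j] computes a matrix-vector product in place.
import numpy as np

def crossbar_mvm(conductances: np.ndarray, voltages: np.ndarray) -> np.ndarray:
    """Ideal read-out: column currents produced by the applied row voltages."""
    # Ohm's law gives G[i, j] * V[i] per device; Kirchhoff's current law sums each column.
    return conductances.T @ voltages  # currents per column (amperes)

if __name__ == "__main__":
    rng = np.random.default_rng(1)
    G = rng.uniform(1e-6, 1e-4, size=(64, 32))  # weights stored as conductances (siemens)
    v = rng.uniform(0.0, 0.2, size=64)          # input activations encoded as voltages (volts)
    print(crossbar_mvm(G, v).shape)             # (32,): one matrix-vector product per step

Because the product is formed where the weights are stored, no weight traffic crosses a memory bus, which is the bandwidth bottleneck the excerpt says crossbars sidestep.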