2019 International Joint Conference on Neural Networks (IJCNN)
DOI: 10.1109/ijcnn.2019.8852476
AX-DBN: An Approximate Computing Framework for the Design of Low-Power Discriminative Deep Belief Networks

Abstract: The power budget for embedded hardware implementations of Deep Learning algorithms can be extremely tight. To address implementation challenges in such domains, new design paradigms, like Approximate Computing, have drawn significant attention. Approximate Computing exploits the innate error-resilience of Deep Learning algorithms, a property that makes them amenable to deployment on low-power computing platforms. This paper describes an Approximate Computing design methodology, AX-DBN, for an architecture bel…
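As a rough illustration of the approximate-computing idea the abstract alludes to, the sketch below evaluates a single sigmoid neuron with reduced-precision weights and activations. The fixed-point quantizer, the bit widths, and the activation function are assumptions for illustration only and do not reproduce the AX-DBN methodology itself.

```python
import numpy as np

def quantize_fixed_point(x, total_bits=8, frac_bits=4):
    """Round values to a signed fixed-point grid (illustrative quantizer, not from the paper)."""
    scale = 2.0 ** frac_bits
    max_val = 2.0 ** (total_bits - 1) / scale  # largest representable magnitude
    return np.clip(np.round(x * scale) / scale, -max_val, max_val - 1.0 / scale)

def approximate_neuron(inputs, weights, bias=0.0, total_bits=8, frac_bits=4):
    """Sigmoid neuron evaluated with reduced-precision weights and activations."""
    w_q = quantize_fixed_point(weights, total_bits, frac_bits)
    x_q = quantize_fixed_point(inputs, total_bits, frac_bits)
    return 1.0 / (1.0 + np.exp(-(x_q @ w_q + bias)))

rng = np.random.default_rng(0)
x, w = rng.normal(size=16), rng.normal(size=16)
exact = 1.0 / (1.0 + np.exp(-(x @ w)))
print(f"exact={exact:.4f}  approx={approximate_neuron(x, w):.4f}")
```

Lowering the bit widths trades output accuracy for cheaper arithmetic, which is the error-resilience property the abstract refers to.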

Cited by 3 publications (4 citation statements) · References 15 publications
“…The concept of using an iterative pruning schedule to alleviate the impact of abruptly removing neurons has been explored in prior work [11,23,37,38]; however, they iteratively introduce sparsity globally at each step. Because these pruning schedules gradually introduce sparsity according to a per-step heuristic, we refer to this class of algorithms as "stepwise" pruning.…”
Section: Input (mentioning, confidence: 99%)
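The "stepwise" schedule described in this statement can be sketched as global magnitude pruning applied in small sparsity increments, with optional retraining between steps. The magnitude heuristic, the linear schedule, and the `retrain` hook below are illustrative assumptions, not the exact procedure of the cited works.

```python
import numpy as np

def global_magnitude_prune(weights, sparsity):
    """Zero out the smallest-magnitude fraction of weights across all layers."""
    flat = np.concatenate([w.ravel() for w in weights])
    k = int(sparsity * flat.size)
    if k == 0:
        return weights
    threshold = np.partition(np.abs(flat), k - 1)[k - 1]
    return [np.where(np.abs(w) <= threshold, 0.0, w) for w in weights]

def stepwise_prune(weights, final_sparsity=0.9, steps=9, retrain=None):
    """Introduce sparsity gradually instead of removing many weights abruptly."""
    for step in range(1, steps + 1):
        sparsity = final_sparsity * step / steps      # per-step heuristic schedule
        weights = global_magnitude_prune(weights, sparsity)
        if retrain is not None:
            weights = retrain(weights)                # recover accuracy between steps
    return weights
```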
“…Pruning (i.e., the process of removing identified redundant elements from a neural network) and quantization (i.e., the process of reducing their precision) are often considered to be independent problems [7,8]; however, recent work has begun to study the application of both in either a joint [2,6,9,10] or unified [11,12] setting. Unified algorithms typically use mixed precision quantization and integrate pruning by reducing the precision of an element (or a set of elements) to 0.…”
Section: Introduction (mentioning, confidence: 99%)
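The "unified" setting mentioned above can be illustrated with a toy mixed-precision scheme in which each weight group receives its own bit width, and a bit width of 0 amounts to pruning the group. The per-group uniform quantizer below is an assumed stand-in, not the algorithm of any cited paper.

```python
import numpy as np

def quantize_group(w, bits):
    """Uniform symmetric quantization at a given bit width; 0 bits prunes the group."""
    if bits == 0:
        return np.zeros_like(w)
    levels = max(2 ** (bits - 1) - 1, 1)
    scale = np.max(np.abs(w)) / levels
    if scale == 0:
        return np.zeros_like(w)
    return np.round(w / scale) * scale

def unified_prune_quantize(weight_groups, bit_widths):
    """Mixed-precision quantization that subsumes pruning as the 0-bit case."""
    return [quantize_group(w, b) for w, b in zip(weight_groups, bit_widths)]

rng = np.random.default_rng(1)
groups = [rng.normal(size=8) for _ in range(3)]
compressed = unified_prune_quantize(groups, bit_widths=[8, 4, 0])  # last group is pruned
```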
“…In an effort to further reduce processing requirements, some RFML implementations have also embedded traditional signal processing techniques such as Fourier and wavelet transforms, cyclostationary feature estimators, and other expert features directly into the NN [170], [174], [175]. Meanwhile, other research has focused on reduced precision implementations of NNs, enabling a path towards real-time implementation [176]- [178]. However, reducing real-time computational resources to mobile systems remains a challenge that must be overcome, especially if online learning techniques are to be developed for future RFML systems [179], [180].…”
Section: A. Size, Weight, and Power (SWaP) (mentioning, confidence: 99%)
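One reading of "embedding traditional signal processing techniques directly into the NN" is a fixed, non-trainable spectral front-end feeding a small learned head. The FFT log-magnitude features and the toy classifier below are assumptions for illustration, not the architecture of any cited RFML system.

```python
import numpy as np

def spectral_features(iq_samples, n_fft=64):
    """Fixed front-end: log-magnitude FFT bins computed outside the trainable layers."""
    spectrum = np.fft.fft(iq_samples, n=n_fft)
    return np.log1p(np.abs(spectrum))

def tiny_classifier(features, w_hidden, w_out):
    """Small learned head operating on the expert features."""
    hidden = np.maximum(features @ w_hidden, 0.0)   # ReLU
    return np.argmax(hidden @ w_out)

rng = np.random.default_rng(2)
iq = rng.normal(size=128) + 1j * rng.normal(size=128)
feats = spectral_features(iq)
pred = tiny_classifier(feats, rng.normal(size=(64, 16)), rng.normal(size=(16, 4)))
```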
“…Further, some RFML implementations incorporate pre-calculated traditional signal processing techniques such as Fourier and wavelet transforms, cyclostationary feature estimators, and other expert features to serve as a more efficient feature that may be merged with machine learned behaviors [241], [245], [246]. Other research has focused on reduced precision implementations of machine learning structures as a method to gain computational efficiency [247]- [249]. However, the use of online learning techniques in RF scenarios requires real-time computational resources that are currently difficult to reduce to a mobile system [250], [251], in addition to the challenges discussed in Section III.…”
Section: Deployment (mentioning, confidence: 99%)