2015
DOI: 10.1007/s11063-015-9430-9
Stacked Autoencoders Using Low-Power Accelerated Architectures for Object Recognition in Autonomous Systems

Abstract: This paper investigates low-energy-consumption and low-power hardware models and processor architectures for performing real-time recognition of objects in power-constrained autonomous systems and robots. Most recent developments show that convolutional deep neural networks are currently the state of the art in terms of classification accuracy. In this article we propose the use of a different type of deep neural network, stacked autoencoders, and show that within a limited number of layers and nodes, for acc…

Cited by 38 publications (36 citation statements). References 9 publications.
“…An autoencoder (AE) is a network that tries to produce at the output what is presented in the input [2]. The most basic AE is a multi-layered perceptron that has one hidden and one output layer, such that the weight matrix of the last layer is the transpose of the weight matrix of the hidden layer (clamped weights) and the number of output neurons is equal to the number of inputs.…”
Section: Stacked Autoencoders
confidence: 99%
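The tied-weight construction described in the statement above can be sketched in a few lines of numpy. The layer sizes and the sigmoid activation are illustrative assumptions, not taken from the cited paper; the point is that the decoder reuses the transpose of the encoder's weight matrix and the output has the same dimensionality as the input:

```python
import numpy as np

rng = np.random.default_rng(0)

# Minimal tied-weight ("clamped") autoencoder: one hidden and one
# output layer, the decoder weight matrix is the transpose of the
# encoder's, and the number of outputs equals the number of inputs.
n_in, n_hidden = 8, 3                              # illustrative sizes
W = rng.normal(scale=0.1, size=(n_hidden, n_in))   # encoder weights
b_h = np.zeros(n_hidden)                           # hidden bias
b_o = np.zeros(n_in)                               # output bias

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def reconstruct(x):
    h = sigmoid(W @ x + b_h)       # encode into the hidden layer
    return sigmoid(W.T @ h + b_o)  # decode with the transposed weights

x = rng.random(n_in)
x_hat = reconstruct(x)
print(x_hat.shape)  # same dimensionality as the input
```

Because the weights are shared, such an AE has roughly half the free parameters of an untied one, which is one reason it suits the power-constrained settings the paper targets.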
“…In this paper we propose a scalable parallel solution for stacked autoencoder (SAE) architectures on mobile GPUs, which allows providing small autonomous robots/vehicles with deep learning capabilities. The paper builds upon [2] as a first step towards the implementation of more complex approaches to deep learning, such as CNNs, so as to understand the possible gains in terms of energy savings, as well as to comprehend the limitations at the hardware and software levels.…”
Section: Introduction
confidence: 99%
“…Deep Autoencoders (DAEs) have been proposed [15] to improve reconstruction robustness in the presence of noise affecting the input data. Interesting results have been obtained with DAEs in domains such as medicine [16], biology [17], image processing [18], motion [19], music [20] and speech [21]. These DAEs use the same vector for representing inputs and outputs.…”
Section: Related Work and Motivations
confidence: 99%
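The denoising idea in the statement above is simple to demonstrate: corrupt the input, then train the autoencoder to reconstruct the clean vector, so that input and output share the same representation space. The sketch below is a minimal numpy illustration under assumed dimensions, noise level and learning rate; it is not the architecture of any of the cited works:

```python
import numpy as np

rng = np.random.default_rng(1)

# Denoising autoencoder sketch: the target is the *clean* vector,
# while the network only ever sees a noise-corrupted copy of it.
# All hyperparameters here are illustrative assumptions.
n_in, n_hidden, lr, steps = 6, 4, 0.1, 500
W1 = rng.normal(scale=0.1, size=(n_hidden, n_in))  # encoder
W2 = rng.normal(scale=0.1, size=(n_in, n_hidden))  # decoder

def forward(x):
    h = np.tanh(W1 @ x)          # hidden code
    return W2 @ h, h             # linear reconstruction

x_clean = rng.random(n_in)
for _ in range(steps):
    x_noisy = x_clean + rng.normal(scale=0.1, size=n_in)  # corrupt input
    x_hat, h = forward(x_noisy)
    err = x_hat - x_clean        # gradient of 0.5 * ||x_hat - x_clean||^2
    gW2 = np.outer(err, h)                        # dL/dW2
    gh = W2.T @ err                               # backprop into hidden
    gW1 = np.outer(gh * (1.0 - h**2), x_noisy)    # tanh' = 1 - tanh^2
    W2 -= lr * gW2
    W1 -= lr * gW1

mse = float(np.mean((forward(x_clean)[0] - x_clean) ** 2))
print(mse)  # small reconstruction error after training
```

Training against the clean target is what distinguishes a denoising AE from a plain one: the learned mapping must undo the corruption, which is the robustness property the citing paper highlights.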
“…The use of Deep Learning (DL) methods, i.e. multi-layered ANN structures, together with improvements in GPU technology, has accelerated these advances. Furthermore, DL approaches have significantly outperformed state-of-the-art approaches in many fields such as object recognition [1,3,7,9,25,26], image processing [11,27-32], computer vision [33-36], speech recognition [37-39], natural language processing (NLP) [10,21,27,40-42], character recognition [5,30,43-46], and signature verification [2,6,47-51]. Although the foundations of DL were laid by the ANN model proposed by McCulloch and Pitts in 1943 [52], its real popularity arose in 2012.…”
Section: Introduction
confidence: 99%