This paper investigates low-energy, low-power hardware models and processor architectures for real-time object recognition in power-constrained autonomous systems and robots. Recent developments show that convolutional deep neural networks currently achieve state-of-the-art classification accuracy. In this article we propose the use of a different type of deep neural network, stacked autoencoders, and show that with a limited number of layers and nodes, chosen to fit low-power accelerators such as mobile GPUs and FPGAs, we can still achieve classification accuracy not far from the state-of-the-art together with a high number of processed frames per second. We present experiments on the color CIFAR-10 dataset, which enables the adaptation of the architecture to a live camera feed. A further novelty, proposed for the first time in this work, is that the training phase can also be performed on these low-power devices, instead of the usual approach of training on a desktop CPU or GPU and only running the trained network later on the FPGA. This makes new functionalities possible, such as a robot that learns at runtime.
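To make the architecture concrete, the following is a minimal sketch of a stacked autoencoder of the kind described, assuming PyTorch; the layer widths, sigmoid activations, and helper method are illustrative assumptions, not the paper's exact configuration.

    # Minimal stacked-autoencoder sketch for CIFAR-10-sized inputs (assumption:
    # PyTorch; layer widths are illustrative, not the paper's configuration).
    import torch
    import torch.nn as nn

    class StackedAutoencoder(nn.Module):
        def __init__(self, in_dim=3 * 32 * 32, hidden_dims=(1024, 256), n_classes=10):
            super().__init__()
            dims = [in_dim, *hidden_dims]
            # One encoder/decoder pair per autoencoder in the stack.
            self.encoders = nn.ModuleList(
                [nn.Linear(dims[i], dims[i + 1]) for i in range(len(dims) - 1)])
            self.decoders = nn.ModuleList(
                [nn.Linear(dims[i + 1], dims[i]) for i in range(len(dims) - 1)])
            self.classifier = nn.Linear(dims[-1], n_classes)

        def pretrain_step(self, x, level):
            # Greedy layer-wise pretraining: autoencoder `level` learns to
            # reconstruct the (frozen) activations of the layer below it.
            h = x.flatten(1)
            for enc in self.encoders[:level]:
                h = torch.sigmoid(enc(h))
            target = h.detach()
            code = torch.sigmoid(self.encoders[level](target))
            return torch.sigmoid(self.decoders[level](code)), target

        def forward(self, x):
            # Supervised fine-tuning path: encode through the full stack,
            # then classify from the deepest code.
            h = x.flatten(1)
            for enc in self.encoders:
                h = torch.sigmoid(enc(h))
            return self.classifier(h)

A stack of a few fully connected layers like this keeps the weight memory and multiply count low, which is what allows the network to fit the low-power accelerators the abstract targets.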
This paper addresses the use of a computer-aided diagnosis (CAD) system for cataract classification based on ultrasound. Ultrasound A-scan signals were acquired from 220 porcine lenses, and B-mode and Nakagami images were constructed. Ninety-seven parameters were extracted through acoustical, spectral, and image-texture analyses and subjected to feature selection by Principal Component Analysis (PCA). Bayes, K-Nearest Neighbors (KNN), Fisher Linear Discriminant (FLD), and Support Vector Machine (SVM) classifiers were tested. All four classifiers performed well in distinguishing healthy from cataractous lenses (F-measure ≥ 92.68%), with the SVM achieving the highest performance (90.62%) on the harder task of classifying initial versus severe cataract.
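As a rough illustration of the described pipeline (feature extraction, PCA, then a classifier), here is a hedged sketch assuming scikit-learn; the feature matrix and labels below are random placeholders standing in for the 97 extracted parameters and the lens labels, and the number of retained components is an assumption.

    # Illustrative PCA + SVM pipeline (assumption: scikit-learn; X and y are
    # random placeholders for the 97 parameters per lens and the labels).
    import numpy as np
    from sklearn.pipeline import make_pipeline
    from sklearn.preprocessing import StandardScaler
    from sklearn.decomposition import PCA
    from sklearn.svm import SVC
    from sklearn.model_selection import cross_val_score

    rng = np.random.default_rng(0)
    X = rng.normal(size=(220, 97))    # placeholder: 220 lenses x 97 parameters
    y = rng.integers(0, 2, size=220)  # placeholder: healthy vs. cataractous

    # Standardize, reduce the 97 parameters with PCA, then classify with the
    # SVM, the strongest of the four classifiers tested.
    clf = make_pipeline(StandardScaler(), PCA(n_components=10), SVC(kernel="rbf"))
    scores = cross_val_score(clf, X, y, cv=5, scoring="f1")
    print(f"mean F-measure: {scores.mean():.3f}")

The same pipeline can be re-fit with any of the other three classifiers (e.g. sklearn.neighbors.KNeighborsClassifier) to reproduce the four-way comparison.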