Abstract—Currently, state-of-the-art motor intention decoding algorithms in brain-machine interfaces are mostly implemented on a PC and consume a significant amount of power. A machine learning co-processor in 0.35-µm CMOS for motor intention decoding in brain-machine interfaces is presented in this paper. Using the Extreme Learning Machine algorithm and low-power analog processing, it achieves an energy efficiency of 3.45 pJ/MAC at a classification rate of 50 Hz. A second-stage learning step, with its coefficients stored digitally, is used to increase the robustness of the core analog processor. The chip is verified with neural data recorded in a monkey finger-movement experiment, achieving a decoding accuracy of 99.3% for movement type. The same co-processor is also used to decode the time of movement from asynchronous neural spikes. With time-delayed feature dimension enhancement, the classification accuracy can be increased by 5% with a limited number of input channels. Further, a sparsity-promoting training scheme enables a reduction in the number of programmable weights by ≈2×.
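The decoding pipeline described above, an ELM whose input features are augmented with time-delayed samples, can be sketched in software. The following is a minimal NumPy illustration of the algorithm, not the analog chip implementation; the function names, layer sizes, and the tanh activation are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)

def time_delay_embed(x, n_delays):
    """Augment each sample with its n_delays previous samples (zero-padded),
    sketching the time-delayed feature dimension enhancement."""
    T, D = x.shape
    cols = [np.vstack([np.zeros((d, D)), x[:T - d]]) for d in range(n_delays + 1)]
    return np.hstack(cols)                      # shape (T, D * (n_delays + 1))

def elm_train(X, Y, n_hidden):
    """ELM training: random fixed first-stage weights, least-squares
    second-stage weights."""
    D = X.shape[1]
    W = rng.standard_normal((D, n_hidden))      # random input weights (never trained)
    b = rng.standard_normal(n_hidden)
    H = np.tanh(X @ W + b)                      # hidden-layer activations
    beta = np.linalg.pinv(H) @ Y                # closed-form output weights
    return W, b, beta

def elm_predict(X, W, b, beta):
    return np.tanh(X @ W + b) @ beta
```

On the chip, the random first stage is realized by analog device mismatch and only the second-stage weights are programmable; here both stages are ordinary matrix operations.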
In this paper, we describe a compact, low-power, high-performance hardware implementation of the extreme learning machine for machine learning applications. Mismatches in current mirrors are used to perform the vector-matrix multiplication that forms the first stage of this classifier and is the most computationally intensive. Both regression and classification (on UCI data sets) are demonstrated, and a design-space tradeoff between speed, power, and accuracy is explored. Our results indicate that, for a wide set of problems, σ_VT in the range of 15-25 mV gives optimal results. An input weight matrix rotation method to extend the input dimension and hidden layer size beyond the physical limits imposed by the chip is also described. This allows us to overcome a major limit imposed on most hardware machine learners. The chip is implemented in a 0.35-μm CMOS process and occupies a die area of around 5 mm × 5 mm. Operating from a 1 V power supply, it achieves an energy efficiency of 0.47 pJ/MAC at a classification rate of 31.6 kHz.

Index Terms—Classifier, extreme learning machine (ELM), low power, machine learning, neural networks.
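The input weight matrix rotation idea, reusing one fixed random matrix on rotated copies of the input to emulate a larger hidden layer, can be illustrated as follows. This is a software sketch under assumed conventions (cyclic rotation, tanh activation); the chip's actual rotation scheme may differ in detail.

```python
import numpy as np

rng = np.random.default_rng(1)

# Fixed "on-chip" random first-stage weights: D inputs -> H physical hidden nodes.
D, H = 8, 16
W = rng.standard_normal((D, H))
b = rng.standard_normal(H)

def rotated_hidden_features(x, n_rotations):
    """Reuse the single fixed weight matrix W by cyclically rotating the
    input vector; each rotation yields a fresh block of hidden features,
    for an effective hidden-layer size of H * n_rotations."""
    blocks = [np.tanh(np.roll(x, r, axis=-1) @ W + b)
              for r in range(n_rotations)]
    return np.concatenate(blocks, axis=-1)
```

Because each rotated input sees a different permutation of the same mismatch-defined weights, the feature blocks are mutually distinct, which is what lets one physical array stand in for several.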
A machine learning co-processor in 0.35-µm CMOS for motor intention decoding in brain-machine interfaces is presented in this paper. Using the Extreme Learning Machine algorithm, time-delayed-sample-based feature dimension enhancement, low-power analog processing, and massive parallelism, it achieves an energy efficiency of 290 GMACs/W at a classification rate of 50 Hz. A portable external unit based on the proposed co-processor is verified with neural data recorded in a monkey finger-movement experiment, achieving a decoding accuracy of 99.3%. With time-delayed feature dimension enhancement, the classification accuracy can be increased by 5% with a limited number of input channels.
In this paper, we describe a novel low-power, compact, current-mode spike detector circuit for real-time neural recording systems where neural spikes or action potentials (APs) are of interest. Such a circuit can enable massive compression of data, facilitating wireless transmission. This design can generate a high signal-to-noise ratio (SNR) output by approximating the popularly used nonlinear energy operator (NEO) through standard analog blocks. We show that a low-pass filter after the NEO can be used for two functions: (i) to estimate and cancel low-frequency interference and (ii) to estimate the threshold for spike detection. The circuit is implemented in a 65-nm CMOS process and occupies 200 μm × 150 μm of chip area. Operating from a 0.7 V power supply, it consumes about 30 nW of static power and 7 nW of dynamic power at a 100 Hz input spike rate, making it the lowest-power spike detector reported so far.
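The NEO-plus-low-pass-filter detection scheme can be sketched in discrete time as follows. This is an illustrative software model, not the analog circuit; the filter coefficient `alpha` and the threshold multiplier `c` are assumed parameters.

```python
import numpy as np

def neo(x):
    """Nonlinear energy operator: psi[n] = x[n]^2 - x[n-1] * x[n+1]."""
    psi = np.zeros_like(x)
    psi[1:-1] = x[1:-1] ** 2 - x[:-2] * x[2:]
    return psi

def detect_spikes(x, alpha=0.01, c=8.0):
    """Compare the NEO output against a scaled first-order low-pass estimate
    of its own level, mimicking the LPF-derived threshold in the abstract."""
    psi = neo(x)
    lp = np.zeros_like(psi)
    acc = 0.0
    for n, v in enumerate(psi):
        acc += alpha * (v - acc)      # first-order low-pass filter
        lp[n] = acc
    return psi > c * lp               # boolean spike mask
```

The NEO emphasizes signals that are simultaneously large in amplitude and frequency, so a slowly tracking threshold derived from its own low-passed output adapts to the background noise floor.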
We demonstrate a low-power and compact hardware implementation of a Random Feature Extractor (RFE) core. With complex tasks like image recognition requiring a large set of features, we show how a weight-reuse technique can virtually expand the random features available from the RFE core. Further, we show how to avoid the computation cost wasted in propagating "incognizant" or redundant random features. As a proof of concept, we validated our approach by using the RFE core as the first stage of an Extreme Learning Machine (ELM), a two-layer neural network, and were able to achieve >97% accuracy on the MNIST database of handwritten digits. The ELM's first (RFE) stage is done on an analog ASIC occupying 5 mm × 5 mm in 0.35-µm CMOS and consuming 5.95 µJ/classify while using ≈5000 effective hidden neurons. The ELM's second stage, consisting of just adders, can be implemented as a digital circuit with an estimated power consumption of 20.9 nJ/classify. With a total energy consumption of only 5.97 µJ/classify, this low-power mixed-signal ASIC can act as a co-processor in portable electronic gadgets with cameras.
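One plausible way to identify "incognizant" random features, sketched here as an assumption rather than the paper's exact criterion, is to rank hidden neurons by the norm of their trained second-stage weights and drop the weakest ones before deployment.

```python
import numpy as np

def prune_incognizant(H, Y, keep_frac=0.5):
    """Rank hidden (random) features by the norm of their trained output
    weights, keep only the strongest keep_frac, and refit the second stage
    on the surviving features.

    H : (samples, n_hidden) hidden-layer activations
    Y : (samples, n_classes) one-hot targets
    """
    beta = np.linalg.pinv(H) @ Y                  # initial output weights
    importance = np.linalg.norm(beta, axis=1)     # one score per hidden feature
    k = max(1, int(keep_frac * H.shape[1]))
    keep = np.sort(np.argsort(importance)[-k:])   # indices of retained features
    beta_k = np.linalg.pinv(H[:, keep]) @ Y       # refit on the kept features
    return keep, beta_k
```

Skipping the dropped columns saves both the first-stage propagation cost and the corresponding second-stage additions, at a small accuracy cost that the refit partially recovers.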