Real-time visual identification and tracking of objects is a computationally intensive task, particularly in cluttered environments which contain many visual distracters. In this paper we describe a real-time bio-inspired system for object tracking and identification which combines an event-based vision sensor with a convolutional neural network running on an FPGA for recognition. The event-based vision sensor detects only changes in the scene, naturally responding to moving objects and ignoring static distracters in the background. We demonstrate operation of the system on two tasks. The first is a proof of concept for a remote monitoring application in which the system tracks and distinguishes between cars, bikes, and pedestrians on a road. The second task targets grasp planning for an upper limb prosthesis and involves detecting and identifying household objects, as well as determining their orientation relative to the camera. The second task is used to quantify performance of the system, which discriminates between 8 different objects in 2.25 ms with 99.10% accuracy and determines object orientation to within ±4.5° in an additional 2.28 ms with 97.76% accuracy.
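The event-frame-plus-CNN pipeline can be sketched compactly. Below is a minimal, illustrative Python version that accumulates event-camera events into a binary frame and classifies it with a small convolutional network; the frame size, layer dimensions, and function names are assumptions for illustration and do not reproduce the FPGA implementation described above.

```python
# Illustrative sketch: accumulate (x, y, t, polarity) events into a frame,
# then classify with a small CNN. Architecture and sizes are assumptions.
import numpy as np
import torch
import torch.nn as nn

def events_to_frame(events, height=64, width=64):
    """Accumulate a list of (x, y, t, polarity) events into a binary frame."""
    frame = np.zeros((height, width), dtype=np.float32)
    for x, y, _, _ in events:
        frame[y, x] = 1.0
    return frame

class TinyCNN(nn.Module):
    """Small convolutional classifier over event frames (8 object classes here)."""
    def __init__(self, num_classes=8):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(1, 8, 5, padding=2), nn.ReLU(), nn.MaxPool2d(2),
            nn.Conv2d(8, 16, 5, padding=2), nn.ReLU(), nn.MaxPool2d(2),
        )
        self.classifier = nn.Linear(16 * 16 * 16, num_classes)  # 64x64 input -> 16x16x16

    def forward(self, x):
        return self.classifier(self.features(x).flatten(1))

# Usage (hypothetical event list):
# frame = events_to_frame(event_list)
# logits = TinyCNN()(torch.from_numpy(frame)[None, None])  # shape (1, 8)
```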
Force myography has been proposed as an appealing alternative to electromyography for control of upper limb prostheses. A limitation of this technique is the non-stationary nature of the recorded force data: force patterns vary under the influence of factors such as changes in the orientation and position of the prosthesis. We hereby propose an incremental learning method to overcome this limitation. We use an online sequential extreme learning machine in which occasional updates allow continual adaptation to signal changes. The applicability and effectiveness of this approach are demonstrated for predicting the hand state from forearm muscle forces at various arm positions. The results show that incremental updates are indeed effective for maintaining a stable level of performance, achieving an average classification accuracy of 98.75% for two subjects.
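The incremental updates can be illustrated with a standard online sequential extreme learning machine (OS-ELM). The sketch below follows the usual OS-ELM recursive least-squares update on chunks of new data; the feature dimensions, activation, regularization term, and class names are illustrative assumptions rather than the authors' exact configuration.

```python
# Minimal OS-ELM sketch in NumPy, assuming force-myography feature vectors X
# and one-hot hand-state labels T. Sizes and hyperparameters are assumptions.
import numpy as np

class OSELM:
    def __init__(self, n_inputs, n_hidden, n_outputs, seed=0):
        rng = np.random.default_rng(seed)
        self.W = rng.normal(size=(n_inputs, n_hidden))   # random input weights (fixed)
        self.b = rng.normal(size=n_hidden)                # random hidden biases (fixed)
        self.P = None                                     # inverse correlation matrix
        self.beta = np.zeros((n_hidden, n_outputs))       # output weights (learned)

    def _hidden(self, X):
        return np.tanh(X @ self.W + self.b)               # hidden-layer activations

    def init_fit(self, X0, T0):
        """Initial batch solution (regularized least squares)."""
        H0 = self._hidden(X0)
        self.P = np.linalg.inv(H0.T @ H0 + 1e-3 * np.eye(H0.shape[1]))
        self.beta = self.P @ H0.T @ T0

    def partial_fit(self, X, T):
        """Incremental update on a new chunk, e.g. after an arm-position change."""
        H = self._hidden(X)
        K = np.linalg.inv(np.eye(H.shape[0]) + H @ self.P @ H.T)
        self.P -= self.P @ H.T @ K @ H @ self.P
        self.beta += self.P @ H.T @ (T - H @ self.beta)

    def predict(self, X):
        return np.argmax(self._hidden(X) @ self.beta, axis=1)  # predicted hand state
```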
Motion segmentation is a critical pre-processing step for autonomous robotic systems to facilitate tracking of moving objects in cluttered environments. Event-based sensors are low-power analog devices that represent a scene by means of asynchronous updates of only the dynamic details at high temporal resolution and, hence, require significantly fewer computations. However, motion segmentation using spatiotemporal data is a challenging task due to data asynchrony. Prior approaches for object tracking using neuromorphic sensors perform well only while the sensor is static or a known model of the object to be followed is available. To address these limitations, in this paper we develop a technique for generalized motion segmentation based on spatial statistics across time frames. First, we create micromotion on the platform to facilitate the separation of static and dynamic elements of a scene, inspired by human saccadic eye movements. Second, we introduce the concept of spike-groups as a methodology to partition spatio-temporal event groups, which facilitates computation of scene statistics and characterization of the objects in it. Experimental results show that our algorithm is able to classify dynamic objects with a moving camera with a maximum accuracy of 92%.
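As a rough illustration of the spike-group idea, the sketch below partitions events falling within a short time window into connected spatial groups and computes simple per-group statistics; the grouping rule, window length, and statistics are assumptions made for illustration and are not the paper's exact formulation.

```python
# Illustrative sketch: partition events (x, y, t) in a time window into
# spatially connected "spike-groups" and summarize each group.
# Thresholds, connectivity, and statistics are assumptions.
import numpy as np
from scipy.ndimage import label

def spike_groups(events, height, width, window_us=10_000):
    """Group events from one time window into connected pixel clusters."""
    t0 = events[0][2]
    mask = np.zeros((height, width), dtype=bool)
    for x, y, t in events:
        if t - t0 <= window_us:
            mask[y, x] = True
    labels, n = label(mask)          # 4-connected groups of active pixels
    groups = []
    for k in range(1, n + 1):
        ys, xs = np.nonzero(labels == k)
        groups.append({
            "size": len(xs),
            "centroid": (xs.mean(), ys.mean()),
            "spread": (xs.std(), ys.std()),  # large, coherent groups suggest a moving object
        })
    return groups
```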
During the last decade, significant advances in vibrotactile actuator design have led to the development of novel haptic technologies. Similarly, important innovations have been made in virtual reality for scene rendering and user tracking. However, the integration of these technologies has not been well explored. In this paper, we outline a broad design philosophy and integration plan for these tools. In addition, we give an overview of applications for such a cohesive set of technologies. Preliminary results are provided to demonstrate their critical importance and future widespread use.