The goal of this work is the design and implementation of a low-cost system-on-FPGA for handwritten digit recognition, based on a relatively deep and wide network of perceptrons. To increase the performance of the application on embedded processors, whose performance falls well below that of standard general-purpose CPUs, a regularization method was used during the training phase of the neural network that allows for a drastic reduction in floating-point operations. Our implementation achieves a 3× speed-up over an unoptimized baseline implementation while keeping accuracy within an acceptable range. Our results reinforce the view that FPGAs are well suited for deploying complex artificial intelligence modules.
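The abstract above does not detail the regularization scheme; a common choice for reducing floating-point operations is sparsity-inducing (L1-style) regularization, which drives many weights to zero so an embedded implementation can skip the corresponding multiply-accumulates. The sketch below is illustrative only (the weight sizes, threshold, and names are assumptions, not taken from the paper); it emulates the effect by hard-thresholding small weights and counting the operations saved:

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical dense weight matrix of one perceptron layer
# (illustrative dimensions, not from the paper).
W = rng.normal(size=(16, 64))
x = rng.normal(size=64)

# L1-style regularization during training tends to drive many weights
# toward zero; here we emulate the trained result by hard-thresholding.
threshold = 1.0
W_sparse = np.where(np.abs(W) > threshold, W, 0.0)

def sparse_matvec(W, x):
    """Matrix-vector product that skips zero weights, as an embedded
    implementation would, returning the result and the MAC count."""
    y = np.zeros(W.shape[0])
    macs = 0
    rows, cols = np.nonzero(W)
    for i, j in zip(rows, cols):
        y[i] += W[i, j] * x[j]
        macs += 1
    return y, macs

y_sparse, macs = sparse_matvec(W_sparse, x)
dense_macs = W.size  # every entry costs one multiply-accumulate
print(f"multiply-accumulates: {macs} sparse vs {dense_macs} dense")
```

The accuracy/operation-count trade-off the abstract mentions corresponds to choosing the threshold (or regularization strength): a larger value removes more operations but perturbs the layer output more.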
We present a framework for fast prototyping of embedded video applications. Starting with a high-level executable specification written in OpenCV, we apply semi-automatic refinements of the specification at various levels (TLM and RTL), the lowest of which is a system-on-chip prototype on FPGA. The refinement leverages the structure of image processing applications to map high-level representations to lower-level implementations with limited user intervention. Our framework integrates the computer vision library OpenCV for software, SystemC/TLM for high-level hardware representation, and UVM and QEMU-OS for virtual prototyping and verification into a single, uniform design and verification flow. With applications in the field of driving assistance and object recognition, we demonstrate the usability of our framework in producing performant and correct designs.
Tracking several objects across multiple cameras is essential for collaborative monitoring in distributed camera networks. The tractability of the related optimization, which aims at tracking a maximal number of important targets, decreases with the growing number of objects moving across cameras. To tackle this issue, a viable model and sound object representation are required, ones that can leverage the power of existing tools at run-time for fast computation of solutions. In this paper, we provide a formalism for object tracking across multiple cameras. A first assignment of objects to cameras is performed at start-up to initialize a set of distributed trackers in embedded cameras. We model the run-time self-coordination problem with target handover by encoding it as a run-time binding of objects to cameras, an approach that has successfully been used in high-level system synthesis. Our model of distributed tracking is based on Answer Set Programming (ASP), a declarative programming paradigm that helps formulate the distribution and target handover problem as a search problem; by using existing answer set solvers, we produce stable solutions in real-time by incrementally solving time-based encoded ASP problems. The effectiveness of the proposed approach is demonstrated on a 3-node camera network deployment.
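The ASP encoding itself is not shown in the abstract; to make the underlying search problem concrete, the sketch below solves a toy object-to-camera binding by exhaustive search in Python. All data (visibility map, camera names, per-camera capacity) are hypothetical illustrations, and an ASP solver would express the same constraints declaratively and solve them far more efficiently:

```python
from itertools import product

# Hypothetical visibility map: which cameras can currently see which
# objects (illustrative data, not from the paper's deployment).
visible = {
    "obj1": {"cam1", "cam2"},
    "obj2": {"cam2", "cam3"},
    "obj3": {"cam1", "cam3"},
}
cameras = ["cam1", "cam2", "cam3"]
capacity = 1  # each embedded camera tracks at most one object here

def bind_objects(visible, cameras, capacity):
    """Exhaustive search for a feasible object-to-camera binding,
    mimicking what an answer set solver computes declaratively."""
    objects = sorted(visible)
    for assignment in product(cameras, repeat=len(objects)):
        binding = dict(zip(objects, assignment))
        # constraint 1: each object is bound to a camera that sees it
        if any(cam not in visible[obj] for obj, cam in binding.items()):
            continue
        # constraint 2: per-camera tracking capacity is respected
        loads = list(binding.values())
        if any(loads.count(cam) > capacity for cam in cameras):
            continue
        return binding
    return None  # no feasible binding (handover would be impossible)

binding = bind_objects(visible, cameras, capacity)
print(binding)
```

Target handover corresponds to re-solving this binding as the visibility map changes over time, which is why the paper's incremental, time-based ASP encoding matters for real-time operation.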