In this paper we study the computation error tolerance properties of motion estimation algorithms. We are motivated by two scenarios in which hardware systems may introduce computation errors. First, we consider hardware faults such as those arising in a typical fabrication process. Second, we consider "soft" errors due to voltage scaling, which can arise when operating at a lower voltage than specified for the system. Current practice is to discard all faulty systems. However, there is increasing interest in tools that can identify faulty systems that still provide acceptable performance. We show that motion estimation (ME) algorithms exhibit significant error tolerance in these two scenarios. We propose simple error models and use them to provide insights into which features of these ME algorithms lead to increased error tolerance. Our comparison of full search ME and a state-of-the-art fast ME approach in the context of H.264/AVC shows that while both techniques are error tolerant, the faster algorithm is in fact more robust to computation errors.
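To make the setting concrete, the following Python sketch implements full-search block-matching ME with a sum-of-absolute-differences (SAD) cost and an optional fault hook that perturbs each candidate's cost. This is a minimal illustration under our own simplifying assumptions; the function names (full_search_me, stuck_sad_bit) and the single-bit fault model are illustrative and are not taken from the paper. The point it demonstrates is that a corrupted cost typically only promotes a slightly worse candidate motion vector, rather than producing an invalid result.

```python
import numpy as np

def sad(block, cand):
    """Sum of absolute differences (SAD) between a block and a candidate."""
    return int(np.abs(block.astype(np.int32) - cand.astype(np.int32)).sum())

def full_search_me(cur, ref, bx, by, bsize=8, srange=4, fault=None):
    """Exhaustive block-matching ME over a +/-srange window around (bx, by).

    `fault`, if given, perturbs each candidate's SAD and stands in for a
    hardware computation error (a hypothetical, simplified model).
    """
    block = cur[by:by + bsize, bx:bx + bsize]
    best_mv, best_cost = (0, 0), None
    for dy in range(-srange, srange + 1):
        for dx in range(-srange, srange + 1):
            y, x = by + dy, bx + dx
            if y < 0 or x < 0 or y + bsize > ref.shape[0] or x + bsize > ref.shape[1]:
                continue
            cost = sad(block, ref[y:y + bsize, x:x + bsize])
            if fault is not None:
                cost = fault(cost)  # e.g. a stuck bit in the SAD accumulator
            if best_cost is None or cost < best_cost:
                best_mv, best_cost = (dx, dy), cost
    return best_mv, best_cost

def stuck_sad_bit(cost):
    """Hypothetical fault: bit 3 of the accumulated SAD is stuck at 1."""
    return cost | (1 << 3)
```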
The progress of VLSI technology towards deep sub-micron feature sizes, e.g., sub-100 nanometer technology, has created a growing impact of hardware defects and fabrication process variability, which lead to reductions in yield rate. To address these problems, a new approach, system-level error tolerance (ET), has recently been introduced. Considering that a significant percentage of the entire chip production is discarded due to minor imperfections, this approach is based on accepting imperfect chips that introduce imperceptible or acceptable system-level degradation, which leads to increases in overall effective yield. In this paper, we investigate the impact of hardware faults on video compression performance, with a focus on the motion estimation (ME) process. More specifically, we provide an analytical formulation of the impact of single and multiple stuck-at faults within the ME computation. We further present a model for estimating the system-level performance degradation due to such faults, which can be used in an error-tolerance-based decision strategy for accepting a given faulty chip. We also show how different faults and ME search algorithms compare in terms of error tolerance and define the characteristics of search algorithms that lead to increased error tolerance. Finally, we show that different hardware architectures performing the same metric computation have different error tolerance characteristics, and we present the optimal ME hardware architecture in terms of error tolerance. While we focus on ME hardware, our work could also be applied to systems (e.g., classifiers, matching pursuits, vector quantization) where a selection is made among several alternatives (e.g., class label, basis function, quantization codeword) based on which choice minimizes an additive metric of interest.
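As a rough illustration of that last point, selection among alternatives by minimizing an additive metric, the sketch below injects a hypothetical single stuck-at-1 fault into every partial term of the metric and measures how often the faulty selection still matches the fault-free one. The helper names, the random toy data, and the agreement experiment are our own assumptions for illustration; they are not the paper's analytical formulation or its hardware architectures.

```python
import random

def select_min(candidates, terms, fault=None):
    """Return the candidate minimizing an additive metric sum(terms(c)).

    `fault`, if given, perturbs every partial term before accumulation,
    standing in for a single stuck-at fault in the metric datapath
    (a hypothetical, simplified fault model).
    """
    best, best_cost = None, float("inf")
    for c in candidates:
        cost = 0
        for t in terms(c):
            if fault is not None:
                t = fault(t)
            cost += t
        if cost < best_cost:
            best, best_cost = c, cost
    return best, best_cost

def stuck_at_1(k):
    """Stuck-at-1 on bit k of each partial term (e.g. a pixel difference)."""
    return lambda t: t | (1 << k)

# Toy experiment: how often does the faulty datapath still select the
# fault-free winner? A high agreement rate indicates error tolerance.
random.seed(0)
trials, agree = 1000, 0
for _ in range(trials):
    data = {c: [random.randint(0, 255) for _ in range(64)] for c in range(16)}
    clean, _ = select_min(data, lambda c: data[c])
    faulty, _ = select_min(data, lambda c: data[c], fault=stuck_at_1(2))
    agree += (clean == faulty)
print(f"faulty and fault-free selections agreed in {agree / trials:.1%} of trials")
```

Since each term is perturbed by at most 2^k in this model, the total cost distortion per candidate is bounded, so candidates whose fault-free costs are well separated keep their relative order; this is one intuition for why such faults can leave the final selection, and hence the system-level output, largely unchanged.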