Gabor wavelets (GWs) have been commonly used for extracting local features for various applications, such as recognition, tracking, and edge detection. However, extracting Gabor features is computationally intensive, so the features may be impractical for real-time applications. In this paper, we propose a set of simplified versions of GWs (SGWs) and an efficient algorithm for extracting their features for edge detection. Experimental results show that our SGW-based edge-detection algorithm achieves a performance level similar to that of GWs, while feature extraction with SGWs runs faster than with GWs, even when the GW features are computed using the fast Fourier transform. Compared to the Canny detector and other conventional edge-detection methods, our proposed method achieves better performance in terms of detection accuracy and computational complexity.
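The abstract does not spell out the filter definitions, so the following is only a minimal sketch of the standard (non-simplified) Gabor approach it builds on: the real part of a 2-D Gabor kernel is correlated with the image at several orientations, and the edge strength is taken as the maximum absolute response. All parameter values and function names here are illustrative assumptions, not the paper's SGW construction.

```python
import math

def gabor_kernel(size, sigma, theta, lam, gamma=0.5):
    """Real part of a 2-D Gabor kernel (illustrative parameters, not the paper's SGWs)."""
    half = size // 2
    kernel = []
    for y in range(-half, half + 1):
        row = []
        for x in range(-half, half + 1):
            # Rotate coordinates to orientation theta.
            xr = x * math.cos(theta) + y * math.sin(theta)
            yr = -x * math.sin(theta) + y * math.cos(theta)
            # Gaussian envelope times a cosine carrier along xr.
            env = math.exp(-(xr * xr + gamma * gamma * yr * yr) / (2 * sigma * sigma))
            row.append(env * math.cos(2 * math.pi * xr / lam))
        kernel.append(row)
    return kernel

def convolve_at(image, kernel, cy, cx):
    """Correlate the kernel with the image patch centred at (cy, cx); borders are skipped."""
    half = len(kernel) // 2
    acc = 0.0
    for ky in range(len(kernel)):
        for kx in range(len(kernel)):
            iy, ix = cy + ky - half, cx + kx - half
            if 0 <= iy < len(image) and 0 <= ix < len(image[0]):
                acc += image[iy][ix] * kernel[ky][kx]
    return acc

def edge_magnitude(image, cy, cx, size=7, sigma=2.0, lam=4.0, n_orient=4):
    """Edge strength: maximum absolute Gabor response over n_orient orientations."""
    responses = []
    for k in range(n_orient):
        theta = k * math.pi / n_orient
        responses.append(abs(convolve_at(image, gabor_kernel(size, sigma, theta, lam), cy, cx)))
    return max(responses)
```

On a synthetic vertical step edge, `edge_magnitude` responds strongly at the edge and is zero in flat regions, which is the behaviour the SGWs are designed to reproduce at lower cost.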
Optical lenses can only focus a single scene plane onto the sensor, leaving the rest of the scene subject to varying levels of defocus. The apparent depth of field can be extended by capturing a sequence with varying focal planes and merging it, selecting for each pixel in the target image the most focused corresponding pixel from the stack. This process depends heavily on capturing a stabilised sequence, a requirement that is impractical for hand-held cameras. Here we develop a novel method that can merge a focus stack captured by a hand-held camera despite changes in shooting position and focus. Our approach registers the sequence using an affine transformation before fusing the focus stack. We develop a merging process that identifies the focused pixels at each position in the stack and thereby selects the most appropriate pixels for the synthetically focused image. We also propose a novel approach for capturing a qualified focus stack on mobile-phone cameras, and we test our approach on a mobile-phone platform that can automatically capture a focus stack as easily as a photographer captures a conventional image.
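The core merging step described above can be sketched with a simple per-pixel focus measure: score each frame's pixel by its local Laplacian energy and keep the pixel from the sharpest frame. This is only an assumed minimal illustration of the select-the-most-focused-pixel idea; the paper's actual focus measure and the preceding affine registration step are not shown (frames here are assumed already aligned), and all function names are our own.

```python
def laplacian_energy(image, y, x):
    """Absolute 4-neighbour Laplacian response: a simple per-pixel focus measure."""
    h, w = len(image), len(image[0])
    c = image[y][x]
    acc = 0.0
    for dy, dx in ((-1, 0), (1, 0), (0, -1), (0, 1)):
        ny, nx = y + dy, x + dx
        if 0 <= ny < h and 0 <= nx < w:
            acc += image[ny][nx] - c
    return abs(acc)

def merge_focus_stack(stack):
    """For each pixel, take the value from the frame with the highest focus measure."""
    h, w = len(stack[0]), len(stack[0][0])
    out = [[0.0] * w for _ in range(h)]
    for y in range(h):
        for x in range(w):
            best = max(range(len(stack)), key=lambda i: laplacian_energy(stack[i], y, x))
            out[y][x] = stack[best][y][x]
    return out
```

Given one sharp frame (a crisp step edge) and one defocused frame (a smooth ramp), the merge takes edge pixels from the sharp frame, since its Laplacian energy dominates there.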
An efficient fast-search motion estimation algorithm is highly desired in many video coding applications. Most previous fast algorithms search blindly, without critically exploiting the characteristics of the motion information of the sequence being coded. In this paper, we propose a new fast-search motion estimation algorithm that uses the directional information of the checking points selected during the search procedure to guide the search. Statistical information characterising the motion activities of the blocks in the previous frame is used to predict the motion activities of the blocks in the current frame. This saves computation and avoids spending effort on blocks that are unlikely to provide the optimal match, so the computational resources can be reassigned to locations that deserve more search effort than others. Extensive experiments show that our approach achieves a speedup of 1.14 to 5.39 times over recent fast algorithms and 150 times over the exhaustive full-search algorithm on average, with a negligible degradation in peak signal-to-noise ratio (PSNR).
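To make the fast-search idea concrete, here is a hedged sketch of a generic step-halving block-matching search: starting from the zero motion vector, it probes four directional neighbours, moves whenever the sum of absolute differences (SAD) improves, and halves the step when it does not. This is a simplified stand-in for the class of fast algorithms the paper builds on, not the authors' directional/statistical method; all names and parameters are illustrative.

```python
def sad(cur, ref, by, bx, dy, dx, bs):
    """Sum of absolute differences between a bs x bs block in `cur` at (by, bx)
    and the reference block displaced by motion vector (dy, dx)."""
    h, w = len(ref), len(ref[0])
    total = 0
    for y in range(bs):
        for x in range(bs):
            ry, rx = by + y + dy, bx + x + dx
            if not (0 <= ry < h and 0 <= rx < w):
                return float('inf')  # reject candidates that leave the frame
            total += abs(cur[by + y][bx + x] - ref[ry][rx])
    return total

def fast_block_search(cur, ref, by, bx, bs=4, max_step=4):
    """2-D logarithmic-style search: follow improving directions, halve the step otherwise."""
    best_dy = best_dx = 0
    best_cost = sad(cur, ref, by, bx, 0, 0, bs)
    step = max_step
    while step >= 1:
        improved = False
        for dy, dx in ((-step, 0), (step, 0), (0, -step), (0, step)):
            cost = sad(cur, ref, by, bx, best_dy + dy, best_dx + dx, bs)
            if cost < best_cost:
                best_cost, best_dy, best_dx = cost, best_dy + dy, best_dx + dx
                improved = True
        if not improved:
            step //= 2
    return best_dy, best_dx, best_cost
```

For a bright patch shifted two pixels to the right between frames, the search recovers the motion vector (0, 2) with zero residual while evaluating only a handful of candidates, versus hundreds for an exhaustive full search.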