In this paper, two block-based texture methods are proposed for content-based image retrieval (CBIR). Both approaches use the Local Binary Pattern (LBP) texture feature as the source of image description. The first method divides the query and database images into equally sized blocks, from which LBP histograms are extracted; the block histograms are then compared using a relative L1 dissimilarity measure based on the Minkowski distances. The second approach applies the block division only to the database images and calculates a single feature histogram for the query. It sums the database block histograms according to the size of the query image and finds the best match by exploiting a sliding search window. The first method is evaluated against color-correlogram- and edge-histogram-based algorithms. The second, user-interaction-dependent approach is used to provide example queries. The experiments show the clear superiority of the new algorithms over their competitors.
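To make the first method concrete, below is a minimal sketch of block-wise LBP histogram extraction and the relative L1 comparison. It assumes 8-neighbour uniform LBP, square non-overlapping blocks, and scikit-image's local_binary_pattern; the block size and LBP parameters are illustrative choices, not necessarily the paper's.

```python
# Sketch only: block size, P, and R are illustrative, not the paper's settings.
import numpy as np
from skimage.feature import local_binary_pattern

def block_lbp_histograms(image, block=64, P=8, R=1.0):
    """Split a grayscale image into equally sized blocks and return one
    normalized LBP histogram per block."""
    codes = local_binary_pattern(image, P, R, method="uniform")
    n_bins = P + 2  # "uniform" LBP maps every pixel to one of P + 2 codes
    hists = []
    h, w = codes.shape
    for y in range(0, h - block + 1, block):
        for x in range(0, w - block + 1, block):
            hist, _ = np.histogram(codes[y:y + block, x:x + block],
                                   bins=n_bins, range=(0, n_bins))
            hists.append(hist / max(hist.sum(), 1))
    return np.array(hists)

def relative_l1(h1, h2, eps=1e-12):
    """Relative L1 dissimilarity: sum_i |h1_i - h2_i| / (h1_i + h2_i)."""
    return float(np.sum(np.abs(h1 - h2) / (h1 + h2 + eps)))
```

A query block can then be scored against database blocks with relative_l1; the exact block-matching strategy used in the paper may differ from this sketch.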
In this paper, we introduce a novel real-time tracker based on color, texture, and motion information. An RGB color histogram and a correlogram (autocorrelogram) are exploited as color cues, and texture properties are represented by local binary patterns (LBP). An object's motion is taken into account through its location and trajectory. After extraction, these features are used to build a unifying distance measure. The measure is utilized both in tracking and in classifying the event in which an object leaves a group. The initial object detection is done by a texture-based background subtraction algorithm. Experiments on indoor and outdoor surveillance videos show that the unified system works better than versions based on single features. It also copes well with the low illumination conditions and low frame rates that are common in large-scale surveillance systems.
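As a rough illustration of how several cues can be fused into one score, here is a minimal sketch of a weighted combination of per-feature distances. It assumes each feature distance has already been normalized to [0, 1]; the weights and the linear combination rule are assumptions for illustration, not the paper's exact measure.

```python
# Sketch only: the weights and linear rule are assumed, not the paper's.
import numpy as np

def unified_distance(feature_dists, weights):
    """Weighted combination of per-feature distances (e.g., color histogram,
    correlogram, LBP texture, location, trajectory), each pre-normalized
    to [0, 1]."""
    d = np.asarray(feature_dists, dtype=float)
    w = np.asarray(weights, dtype=float)
    return float(np.dot(w, d) / w.sum())

# Example: color, correlogram, texture, location, trajectory distances,
# with motion cues weighted less than appearance cues.
score = unified_distance([0.21, 0.35, 0.18, 0.05, 0.10],
                         weights=[1.0, 1.0, 1.0, 0.5, 0.5])
```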
Thus far, research on print-cam robust watermarking methods has focused on finding new methods for embedding and extracting the watermark. The capturing process itself, however, has been neglected in scientific research. In this paper, we propose a solution for the situation in which the watermarked image is captured at a wide angle and the depth of field of the camera is too shallow to capture the whole scene in focus, resulting in unfocused areas. The solution proposed here relies on a subfield of computational photography, namely all-in-focus imaging. All-in-focus images are generated by fusing multiple images of the same scene, taken at different focus distances, so that the object being photographed is fully in focus. Traditionally, the images to be fused are selected by hand from the focal stack, or the whole stack is used to build the all-in-focus image. In mobile phone applications, computational resources are limited: using the full focal stack would result in long processing times, and manual selection of images would not be practical. We therefore also propose a method for optimizing the size of the focal stack and automatically selecting appropriate images for fusion. We show that the watermark can still be recovered accurately from the reconstructed all-in-focus image.
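For intuition, below is a minimal sketch of focal-stack fusion that picks, per pixel, the sharpest frame in a registered stack. The Laplacian-magnitude focus measure and the averaging window size are illustrative stand-ins for whatever focus measure and selection scheme the authors actually use.

```python
# Sketch only: assumes a registered grayscale focal stack; the Laplacian
# focus measure and window size are illustrative choices.
import numpy as np
from scipy.ndimage import laplace, uniform_filter

def all_in_focus(stack, window=9):
    """Fuse a registered focal stack of shape (N, H, W): for every pixel,
    take the value from the frame with the strongest local Laplacian
    response, i.e. the frame that is sharpest at that location."""
    stack = np.asarray(stack, dtype=float)
    sharpness = np.stack([uniform_filter(np.abs(laplace(frame)), window)
                          for frame in stack])
    best = np.argmax(sharpness, axis=0)          # (H, W) frame-index map
    return np.take_along_axis(stack, best[None], axis=0)[0]
```

Per-pixel selection like this keeps only the in-focus content of each frame, which is why shrinking the stack to a few well-chosen focus distances can cut processing time without losing much sharpness.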