Image retargeting is the task of adapting images for display on screens of different sizes. The adaptation should preserve high-level visual information and low-level features such as texture as faithfully as possible for the human visual system, even though the output image has different dimensions; simple methods such as scaling and cropping are therefore not adequate. In recent years, researchers have improved existing retargeting methods and introduced new ones, but no single method works well for all types of images: different images require different retargeting methods. Image retargeting is closely related to image saliency detection, a relatively new image processing task. Earlier saliency detection methods relied on local and global but low-level image information and are known as bottom-up approaches, whereas newer top-down and mixed methods also consider high-level, semantic knowledge of the image. In this paper, we present methods for both saliency detection and retargeting. For saliency detection, the use of image context and semantic segmentation is examined, and a novel mixed bottom-up and top-down saliency detection method is introduced. The resulting saliency map then drives a modified version of an existing retargeting technique. The results suggest that the proposed image retargeting pipeline performs very well compared with the other tested methods, and the subjective evaluations collected on the Pascal dataset can serve as a retargeting quality assessment dataset for further research.
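The abstract does not name the retargeting technique that is modified, so the following is only an illustrative sketch of how a saliency map can steer a standard seam-carving energy so that salient regions survive width reduction; the function names, the weighting scheme, and the choice of seam carving itself are assumptions, not the authors' implementation.

```python
import numpy as np

def saliency_weighted_energy(gray, saliency, alpha=0.5):
    """Blend gradient magnitude with a saliency map (both HxW, floats in [0, 1])."""
    gy, gx = np.gradient(gray)
    gradient_energy = np.abs(gx) + np.abs(gy)
    gradient_energy /= gradient_energy.max() + 1e-8
    # Salient pixels get high energy so seams are routed around them.
    return (1 - alpha) * gradient_energy + alpha * saliency

def find_vertical_seam(energy):
    """Dynamic programming: minimal-cost 8-connected vertical seam."""
    h, w = energy.shape
    cost = energy.copy()
    back = np.zeros((h, w), dtype=np.int64)
    for i in range(1, h):
        for j in range(w):
            lo, hi = max(j - 1, 0), min(j + 2, w)
            k = np.argmin(cost[i - 1, lo:hi]) + lo
            back[i, j] = k
            cost[i, j] += cost[i - 1, k]
    seam = np.zeros(h, dtype=np.int64)
    seam[-1] = np.argmin(cost[-1])
    for i in range(h - 2, -1, -1):
        seam[i] = back[i + 1, seam[i + 1]]
    return seam

def remove_seam(img, seam):
    """Drop one pixel per row along the seam (works for HxW or HxWx3 arrays)."""
    h, w = img.shape[:2]
    mask = np.ones((h, w), dtype=bool)
    mask[np.arange(h), seam] = False
    return img[mask].reshape(h, w - 1, *img.shape[2:])
```

Reducing the width by k pixels would repeat the energy/seam/removal steps k times, recomputing the energy after each removal so the saliency map keeps protecting the important regions.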
Saliency map detection, as a method for identifying important regions of an image, is used in many applications such as image classification and recognition. We propose that context detection can play an essential role in image saliency detection, which requires the extraction of high-level features. In this paper, a saliency detection method is proposed that is based on image context detection, using semantic segmentation as a high-level feature. The saliency map derived from the semantic information is fused with color- and contrast-based saliency maps to generate the final saliency map. Simulation results on the Pascal-voc11 image dataset show 99% accuracy in context detection, and the final saliency map produced by the proposed method gives acceptable results in detecting salient points.
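As a rough sketch of the fusion step described above (the contrast measure, the way a semantic mask is turned into a saliency channel, and the fusion weights are all assumptions for illustration, not the published method):

```python
import numpy as np

def color_contrast_saliency(image):
    """Simple global-contrast saliency: distance of each pixel's color
    from the mean image color (image is HxWx3, floats in [0, 1])."""
    mean_color = image.reshape(-1, 3).mean(axis=0)
    sal = np.linalg.norm(image - mean_color, axis=2)
    return sal / (sal.max() + 1e-8)

def semantic_saliency(seg_labels, salient_classes):
    """Binary saliency from a semantic segmentation map: pixels whose
    class label belongs to the context-selected salient classes get 1."""
    return np.isin(seg_labels, list(salient_classes)).astype(np.float32)

def fuse_saliency(sal_color, sal_semantic, w_semantic=0.6):
    """Weighted fusion of the low-level and semantic saliency maps."""
    fused = (1 - w_semantic) * sal_color + w_semantic * sal_semantic
    return fused / (fused.max() + 1e-8)
```

A linear weighted fusion is only one plausible choice; the same structure works for multiplicative or learned fusion rules.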
In recent years, the widespread use of artistic effects for editing and beautifying images has encouraged researchers to look for new approaches to this task. Most existing methods apply artistic effects to the whole image. Exploiting neural-network vision technologies such as object detection and semantic segmentation offers a new viewpoint in this area. In this paper, we use an instance segmentation neural network to obtain a class mask for filtering the background and foreground of an image separately. A top prior-mask selection step lets the user choose an object class for filtering. Different artistic effects are applied in the filtering process to meet the requirements of a wide variety of users, and the method is flexible enough to allow new filters to be added. We use a Mask R-CNN instance segmentation network pretrained on the COCO dataset as the segmentation backbone. Experiments with different filters are reported, and the system's output shows that this approach can create satisfying artistic images with fast operation and a simple interface.
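A minimal sketch of the mask-then-filter idea, using torchvision's pretrained Mask R-CNN as one readily available COCO instance-segmentation backend; the score threshold, the blur filter standing in for an artistic effect, and the blending are illustrative choices, not the authors' exact pipeline.

```python
import numpy as np
import torch
import torchvision
from PIL import Image, ImageFilter

# Pretrained Mask R-CNN (COCO) from torchvision as the instance-segmentation backend.
model = torchvision.models.detection.maskrcnn_resnet50_fpn(pretrained=True).eval()

def stylize_background(image_path, target_label, score_thresh=0.7):
    """Keep instances of `target_label` untouched and apply an artistic
    (here: heavy blur) filter to everything else."""
    img = Image.open(image_path).convert("RGB")
    tensor = torchvision.transforms.functional.to_tensor(img)
    with torch.no_grad():
        pred = model([tensor])[0]

    # Union of masks for the selected class above the confidence threshold.
    keep = (pred["labels"] == target_label) & (pred["scores"] > score_thresh)
    if keep.any():
        mask = (pred["masks"][keep, 0] > 0.5).any(dim=0).numpy()
    else:
        mask = np.zeros((img.height, img.width), dtype=bool)

    filtered = img.filter(ImageFilter.GaussianBlur(radius=8))  # stand-in artistic effect
    out = np.where(mask[..., None], np.asarray(img), np.asarray(filtered))
    return Image.fromarray(out.astype(np.uint8))
```

With COCO category ids, target_label 1 corresponds to the person class, a natural default for a top prior-mask selection; swapping GaussianBlur for any other PIL or custom filter changes the artistic effect without touching the masking logic.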