Photometric stereo is a fundamental computer vision technique known to recover 3D shape with high accuracy. It uses several input images of a static scene taken from a fixed camera position under varying illumination. The vast majority of studies of this 3D reconstruction method assume orthographic projection for the camera model. In addition, they mostly rely on the Lambertian reflectance model to describe how light scatters at surfaces. Thus, obtaining reliable photometric stereo results for real-world objects remains a challenging task. We address 3D reconstruction with a more realistic set of assumptions, combining for the first time the complete Blinn-Phong reflectance model and perspective projection. Furthermore, we compare two different ways of incorporating perspective projection into our model. Experiments are performed on both synthetic and real-world images; the latter were not captured under laboratory conditions. The results show the high potential of our method even for complex real-world applications such as medical endoscopy, where images may contain many specular highlights.
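For reference, the Blinn-Phong model extends Lambertian shading with ambient and specular terms built around the half-vector between the light and viewing directions. A minimal sketch of the standard textbook form follows; the coefficient names and default values are illustrative, not the paper's exact parameterization:

    import numpy as np

    def blinn_phong(n, l, v, k_a=0.1, k_d=0.6, k_s=0.3, alpha=32.0):
        """Standard Blinn-Phong intensity at a surface point.
        n, l, v: unit surface normal, light direction, viewing direction.
        k_a, k_d, k_s, alpha: illustrative ambient, diffuse, specular,
        and shininess coefficients (assumed values, not from the paper)."""
        h = (l + v) / np.linalg.norm(l + v)            # half-vector
        diffuse = k_d * max(float(np.dot(n, l)), 0.0)  # Lambertian term
        specular = k_s * max(float(np.dot(n, h)), 0.0) ** alpha
        return k_a + diffuse + specular                # ambient + diffuse + specular

The specular term is what produces the highlights that purely Lambertian photometric stereo cannot explain, which is why it matters for glossy scenes such as endoscopy imagery.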
With recent innovations in dense image captioning, it is now possible to describe every object in a scene with a caption, with objects localized by bounding boxes. However, interpreting such output is not trivial because many bounding boxes overlap. Furthermore, current captioning frameworks do not let the user express personal preferences, such as excluding areas that are not of interest. In this paper, we propose a novel hybrid deep learning architecture for interactive region segmentation and captioning in which the user can specify an arbitrary region of the image to be processed. To this end, a dedicated Fully Convolutional Network (FCN) named Lyncean FCN (LFCN) is trained on specially prepared training data to isolate the User Intention Region (UIR) as the output of an efficient segmentation. In parallel, a dense image captioning model provides a wide variety of captions for that region. The UIR is then described with the caption of the best-matching bounding box. To the best of our knowledge, this is the first work to provide such a comprehensive output. Our experiments show the superiority of the proposed approach over state-of-the-art interactive segmentation methods on several well-known datasets. In addition, replacing the bounding boxes with the result of the interactive segmentation leads to a better understanding of the dense image captioning output, as well as improved object detection accuracy in terms of Intersection over Union (IoU).
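A minimal sketch of how a UIR mask might be matched to the best dense-captioning box via IoU, assuming boxes in (x1, y1, x2, y2) format; the helper names are hypothetical, and the paper's actual scoring may differ from plain IoU:

    import numpy as np

    def bbox_iou(a, b):
        """Intersection over Union of two boxes in (x1, y1, x2, y2) format."""
        x1, y1 = max(a[0], b[0]), max(a[1], b[1])
        x2, y2 = min(a[2], b[2]), min(a[3], b[3])
        inter = max(0, x2 - x1) * max(0, y2 - y1)
        area_a = (a[2] - a[0]) * (a[3] - a[1])
        area_b = (b[2] - b[0]) * (b[3] - b[1])
        return inter / float(area_a + area_b - inter + 1e-8)

    def caption_for_uir(mask, boxes, captions):
        """Describe the UIR with the caption of the best-overlapping box
        (hypothetical helper; illustrates the matching step only)."""
        ys, xs = np.nonzero(mask)                      # pixels of the UIR mask
        uir_box = (xs.min(), ys.min(), xs.max() + 1, ys.max() + 1)
        best = max(range(len(boxes)), key=lambda i: bbox_iou(uir_box, boxes[i]))
        return captions[best]

Here the segmentation mask is reduced to its tight bounding box before comparison, which is one straightforward way to score overlap between a free-form region and rectangular caption proposals.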