Image captioning is the task of automatically generating a description of an image. Traditional image captioning models tend to generate a sentence describing the most conspicuous objects, but fail to describe a desired region or object as humans do. To generate sentences grounded in a given target, understanding the relationships between particular objects and describing them accurately is central to this task. To this end, this paper proposes the IANR model. In detail, information-augmented embedding is used to add prior information to each object, and a new Multi-Relational Weighted Graph Convolutional Network (MR-WGCN) is designed to fuse the information of adjacent objects. Then, a dynamic attention decoder module selectively focuses on particular objects or semantic contents. Finally, the model is optimized with a similarity loss. Experiments on MSCOCO Entities demonstrate that IANR obtains, to date, the best published CIDEr performance of 124.52% on the Karpathy test split. Extensive experiments and ablations on both the MSCOCO Entities and Flickr30k Entities datasets demonstrate the effectiveness of each module. Meanwhile, IANR achieves better accuracy and controllability than state-of-the-art models under widely used evaluation metrics.
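The abstract does not include an implementation; the sketch below is a minimal PyTorch-style illustration of a multi-relational weighted graph convolution layer in the spirit of MR-WGCN. The class name, the per-relation weighted adjacency input, and the row-sum normalisation are illustrative assumptions, not details taken from the paper.

```python
import torch
import torch.nn as nn

class MRWeightedGraphConv(nn.Module):
    """Illustrative multi-relational weighted graph convolution (MR-WGCN-style).

    Assumption: each relation type r comes with its own weighted adjacency
    matrix A_r over the detected objects; per-relation messages are summed
    with a self-transform before a non-linearity. This is a generic sketch,
    not the paper's exact layer.
    """

    def __init__(self, in_dim: int, out_dim: int, num_relations: int):
        super().__init__()
        self.rel_linears = nn.ModuleList(
            nn.Linear(in_dim, out_dim) for _ in range(num_relations)
        )
        self.self_linear = nn.Linear(in_dim, out_dim)

    def forward(self, x: torch.Tensor, adj: torch.Tensor) -> torch.Tensor:
        # x:   (num_objects, in_dim)  information-augmented object embeddings
        # adj: (num_relations, num_objects, num_objects) weighted adjacency per relation
        out = self.self_linear(x)
        for r, linear in enumerate(self.rel_linears):
            deg = adj[r].sum(dim=-1, keepdim=True).clamp(min=1e-6)  # row-normalise edge weights
            out = out + (adj[r] / deg) @ linear(x)
        return torch.relu(out)


# Tiny usage example: 5 objects, 2 relation types, random weighted graphs.
if __name__ == "__main__":
    layer = MRWeightedGraphConv(in_dim=16, out_dim=32, num_relations=2)
    objects = torch.randn(5, 16)
    adjacency = torch.rand(2, 5, 5)
    print(layer(objects, adjacency).shape)  # torch.Size([5, 32])
```

Each relation type contributes its own linear transform of the neighbours' features, weighted by that relation's edge weights, which is one common way to realise a multi-relational weighted graph convolution.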
In view of the complexity and diversity of scenes in scene classification, this paper exploits the contextual semantic relationships between objects to describe the visual attention regions of a scene and combines them with deep convolutional neural networks, constructing a scene classification model based on visual attention and deep networks. Firstly, the visual attention regions in the scene image are marked using a context-based saliency detection algorithm. Then, the original image and the visual attention region detection image are superimposed to obtain a visual attention region enhancement image. Furthermore, the deep convolutional features of the original image, the visual attention region detection image, and the visual attention region enhancement image are extracted using deep convolutional neural networks pretrained on the large-scale scene image dataset Places. Finally, deep visual attention features are constructed from the multilayer convolutional features of these networks, and a classification model is built on top of them. To verify the effectiveness of the proposed model, experiments are carried out on four standard scene datasets: LabelMe, UIUC-Sports, Scene-15, and MIT67. The results show that the proposed model clearly improves classification performance and has good adaptability.
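As an illustration of the described pipeline, the sketch below shows one plausible way to superimpose a saliency (visual attention) map on the original image and to collect multilayer convolutional features with forward hooks. The superposition rule, the function names, and the choice of layers are assumptions; the paper's exact operations and its Places-pretrained network are not reproduced here.

```python
import torch
import torch.nn as nn

def enhance_attention_region(image: torch.Tensor, saliency: torch.Tensor) -> torch.Tensor:
    """Superimpose a visual attention (saliency) map on the original image.

    image:    (3, H, W) tensor with values in [0, 1]
    saliency: (1, H, W) tensor in [0, 1] from a context-based saliency detector
    The paper's exact superposition is not specified here; a simple pixel-wise
    re-weighting is assumed.
    """
    return (image * (1.0 + saliency)).clamp(0.0, 1.0)

def multilayer_features(cnn: nn.Module, layer_names: list, image: torch.Tensor) -> torch.Tensor:
    """Collect feature maps from several convolutional layers via forward hooks.

    `cnn` is assumed to be a network pretrained on the Places scene dataset;
    the layer names depend on that network and are supplied by the caller.
    """
    feats, handles = [], []
    modules = dict(cnn.named_modules())
    for name in layer_names:
        handles.append(modules[name].register_forward_hook(
            lambda _module, _inputs, output: feats.append(output.detach())
        ))
    with torch.no_grad():
        cnn(image.unsqueeze(0))  # add a batch dimension
    for handle in handles:
        handle.remove()
    # global-average-pool each map and concatenate into one deep feature vector
    return torch.cat([f.mean(dim=(-2, -1)) for f in feats], dim=1)
```

The same feature extraction would be applied to the original, detection, and enhancement images, with the resulting vectors concatenated before the final classifier.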
Automatically describing the content of an image is a challenging task that lies at the intersection of natural language processing and computer vision. Current image captioning models can describe objects that are frequently seen in the training set very well, but they fail to describe novel objects that are rarely seen or never seen in the training set. Although describing novel objects is important for practical applications, only a few works investigate this issue, and those works consider only rarely seen objects while ignoring never-seen objects. Meanwhile, the number of never-seen objects is larger than the number of frequently seen and rarely seen objects combined. In this paper, we propose two blocks that incorporate external knowledge into the captioning model to address this issue. First, in the encoding phase, the Semi-Fixed Word Embedding block improves the word embedding layer so that the captioning model can understand the meaning of arbitrary visual words rather than a fixed vocabulary. Second, the Candidate Sentences Selection block chooses candidate sentences by semantic matching rather than by probability, avoiding the influence of never-seen words. In experiments, we qualitatively analyze the proposed blocks and quantitatively evaluate several captioning models equipped with them on the Nocaps dataset. The experimental results show the effectiveness of the proposed blocks for novel objects, especially never-seen objects, for which CIDEr and SPICE improve by 13.1% and 12.0%, respectively.
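To make the Semi-Fixed Word Embedding idea concrete, here is a hedged PyTorch sketch in which frozen external embeddings cover an open vocabulary (including never-seen words) and a trainable correction is added only for words in the training vocabulary. The class name, the zero-initialised correction, and the vocabulary layout are illustrative assumptions rather than the paper's exact design.

```python
import torch
import torch.nn as nn

class SemiFixedEmbedding(nn.Module):
    """Illustrative semi-fixed word embedding.

    Assumption: every word gets a frozen vector taken from external pretrained
    embeddings (so the decoder can interpret arbitrary visual words, including
    never-seen ones), while only words in the training vocabulary additionally
    receive a small trainable correction. A sketch of the idea, not the
    paper's exact block.
    """

    def __init__(self, pretrained: torch.Tensor, train_vocab_size: int):
        super().__init__()
        # frozen external embeddings for the full (open) vocabulary
        self.fixed = nn.Embedding.from_pretrained(pretrained, freeze=True)
        # trainable part, defined only for the closed training vocabulary
        self.tuned = nn.Embedding(train_vocab_size, pretrained.size(1))
        nn.init.zeros_(self.tuned.weight)
        self.train_vocab_size = train_vocab_size

    def forward(self, token_ids: torch.Tensor) -> torch.Tensor:
        emb = self.fixed(token_ids)
        in_train = token_ids < self.train_vocab_size
        # add the learned correction only for in-vocabulary tokens
        safe_ids = torch.where(in_train, token_ids, torch.zeros_like(token_ids))
        return emb + self.tuned(safe_ids) * in_train.unsqueeze(-1).float()
```

Because the fixed part is shared with the external embedding space, a never-seen word still maps to a meaningful vector at test time, which is the property the block relies on.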