Image captioning, the process of generating a natural-language description of an image, has become a widely used application of deep neural network models. It requires recognizing the salient objects in an image, their attributes, and the relationships among them, and then producing syntactically and semantically correct sentences. In this paper, we present a deep
learning model to describe images and generate captions using
computer vision and machine translation techniques. The model detects the objects present in an image, recognizes the relationships between them, and generates a caption. The dataset used is Flickr8k, the implementation is in Python 3, and transfer learning is applied through the pre-trained Xception model to carry out the proposed experiment. The paper also elaborates on the structure and function of the neural networks involved. Generating image captions is an important task at the intersection of computer vision and natural language processing.
Image caption generators find applications in image segmentation, as used by Facebook and Google Photos, and their use extends naturally to video frames. They can automate work that would otherwise require a person to interpret images, and they hold immense promise for assisting visually impaired people.
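To make the transfer-learning step concrete, the following is a minimal Python sketch of how a pre-trained Xception network can serve as a frozen feature extractor for captioning. It assumes the Keras API bundled with TensorFlow; the pooling choice, function name, and file path are illustrative assumptions, not details taken from the paper.

import numpy as np
from tensorflow.keras.applications.xception import Xception, preprocess_input
from tensorflow.keras.preprocessing.image import load_img, img_to_array

# Load Xception pre-trained on ImageNet, dropping the classification head;
# global average pooling turns the final feature maps into a 2048-d vector.
encoder = Xception(weights="imagenet", include_top=False, pooling="avg")

def extract_features(image_path):
    """Return a 2048-dimensional feature vector for one image."""
    img = load_img(image_path, target_size=(299, 299))  # Xception's input size
    x = img_to_array(img)
    x = preprocess_input(np.expand_dims(x, axis=0))     # scale pixels to [-1, 1]
    return encoder.predict(x, verbose=0)[0]

# Hypothetical usage with a Flickr8k image:
# features = extract_features("Flickr8k/images/example.jpg")

In an encoder-decoder setup of the kind the paper describes, the frozen feature vector would then condition a language-model decoder that emits the caption word by word.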