Visual Question Answering (VQA) is a multi-modal, AI-complete task of answering natural language questions about images. The literature typically solves VQA with a three-phase pipeline: image and question featurisation, multi-modal feature fusion, and answer generation or prediction. Most prior work has concentrated on the second phase, where the multi-modal features are combined, while ignoring the effect of the individual input features. This work investigates VQA's natural language question embedding phase by proposing a new question featurisation framework based on supervised term weighting (STW) schemes. In addition, two new STW schemes that integrate text semantics, qf.cos and tf.rf.sim, are introduced to boost the framework's performance. A series of tests on the DAQUAR VQA dataset compares the new framework to conventional pre-trained word embeddings. Over the past few years, STW schemes have been widely used in text classification research; accordingly, experiments are also carried out to verify the effectiveness of the two newly proposed STW schemes on the general text classification task.
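To make the idea of supervised term weighting concrete, the following is a minimal sketch of the classic tf.rf scheme (term frequency times relevance frequency) that the proposed tf.rf.sim builds on. The toy counts and variable names are illustrative assumptions, not data from the paper; tf.rf.sim itself additionally folds in a semantic similarity term not shown here.

```python
import math

def rf(a: int, c: int) -> float:
    """Relevance frequency: rewards terms concentrated in the positive
    class. a = positive-class documents containing the term,
    c = negative-class documents containing it (floored at 1)."""
    return math.log2(2 + a / max(1, c))

def tf_rf(tf: int, a: int, c: int) -> float:
    """tf.rf supervised term weight: raw term frequency scaled by rf."""
    return tf * rf(a, c)

# Toy supervised corpus: class labels (e.g. answer categories in VQA)
# provide the supervision an STW scheme exploits.
# A discriminative term appears in 8 positive vs 2 negative documents;
# a common term appears equally often (10 vs 10) in both classes.
w_discriminative = tf_rf(tf=3, a=8, c=2)    # boosted weight
w_common = tf_rf(tf=3, a=10, c=10)          # close to plain tf
```

The supervised signal is what separates STW from unsupervised schemes such as tf.idf: the same raw frequency receives a higher weight when the term co-occurs with one class.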
Visual question answering (VQA) demands simultaneous proficiency in image interpretation and natural language understanding to correctly answer questions about an image. Existing VQA solutions focus either on improving the joint multi-modal embedding or on fine-tuning visual understanding through attention. In contrast to this trend, this research investigates the feasibility of an object-assisted language understanding strategy, the semantic object ranking (SOR) framework, for VQA. The proposed system refines the natural language question representation with the help of detected visual objects. For multi-CNN image representation, the system employs canonical correlation analysis (CCA). The model is assessed using accuracy and WUPS measures on the DAQUAR dataset, where the results show that it outperforms the prior state of the art by a significant margin. In addition to the quantitative analysis, illustrative examples are supplied to examine the reasons for the performance improvement.