A novel data fusion technique is proposed that supports text-based and content-based image retrieval by combining heterogeneous features. The user needs only a single click on a query image, and the images recovered by the content-based search are re-ranked according to their visual and textual similarities to the query image. Textual and visual expansions are integrated to capture user intention without additional human feedback: expanded keywords help enlarge the set of positive example images and broaden the image pool to include more relevant images. A set of visual features that are both efficient and effective for image search is chosen. The n-dimensional feature vectors for colour and texture are each reduced to a single dimension, which is used to measure similarity to the query image with suitable distance metrics. Furthermore, only the images returned by the text-based search and the re-ranking process are compared at run time to find similar images, rather than the entire database, which considerably reduces computational complexity and improves search efficiency. With improved feature extraction capturing textual and visual similarities, the proposed one-click image search framework provides efficient automated retrieval of similar images, yielding promising results and improved retrieval efficiency.
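The re-ranking step described above can be sketched as follows. This is a minimal illustration, not the paper's exact method: the reduction of each n-dimensional feature vector to a single scalar is assumed here to be the mean, the distance metric is assumed to be the absolute difference, and the colour and texture distances are simply summed; the real system may use different choices for each.

```python
# Hedged sketch of one-click re-ranking: only the text-retrieved candidate
# pool is compared to the query image, never the whole database.

def reduce_feature(vector):
    """Collapse an n-dimensional feature vector to one scalar (assumed: mean)."""
    return sum(vector) / len(vector)

def distance(a, b):
    """Assumed distance metric: absolute difference of the reduced features."""
    return abs(a - b)

def rerank(query_features, candidates):
    """Re-rank candidates by similarity to the query image.

    query_features and each candidate entry are dicts holding 'colour' and
    'texture' n-dimensional feature vectors (hypothetical layout). Returns
    candidate ids ordered by combined colour + texture distance, closest first.
    """
    q_colour = reduce_feature(query_features["colour"])
    q_texture = reduce_feature(query_features["texture"])
    scored = []
    for cand_id, feats in candidates.items():
        d = (distance(q_colour, reduce_feature(feats["colour"]))
             + distance(q_texture, reduce_feature(feats["texture"])))
        scored.append((d, cand_id))
    scored.sort()
    return [cand_id for _, cand_id in scored]

# Example: three images returned by the text-based search stage.
query = {"colour": [1, 1, 1], "texture": [2, 2, 2]}
pool = {
    "a": {"colour": [1, 1, 1], "texture": [2, 2, 2]},   # identical to query
    "b": {"colour": [5, 5, 5], "texture": [2, 2, 2]},   # colour differs
    "c": {"colour": [1, 1, 1], "texture": [10, 10, 10]},  # texture differs more
}
print(rerank(query, pool))  # → ['a', 'b', 'c']
```

Because the comparison runs only over the text-retrieved pool, its cost grows with the pool size rather than the database size, which is the efficiency gain the abstract claims.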