In today's world, with the dramatic growth of digital data and the variety of available information, image retrieval plays an increasingly important role in people's lives. Imagine, for example, a shopper in an online store looking for a dress similar to a friend's, but with specific differences such as blue flowers and red flags. The main challenge in this field is the semantic gap between the user's intention and the machine's understanding. Image retrieval with multimodal (image-text) queries aims to narrow this gap: an effective retrieval system searches for images that match the modifications specified in a text expression applied to a reference image. Existing methods rely on convolutional neural networks (CNNs) and recurrent neural networks (e.g., LSTMs) to extract image and text features, and these features do not capture the semantic relationships between image and text well. To improve these representations, a transformer-based architecture is presented. In this research, a Vision Transformer (ViT) is used to extract image features and Bidirectional Encoder Representations from Transformers (BERT) is used to extract text features. The image and text features are then combined, and an attention mechanism is applied to both modalities to enhance visual and textual understanding. The proposed model is inspired by deep metric learning and seeks to reduce the distance between the composed image-text query and the target images that match the user's intent. In addition, a symmetric constraint is used to further improve the model's performance. The evaluation results show that the proposed approach outperforms the baseline model in multimodal query-based image retrieval. To reproduce the results of this research, our source code is available at: https://github.com/smb-h/mqirtn
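
The sketch below illustrates, in broad strokes, the kind of pipeline the abstract describes: a ViT image encoder, a BERT text encoder, attention-based fusion of the two features, and a symmetric batch-wise metric-learning loss. It is not the released implementation (see the repository above for that); all module names, dimensions, and hyperparameters here are illustrative assumptions.

import torch
import torch.nn as nn
import torch.nn.functional as F
from torchvision.models import vit_b_16
from transformers import BertModel

class ComposedQueryModel(nn.Module):
    """Minimal sketch: compose a reference image with a modification text."""
    def __init__(self, dim=512):
        super().__init__()
        self.vit = vit_b_16(weights=None)      # image encoder (pretrained weights could be loaded)
        self.vit.heads = nn.Identity()         # expose the 768-d CLS feature
        self.bert = BertModel.from_pretrained("bert-base-uncased")  # text encoder
        self.img_proj = nn.Linear(768, dim)
        self.txt_proj = nn.Linear(768, dim)
        # attention over the image and text features to refine the composed query
        self.attn = nn.MultiheadAttention(dim, num_heads=8, batch_first=True)

    def encode_image(self, images):
        # embedding of a candidate (target) image
        return F.normalize(self.img_proj(self.vit(images)), dim=-1)

    def encode_query(self, images, input_ids, attention_mask):
        img = self.img_proj(self.vit(images))                              # (B, dim)
        txt = self.txt_proj(
            self.bert(input_ids=input_ids,
                      attention_mask=attention_mask).last_hidden_state[:, 0])  # CLS token
        # let the image feature attend to the text feature, then fuse
        fused, _ = self.attn(img.unsqueeze(1), txt.unsqueeze(1), txt.unsqueeze(1))
        return F.normalize(img + fused.squeeze(1), dim=-1)

def symmetric_contrastive_loss(query, target, tau=0.07):
    """Batch-wise softmax over cosine similarities, applied in both directions
    (query-to-target and target-to-query) as a simple symmetric constraint."""
    logits = query @ target.t() / tau
    labels = torch.arange(query.size(0), device=query.device)
    return 0.5 * (F.cross_entropy(logits, labels) +
                  F.cross_entropy(logits.t(), labels))

In training, each batch would pair a composed query (reference image plus modification text) with its target image; the symmetric loss pulls matching pairs together in the embedding space while pushing non-matching pairs apart, which is the deep-metric-learning objective the abstract refers to.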