Radiology report generation is a critical task for radiologists, and automating the process can significantly reduce their workload. Producing accurate and reliable radiology reports, however, requires radiologists with sufficient experience and time to review medical images; as a result, many reports end with ambiguous conclusions, leading to additional testing and diagnostic procedures for patients. To address this, we propose an encoder-decoder-based deep learning framework that generates diagnostic radiology reports from chest X-ray images. The framework incorporates a novel text modelling and visual feature extraction strategy designed to capture the essential visual and textual information needed to produce more accurate and reliable reports. We have also developed a dynamic web portal that accepts a chest X-ray as input and returns a generated radiology report. We conducted an extensive analysis of our model and compared its performance with other state-of-the-art deep learning approaches. On the Indiana University dataset, our model attains higher BLEU scores than existing models (BLEU-1 = 0.588, BLEU-2 = 0.4325, BLEU-3 = 0.4017, BLEU-4 = 0.3860), indicating a significant improvement. These results underscore the potential of our framework to enhance the accuracy and reliability of radiology reports, leading to more efficient and effective medical treatment.
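The BLEU-n scores reported above measure the modified n-gram precision of a generated report against a reference report, scaled by a brevity penalty. As a minimal sketch of how such a score can be computed (this is an illustrative stdlib-only implementation, not the evaluation code used in the study, which may use a library such as NLTK with smoothing):

```python
import math
from collections import Counter

def ngrams(tokens, n):
    """All contiguous n-grams of a token list."""
    return [tuple(tokens[i:i + n]) for i in range(len(tokens) - n + 1)]

def bleu(reference, candidate, max_n=4):
    """Sentence-level BLEU: geometric mean of modified n-gram
    precisions (n = 1..max_n) times a brevity penalty."""
    precisions = []
    for n in range(1, max_n + 1):
        cand_counts = Counter(ngrams(candidate, n))
        ref_counts = Counter(ngrams(reference, n))
        # Clip each candidate n-gram count by its count in the reference.
        overlap = sum(min(c, ref_counts[g]) for g, c in cand_counts.items())
        total = max(sum(cand_counts.values()), 1)
        precisions.append(overlap / total)
    if min(precisions) == 0:
        return 0.0  # unsmoothed BLEU is zero if any precision is zero
    log_mean = sum(math.log(p) for p in precisions) / max_n
    # Brevity penalty discourages overly short candidates.
    bp = 1.0 if len(candidate) > len(reference) else \
        math.exp(1 - len(reference) / max(len(candidate), 1))
    return bp * math.exp(log_mean)

# Example: comparing a hypothetical generated finding to a reference.
ref = "no acute cardiopulmonary abnormality is seen".split()
hyp = "no acute cardiopulmonary abnormality seen".split()
score = bleu(ref, hyp)
```

Using BLEU-1 alone corresponds to `max_n=1` (unigram precision), which is why BLEU-1 is typically the highest of the four reported values.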