Radiology reports cover different aspects of an imaging examination, from radiological observations to the diagnosis, for modalities such as X-rays, magnetic resonance imaging, and computed tomography scans. The abundant patient information in these reports poses two major challenges. First, radiology reports follow a free-text reporting format, so a large amount of information remains locked in unstructured text. Second, extracting important features from these reports is a significant bottleneck for machine learning models. Extracting key features such as symptoms, comparison/priors, technique, findings, and impression is particularly important because these features support decision-making about patients' health. To address these challenges, a novel architecture, CCheXR-Attention, is proposed to extract clinical features from radiological reports and classify each report as normal or abnormal based on the extracted information. We propose a modified Mogrifier long short-term memory model integrated with a multihead attention mechanism to extract the most relevant features. Experimental results on two benchmark datasets demonstrate that the proposed model surpasses state-of-the-art models.
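The abstract names the Mogrifier gating mechanism without detail. As background, the standard Mogrifier (Melis et al.) lets the input and the previous hidden state alternately gate each other for a few rounds before the LSTM cell runs; the sketch below illustrates that generic mechanism only, under assumed shapes and hypothetical weight names `Q` and `R`, and is not the authors' modified implementation.

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def mogrify(x, h, Q, R, rounds=5):
    """Generic Mogrifier step (a sketch, not the paper's modified variant):
    the input x and previous hidden state h gate each other alternately
    for a fixed number of rounds before the usual LSTM cell update.
    Q (d_x, d_h) maps h to a gate on x; R (d_h, d_x) maps x to a gate on h."""
    for i in range(1, rounds + 1):
        if i % 2 == 1:
            # odd round: hidden state modulates the input
            x = 2.0 * sigmoid(Q @ h) * x
        else:
            # even round: input modulates the hidden state
            h = 2.0 * sigmoid(R @ x) * h
    return x, h

# Toy usage with random weights (shapes are illustrative assumptions)
rng = np.random.default_rng(0)
d_x, d_h = 4, 3
x = rng.standard_normal(d_x)
h = rng.standard_normal(d_h)
Q = rng.standard_normal((d_x, d_h))
R = rng.standard_normal((d_h, d_x))
x_m, h_m = mogrify(x, h, Q, R, rounds=5)
```

The mogrified `x_m` and `h_m` would then feed a standard LSTM cell; in the proposed architecture, the resulting hidden states are further weighted by multihead attention before classification.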