Critical findings in radiology reports describe life-threatening conditions that must be communicated promptly to physicians for timely patient management. Flagging radiology reports containing such findings could facilitate their opportune communication. With advancements in natural language processing (NLP), large language models (LLMs) can be trained with task-specific instructions and examples to mine information from narrative texts. We believe similar methods can be applied to radiology reports to identify and extract critical findings. However, because such critical events are rare, manually labeled datasets of critical findings in radiology reports are scarce. To overcome this limitation, we train instruction-tuned Mistral-based language models in a two-phase weakly supervised fine-tuning setup on unlabeled radiology reports from Mayo Clinic (n=15,000). The weakly fine-tuned model is then used to automatically extract critical terms from internal and external test datasets, Mayo Clinic (n=80) and MIMIC-III (n=123) respectively, and is evaluated against expert annotations. We also evaluate model performance on a large set of MIMIC-IV reports (n=5,000) using automated LLM-aided evaluation metrics, G-Eval and Prometheus. For both manual and LLM-based evaluations, weakly supervised fine-tuning improves model performance, demonstrating successful task-specific alignment. For community use, we release the trained model under an open-source academic license.