This study assessed the use of pre-trained language models for classifying the cancer type described in radiology reports as lung cancer (class 1), esophageal cancer (class 2), or other cancer (class 0). We compared BERT, a general-purpose model, with ClinicalBERT, a clinical domain-specific model. The models were trained on radiology reports from our hospital and validated on a hold-out set from the same hospital and on a public dataset (MIMIC-III). We used 4064 hospital radiology reports: 3902 for training (further divided into a random 70:30 train–test split) and 162 as a hold-out set; an additional 542 reports from MIMIC-III were used for independent external validation. Ground-truth labels were assigned independently by two expert radiologists. On internal validation, the F1 scores for classes 0, 1, and 2 were 0.62, 0.87, and 0.90 for BERT, and 0.93, 0.97, and 0.97 for ClinicalBERT, respectively. On external validation, the F1 scores for classes 0, 1, and 2 were 0.66, 0.37, and 0.46 for BERT, and 0.68, 0.50, and 0.64 for ClinicalBERT, respectively. ClinicalBERT outperformed BERT, demonstrating the benefit of domain-specific pre-training for this task. The higher performance on lung cancer may be due to class imbalance, with lung cancer reports being more frequent in the dataset.
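
To make the setup concrete, the following is a minimal sketch, not the authors' code, of fine-tuning a ClinicalBERT-style checkpoint for the 3-class report classification described above and computing per-class F1 scores. The checkpoint name, the toy example reports, and all hyperparameters are illustrative assumptions; the study's actual data, preprocessing, and training configuration are not shown in the abstract.

```python
# Sketch only: fine-tune a clinical BERT checkpoint for 3-class cancer-type
# classification of radiology reports, assuming the Hugging Face transformers,
# datasets, and scikit-learn libraries. Labels: 0 = other, 1 = lung, 2 = esophageal.
import numpy as np
from datasets import Dataset
from sklearn.metrics import f1_score
from transformers import (AutoModelForSequenceClassification, AutoTokenizer,
                          Trainer, TrainingArguments)

MODEL_NAME = "emilyalsentzer/Bio_ClinicalBERT"  # assumed ClinicalBERT checkpoint

# Toy stand-ins for hospital reports (illustrative text, not real data).
reports = [
    "CT chest: spiculated mass in the right upper lobe, suspicious for malignancy.",
    "Barium swallow: irregular narrowing of the distal esophagus.",
    "MRI brain: enhancing lesion in the left parietal lobe.",
]
labels = [1, 2, 0]

tokenizer = AutoTokenizer.from_pretrained(MODEL_NAME)

def tokenize(batch):
    # Fixed-length padding keeps the default data collator simple.
    return tokenizer(batch["text"], truncation=True, padding="max_length", max_length=256)

ds = Dataset.from_dict({"text": reports, "labels": labels}).map(tokenize, batched=True)
split = ds.train_test_split(test_size=0.3, seed=42)  # random 70:30 split, as in the study

model = AutoModelForSequenceClassification.from_pretrained(MODEL_NAME, num_labels=3)

def compute_metrics(eval_pred):
    logits, y_true = eval_pred
    y_pred = np.argmax(logits, axis=-1)
    # Per-class F1, matching the class-wise scores reported in the abstract.
    per_class = f1_score(y_true, y_pred, average=None, labels=[0, 1, 2])
    return {f"f1_class{c}": score for c, score in zip([0, 1, 2], per_class)}

trainer = Trainer(
    model=model,
    args=TrainingArguments(output_dir="clinicalbert-cancer",
                           num_train_epochs=3,
                           per_device_train_batch_size=8),
    train_dataset=split["train"],
    eval_dataset=split["test"],
    compute_metrics=compute_metrics,
)
trainer.train()
print(trainer.evaluate())  # reports per-class F1 on the held-out split
```

Swapping MODEL_NAME for a general-purpose checkpoint such as "bert-base-uncased" reproduces the BERT baseline under the same training loop, which is the comparison the study makes.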