This study presents a comparative analysis of transformer models for text classification, using a hybrid approach that combines rule-based regular expressions with fine-tuned neural models. Regular expressions are first employed to annotate sentences at low cost, providing an efficient alternative to manual labeling. The annotated dataset, comprising around 33,000 instances across three classes (Reminder, Scheduled Activity, General), and also restructured into two classes (Reminder, General) by merging “Scheduled Activity” into “Reminder”, is then used to fine-tune several transformer models: DistilBERT, BERT, RoBERTa, ALBERT, Electra, Ernie 2.0, XLNet, and GPT-2. During fine-tuning, all layers are frozen except the final one, allowing the models to learn nuanced linguistic patterns while mitigating overfitting. Results reveal that DistilBERT, despite its smaller size (66 million parameters), outperforms larger models such as BERT and GPT-2 in accuracy, precision, recall, and F1-score; this efficiency is attributed to knowledge distillation, which retains essential features while reducing computational demands. Notably, DistilBERT achieved an overall accuracy of 0.86, significantly surpassing BERT’s 0.55, GPT-2’s 0.36, XLNet’s 0.51, Ernie 2.0’s 0.72, Electra’s 0.74, ALBERT’s 0.72, and RoBERTa’s 0.71. We further demonstrate that the proposed method outperforms generative large language models, namely GPT-3.5, GPT-4, GPT-4o, and LLaMA-3 70B, in both zero-shot and one-shot settings. The study highlights the importance of model size and architecture in achieving strong performance, especially in resource-constrained scenarios, and underscores the efficacy of combining rule-based methods with transformer models for text annotation: a balanced approach that leverages both handcrafted rules and learned representations generalizes better than relying on either technique alone. The proposed hybrid method thus offers a robust and adaptable solution for sentence annotation pipelines, improving performance in diverse natural language processing applications with limited labeled data. Code is available at https://github.com/arafet/Text-annotation-using-rule-based-method-and-Transformers.
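To make the rule-based annotation step concrete, the following is a minimal sketch of how regex labeling of this kind could be implemented. The patterns, the `RULES` table, and the `annotate` helper are illustrative assumptions, not the paper’s actual rules (those are available in the linked repository).

```python
import re

# Hypothetical patterns for illustration only; the real rule set is in the repo.
RULES = [
    (re.compile(r"\bremind (me|us)\b|\bdon'?t forget\b", re.IGNORECASE), "Reminder"),
    (re.compile(r"\b(meeting|appointment|class)\b.*\b(at|on)\b", re.IGNORECASE), "Scheduled Activity"),
]

def annotate(sentence: str) -> str:
    """Return the label of the first matching rule, falling back to 'General'."""
    for pattern, label in RULES:
        if pattern.search(sentence):
            return label
    return "General"

print(annotate("Remind me to call mom tomorrow"))       # Reminder
print(annotate("Team meeting at 3pm on Friday"))        # Scheduled Activity
print(annotate("The weather is nice today"))            # General
```

For the two-class setup described above, sentences labeled “Scheduled Activity” would simply be remapped to “Reminder” before fine-tuning.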
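Likewise, a minimal sketch of the layer-freezing setup, assuming the Hugging Face `transformers` library and reading “all layers except the final one” as freezing the pretrained transformer body so that only the classification head is updated. The checkpoint name and `num_labels` here are assumptions for illustration, not the paper’s exact configuration.

```python
from transformers import AutoModelForSequenceClassification

# Assumed checkpoint; the paper fine-tunes several architectures the same way.
model = AutoModelForSequenceClassification.from_pretrained(
    "distilbert-base-uncased", num_labels=3
)

# Freeze the pretrained DistilBERT body so gradients flow only through
# the final classification layers during fine-tuning.
for param in model.distilbert.parameters():
    param.requires_grad = False

# Only the head parameters (e.g. pre_classifier.*, classifier.*) stay trainable.
print([name for name, p in model.named_parameters() if p.requires_grad])
```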