While many grammar checkers are available for various languages, especially English, those that exist for the low-resource Filipino language can only effectively correct lexical errors. There is yet to be a publicly available Filipino grammar checker that can also address the more complex semantic errors. As such, this study introduces Balarila, a deep learning-based Filipino grammatical error correction (GEC) model inspired by the GECToR approach. To address the absence of training and test datasets, an automated error generation pipeline was devised to create synthetic datasets of error-free and error-filled Filipino sentences drawn from various online news outlets. Tagalog BERT and RoBERTa models were fine-tuned in two stages on this generated corpus. Evaluation metrics included precision, recall, and F0.5 scores for GEC, and a multi-class confusion matrix for grammatical error detection (GED). The top-performing model, RoBERTa Tagalog Large, achieved an F0.5 score of 70.75, while RoBERTa Tagalog Base, with an F0.5 score of 69.00, proved more cost-effective to train. The created datasets can also serve as a benchmark for Filipino grammar checker models.
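For reference, the F0.5 score reported above is the standard F-beta measure with beta = 0.5, which weights precision twice as heavily as recall; this weighting is conventional in GEC evaluation, since a correction system that introduces new errors is typically considered worse than one that misses some:

\[
F_{0.5} = \frac{(1 + 0.5^2)\, P \cdot R}{0.5^2 \cdot P + R} = \frac{1.25\, P \cdot R}{0.25\, P + R}
\]

where \(P\) denotes precision and \(R\) denotes recall over the proposed edits.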