In the current digital era, common spell checkers struggle with languages such as Bengali, whose script and orthography differ from English. In response, we developed an improved BERT-based spell checker that incorporates a CNN sub-model (Semantic Network). Our novelty, which we term progressive stacking, concentrates on improving BERT model training while expediting the correction process. When comparing shallow and deep variants, we found that deeper models can require less training time, since they are grown from already-trained shallow ones rather than trained from scratch. This technique shows potential for improving spelling correction. As a test set, we categorized and used a 6,300-word dataset supplied by Nayadiganta Mohiuddin, a portion of which contained spelling errors. The most frequent terms matched those found in the Prothom-Alo artificial-error dataset.
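To make the progressive-stacking idea concrete, the following is a minimal sketch of how a shallow BERT can be grown into a deeper one by copying its trained encoder layers on top of themselves. It assumes the HuggingFace `transformers` BERT implementation; the function name `grow_by_stacking` and the 3-to-6-layer setup are illustrative choices, not details taken from the paper.

```python
# Hypothetical sketch of progressive stacking using HuggingFace BERT.
import copy
from transformers import BertConfig, BertModel

def grow_by_stacking(shallow: BertModel) -> BertModel:
    """Double a trained shallow BERT's depth by stacking copies
    of its encoder layers on top of themselves."""
    old_layers = shallow.encoder.layer
    num_old = len(old_layers)

    new_config = copy.deepcopy(shallow.config)
    new_config.num_hidden_layers = 2 * num_old
    deep = BertModel(new_config)

    # Reuse the shallow model's embeddings unchanged.
    deep.embeddings.load_state_dict(shallow.embeddings.state_dict())

    # Initialize layer i and layer i + num_old of the deep model
    # from layer i of the shallow model.
    for i, layer in enumerate(old_layers):
        deep.encoder.layer[i].load_state_dict(layer.state_dict())
        deep.encoder.layer[i + num_old].load_state_dict(layer.state_dict())
    return deep

# Usage: train a shallow model briefly, stack it, then continue training.
shallow = BertModel(BertConfig(num_hidden_layers=3))
# ... train `shallow` for a few epochs ...
deep = grow_by_stacking(shallow)  # 6-layer model, warm-started
# ... continue training `deep` to convergence ...
```

Because the deeper model starts from weights that already encode useful representations, it typically needs fewer update steps to converge than an identically sized model trained from random initialization, which is the source of the training-time savings described above.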