Soft spelling mistakes are a class of mistakes that is widespread among native Arabic speakers and foreign learners alike. Some of these mistakes are typographical in nature; they arise from the orthographic variations of certain Arabic letters and the complex rules that govern their correct usage. Many writers forgo these rules and, because the letters sound identical, often confuse them. Detecting and correcting soft errors is an active area of Arabic natural language processing. In this paper, we investigate how machine learning can correct such mistakes despite the lack of sufficiently large datasets for training correction models. We generate training datasets using two proposed approaches, transformed input and stochastic error injection, applied to two acclaimed datasets that represent Classical Arabic and Modern Standard Arabic. We treat the problem as a character-level, one-to-one sequence transcription problem. With simple transformations, this one-to-one transcription remains possible even for mistakes that involve insertions and deletions. This formulation permits the use of bidirectional long short-term memory (BiLSTM) models, which are easier to train than alternatives such as encoder-decoder models. After investigating multiple alternatives, we recommend a configuration with two BiLSTM layers, trained using the stochastic error injection approach with an error injection rate of 40%. The best model corrects 96.4% of the injected errors and achieves a low character error rate of 1.28% on a real test set of soft spelling mistakes.
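The two evaluation ingredients mentioned above, stochastic error injection and the character error rate (CER), can be illustrated with a minimal sketch. The confusion sets below (hamza-on-alif variants, taa marbuta vs. haa, alif maqsura vs. yaa) are an assumed, illustrative subset of the soft-letter confusions; the function names and the exact set of letters are not taken from the paper.

```python
import random

# Illustrative (assumed) confusion sets of Arabic "soft" letters:
# alif/hamza variants, taa marbuta vs. haa, alif maqsura vs. yaa.
CONFUSION_SETS = [
    ["ا", "أ", "إ", "آ"],
    ["ة", "ه"],
    ["ى", "ي"],
]
CONFUSABLE = {c: group for group in CONFUSION_SETS for c in group}

def inject_errors(text, rate=0.4, rng=None):
    """Stochastic error injection: with probability `rate`, replace each
    confusable character with another member of its confusion set."""
    rng = rng or random.Random()
    out = []
    for ch in text:
        group = CONFUSABLE.get(ch)
        if group and rng.random() < rate:
            out.append(rng.choice([c for c in group if c != ch]))
        else:
            out.append(ch)
    return "".join(out)

def character_error_rate(reference, hypothesis):
    """CER: Levenshtein edit distance divided by reference length."""
    m, n = len(reference), len(hypothesis)
    prev = list(range(n + 1))
    for i in range(1, m + 1):
        cur = [i] + [0] * n
        for j in range(1, n + 1):
            cost = 0 if reference[i - 1] == hypothesis[j - 1] else 1
            cur[j] = min(prev[j] + 1, cur[j - 1] + 1, prev[j - 1] + cost)
        prev = cur
    return prev[n] / m
```

In this sketch, errors are pure substitutions, so the corrupted string keeps the original length; handling length-changing insertions and deletions in a one-to-one transcription requires the additional transformations discussed in the paper.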