Although Handwritten Mathematical Expression Recognition (HMER) is a subset of Optical Character Recognition (OCR), the two fields have largely been treated in isolation from each other. Both have benefited greatly from the rapid progress in Artificial Intelligence and Machine Learning, with state-of-the-art architectures now being applied to each. The importance of this domain is underscored by the fact that converting handwritten mathematical literature to a digital format is extremely time-consuming and often a manual procedure. HMER in particular has remained a challenging problem since its inception, with worldwide competitions such as the Competition on Recognition of Online Handwritten Mathematical Expressions (CROHME) showing that progress in the field is slow. This is largely because the 'language' of mathematics contains an enormous number of variables and spatial dependencies. To this end, the authors present Doc2Tex, a novel approach for converting handwritten math-based documents to a digital format, achieving an expression rate (0-error) of up to 64.58%. The proposed approach addresses this problem through two components: a model that leverages the Transformer architecture for both image understanding and wordpiece-level text generation (fine-tuned and vanilla), and a suitable probability indicator that differentiates mathematical expressions from English text. The vocabulary of the proposed approach covers a wide variety of mathematical symbols and expressions, including complex symbols from the calculus and statistical probability domains; such coverage is absent from most of the prior works examined by the authors. Metrics comparing the presented approach against the state of the art in HMER and OCR tasks are provided.
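
To make the second component more concrete, the sketch below shows one plausible interpretation of a "probability indicator": decoding a line image with a generic TrOCR-style vision-encoder-decoder Transformer and using the average log-probability of the generated wordpieces as a confidence score for routing between English text and mathematical expressions. The model checkpoint, threshold, and routing heuristic are illustrative assumptions, not the paper's released system.

```python
# Hypothetical sketch: route a handwritten line image to either a plain-text
# recognizer or a math recognizer using an average-log-probability indicator.
# The checkpoint name and threshold are assumptions for illustration only.
from PIL import Image
from transformers import TrOCRProcessor, VisionEncoderDecoderModel

processor = TrOCRProcessor.from_pretrained("microsoft/trocr-base-handwritten")
model = VisionEncoderDecoderModel.from_pretrained("microsoft/trocr-base-handwritten")


def recognize_with_confidence(image: Image.Image):
    """Decode a line image; return (text, mean wordpiece log-probability)."""
    pixel_values = processor(images=image, return_tensors="pt").pixel_values
    outputs = model.generate(
        pixel_values,
        max_new_tokens=128,
        output_scores=True,
        return_dict_in_generate=True,
    )
    text = processor.batch_decode(outputs.sequences, skip_special_tokens=True)[0]
    # Per-token log-probabilities of the generated wordpieces.
    transition_scores = model.compute_transition_scores(
        outputs.sequences, outputs.scores, normalize_logits=True
    )
    return text, transition_scores.mean().item()


def route_line(image: Image.Image, threshold: float = -1.0) -> str:
    """Tag a line image as 'english' or 'math'.

    Assumed heuristic: a low average log-probability under the plain-text
    model suggests the line is a mathematical expression and should be
    handed to a math-specific recognizer instead.
    """
    _, mean_logprob = recognize_with_confidence(image)
    return "english" if mean_logprob >= threshold else "math"
```

In this reading, the text recognizer's own decoding confidence acts as the discriminator, so no separate classifier needs to be trained; whether Doc2Tex realizes the indicator this way or with a dedicated scoring model is not specified in the abstract.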