“…It shows promising performance on unseen NLP tasks (Papadopoulos, Panagakis, Koubarakis, & Nicolaou, 2022) and in medical report processing (Donnelly, Grzeszczuk, & Guimaraes, 2022). BERT also performs well in text summarization (Patil, Rao, Reddy, Ram, & Meena, 2022), audio captioning (Liu, Mei et al., 2022), and natural language analysis (Guven & Unalir, 2022). Beyond these applications, there are three reasons why BERT is likely to be a game-changer in NLP: it is a bidirectional model, and it combines the Masked Language Model (MLM) and Next Sentence Prediction (NSP) pre-training objectives to understand context-heavy text.…”
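To make the MLM objective mentioned above concrete, the sketch below illustrates BERT-style token masking: 15% of input tokens are selected as prediction targets, and of those, 80% are replaced with `[MASK]`, 10% with a random token, and 10% are left unchanged. The function name, toy vocabulary, and seed are illustrative assumptions, not part of any particular library.

```python
import random

def mask_tokens(tokens, mask_rate=0.15, seed=0):
    """Sketch of BERT-style MLM masking (hypothetical helper, not a library API).

    Selects ~15% of tokens as targets; of those, 80% become [MASK],
    10% a random vocabulary token, 10% stay unchanged. The model is
    trained to predict the original token at each selected position.
    """
    rng = random.Random(seed)
    vocab = ["the", "cat", "sat", "on", "mat"]  # toy vocabulary (assumption)
    masked, labels = [], []
    for tok in tokens:
        if rng.random() < mask_rate:
            labels.append(tok)  # original token is the training target
            r = rng.random()
            if r < 0.8:
                masked.append("[MASK]")      # 80%: replace with [MASK]
            elif r < 0.9:
                masked.append(rng.choice(vocab))  # 10%: random token
            else:
                masked.append(tok)           # 10%: keep original
        else:
            labels.append(None)  # position not predicted during training
            masked.append(tok)
    return masked, labels

masked, labels = mask_tokens(["the", "cat", "sat", "on", "the", "mat"] * 5)
```

Because both left and right context remain visible around each `[MASK]`, the objective forces a bidirectional representation, in contrast to left-to-right language models.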