Some examples of such widely known terms used to threaten are "blood, kill, murder, death, and stab". To detect threatening language, the computational linguistics community has been focusing on online platforms like YouTube, Twitter, and blogs [2, 22, 24-28, 31, 36, 38-41]. The studies summarized below illustrate the languages, features, classification models, and data sources used in prior work.

| Ref. | Language | Features | Models | Data source |
|------|----------|----------|--------|-------------|
| [25] | English | BoW, char n-grams | SVM, LR, CNN | YouTube |
| [26] | English | BoW, word n-grams (2, 3, 5) | SVM, NB | YouTube |
| [15] | English | BoW, GloVe, fastText | 1D-CNN, LSTM, BiLSTM | Twitter |
| [27] | English | unigrams | SVM, CNN, BiLSTM | Twitter |
| [28] | English | word n-grams (1-8) | SVM | Twitter |
| [6] | English | Latent Dirichlet Allocation (LDA) | LR | Online comments |
| [1] | English | word n-grams, char n-grams | NB, SVM | Twitter |
| [29] | English | word n-grams (3-8), char n-grams (1-3) | CNN, RNN, RF, NB, SVM | Twitter |
| [30] | English | word n-grams | SVM (linear, polynomial, radial) | Twitter, articles |
| [31] | English | abusive and non-abusive word list | k-means | Twitter, blogs |
| [32] | English, Portuguese | hateword2vec, hatedoc2vec, unigrams | NB, SVM | YouTube |
| [22] | Arabic | word n-grams | SVM | Twitter |
| [33] | Spanish | word n-grams, char n-grams | LR | Twitter |
| [34] | Indonesian | Latent Dirichlet Allocation (LDA) | - | Twitter, Facebook, Reddit |
| [35] | Danish, English | char n-grams | LR, BiLSTM | Twitter |
| [17] | German | Wikipedia embeddings | CNN | Twitter |
| [19] | Italian | BERT tokens | AlBERTo | Blogs |
| [36] | Japanese | word n-grams (1-5) | SVM | Facebook |
| [37] | Bangla | word n-grams (1-3) | MNB, SVM, CNN, LSTM | Twitter, Instagram |
| [37] | Turkish | - | MNB, SVM, DT (C4.5), KNN | Facebook, Instagram |
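Many of the surveyed systems pair sparse lexical features (BoW, word or character n-grams) with linear classifiers such as SVM or LR. As a rough illustration only, the sketch below shows one such character n-gram + linear SVM baseline in scikit-learn; the texts, labels, and hyperparameters are placeholder assumptions for this sketch and are not taken from any of the cited studies.

```python
# Minimal sketch of a char n-gram + SVM baseline of the kind reported in
# several of the studies above. Toy data only; real work would use the
# annotated corpora described in the cited papers.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.pipeline import Pipeline
from sklearn.svm import LinearSVC

texts = [
    "I will find you and hurt you",      # toy threatening example
    "have a great day everyone",         # toy neutral example
    "you deserve to be stabbed",         # toy threatening example
    "looking forward to the weekend",    # toy neutral example
]
labels = [1, 0, 1, 0]  # 1 = threatening, 0 = non-threatening

pipeline = Pipeline([
    # character n-grams (1-3) within word boundaries are robust to
    # spelling variation and deliberate obfuscation of threat words
    ("features", TfidfVectorizer(analyzer="char_wb", ngram_range=(1, 3))),
    ("classifier", LinearSVC()),
])

pipeline.fit(texts, labels)
print(pipeline.predict(["someone should kill him"]))
```

Swapping the vectorizer for word n-grams or the classifier for logistic regression reproduces most of the other feature/model combinations listed in the table.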