2022
DOI: 10.33640/2405-609x.3241
Improving Prediction of Arabic Fake News Using Fuzzy Logic and Modified Random Forest Model

Cited by 7 publications (9 citation statements)
References 0 publications
“…The embedding process translates semantic meaning into geometric meaning; Word2Vec [8,23,25], Global Vectors for Word Representation (GloVe) [18], n-grams [11,26], word embeddings [12], and BOW [7] are the pioneering word-representation approaches. Classification step: different researchers have applied a range of machine learning and deep learning methods, including machine learning algorithms such as SVM [3,4,7,9,11,14,15,18,21,24,26,37-39], Decision Tree [1,4,13,26], logistic regression [15,20,25], Naïve Bayes [9,40], random forest [25,41,42], and XGBoost [25]. More recently, deep learning methods have been used, such as CNN [12,18,43], a hybrid CNN-LSTM [25], and LRCN [12].…”
Section: Methods (mentioning)
confidence: 99%
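The pipeline families this excerpt lists (bag-of-words / TF-IDF style features followed by classical classifiers such as SVM and random forest) can be sketched as below. This is a minimal scikit-learn illustration with placeholder data, not the cited paper's fuzzy-logic / modified-random-forest implementation; the texts, labels, and parameters are assumptions.

```python
# Illustrative sketch: TF-IDF word/n-gram features feeding classical classifiers,
# mirroring the method families named in the excerpt (not the paper's own code).
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.ensemble import RandomForestClassifier
from sklearn.svm import LinearSVC
from sklearn.model_selection import train_test_split
from sklearn.metrics import accuracy_score

# Toy labelled corpus (hypothetical placeholder data).
texts = [
    "breaking news about the election",
    "celebrity spotted on vacation",
    "miracle cure discovered overnight",
    "official statement from the ministry",
]
labels = [1, 0, 1, 0]  # 1 = fake, 0 = real (illustrative only)

# Word-level unigram + bigram TF-IDF representation.
vectorizer = TfidfVectorizer(ngram_range=(1, 2))
X = vectorizer.fit_transform(texts)

X_train, X_test, y_train, y_test = train_test_split(
    X, labels, test_size=0.5, stratify=labels, random_state=0
)

# Two of the classifier families mentioned above: SVM and random forest.
for clf in (LinearSVC(), RandomForestClassifier(n_estimators=100, random_state=0)):
    clf.fit(X_train, y_train)
    print(type(clf).__name__, accuracy_score(y_test, clf.predict(X_test)))
```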
“…Tokenization describes interpreting and grouping isolated tokens to generate higher-level tokens, in addition to dividing strings into fundamental processing units. In word tokenization, the preprocessed raw texts are divided into textual units [18,19]. During the dataset cleaning stage, the columns that were not needed for processing were removed from the datasets.…”
Section: Data Pre-processing (mentioning)
confidence: 99%
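A minimal sketch of the pre-processing described in this excerpt: dropping dataset columns that are not needed and splitting raw text into word-level tokens. The column names and the regex-based tokenizer are assumptions for illustration, not the paper's exact procedure.

```python
# Minimal pre-processing sketch: remove unneeded columns, then word-tokenize.
import re
import pandas as pd

# Hypothetical dataset layout; only "text" and "label" are kept for processing.
df = pd.DataFrame({
    "text": ["عاجل: خبر مهم اليوم", "Breaking: important news today"],
    "url": ["http://example.com/1", "http://example.com/2"],  # assumed extra column
    "label": [1, 0],
})

# Dataset cleaning: drop columns not required for processing.
df = df.drop(columns=["url"])

# Word tokenization: divide each raw text into fundamental processing units.
def tokenize(text: str) -> list[str]:
    # \w+ matches Unicode word characters, so Arabic tokens are kept as well.
    return re.findall(r"\w+", text)

df["tokens"] = df["text"].apply(tokenize)
print(df[["tokens", "label"]])
```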
“…In Eq. (2), TF(w) signifies the frequency of the word w in the document; count(w) and count(wn) denote the number of samples including the word w in the dataset, and n denotes the number of samples containing the word w in the corpus, respectively [29]. IDF(w) denotes the inverse document frequency of the word w in Eq. (3).…”
Section: Feature Extraction (mentioning)
confidence: 99%
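For reference, a common TF-IDF formulation consistent with this description is shown below. The paper's exact Eq. (2) and Eq. (3) are not reproduced in the excerpt, so this notation is an assumption rather than the cited definition.

```latex
% A standard TF-IDF formulation that the excerpt appears to paraphrase;
% the cited paper's Eq. (2) and Eq. (3) may use different notation.
\documentclass{article}
\usepackage{amsmath}
\begin{document}
\begin{align*}
  \mathrm{TF}(w)    &= \frac{\mathrm{count}(w)}{\mathrm{count}(w_n)}
      && \text{occurrences of $w$ over the total word count of the sample}\\
  \mathrm{IDF}(w)   &= \log\frac{N}{n}
      && \text{$N$ samples in the corpus, $n$ of them containing $w$}\\
  \mathrm{TFIDF}(w) &= \mathrm{TF}(w)\times\mathrm{IDF}(w)
\end{align*}
\end{document}
```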