2018
DOI: 10.1007/978-3-319-92007-8_4
Spam Filtering in Social Networks Using Regularized Deep Neural Networks with Ensemble Learning

Abstract: Spam filtering in social networks is increasingly important owing to the rapid growth of the social network user base. Sophisticated spam filters must be developed to deal with this complex problem. Traditional machine learning approaches such as neural networks, support vector machine and Naïve Bayes classifiers are not effective enough to process and utilize complex features present in high-dimensional data on social network spam. To overcome this problem, here we propose a novel approach to social network spam …

Cited by 14 publications (15 citation statements)

References 21 publications (42 reference statements)
“…It was also used as a baseline method in recent studies [20]. In this study, SVM was trained using the SMO algorithm with varying complexity C = {2^0, 2^1, …, 2^6} and a polynomial kernel function; Random Forest (RF) is another well-performing benchmark method used in several comparative studies [10,11]. Here we used it with 100 random trees.…”

Section: Results

confidence: 99%
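The baselines quoted above can be sketched as follows. This is an assumed setup on toy data, not the authors' code: scikit-learn's `SVC` (whose solver is the SMO-based libsvm) with a polynomial kernel and a grid over C = 2^0 … 2^6, plus a Random Forest with 100 trees.

```python
# Hedged sketch of the quoted baselines: an SMO-trained SVM with a polynomial
# kernel, grid-searched over C = {2^0, ..., 2^6}, and a Random Forest with
# 100 random trees. The synthetic dataset is a stand-in for the spam corpus.
from sklearn.svm import SVC
from sklearn.ensemble import RandomForestClassifier
from sklearn.datasets import make_classification
from sklearn.model_selection import GridSearchCV, train_test_split

X, y = make_classification(n_samples=300, n_features=20, random_state=0)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)

# SVM: scikit-learn's SVC uses the SMO-style libsvm solver internally
svm = GridSearchCV(SVC(kernel="poly"), {"C": [2**k for k in range(7)]})
svm.fit(X_tr, y_tr)

# Random Forest with 100 random trees, as in the cited comparison
rf = RandomForestClassifier(n_estimators=100, random_state=0).fit(X_tr, y_tr)

svm_acc = svm.score(X_te, y_te)
rf_acc = rf.score(X_te, y_te)
```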
“…Note that only the most relevant terms (attributes) were selected according to their weights v_ij. In agreement with previous studies [10,11], the top 2000 terms were retained, including unigrams and bigrams as suggested in [26]. To obtain word embeddings, the skip-gram model was employed.…”

Section: Methods

confidence: 99%
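The term-selection step described above can be approximated as follows; this is a hedged sketch, using TF-IDF weights as the term weights and a toy corpus rather than the social-network spam data. `max_features=2000` retains the highest-weighted terms, and `ngram_range=(1, 2)` includes unigrams and bigrams.

```python
# Sketch of the feature-selection step: keep the top 2000 terms (unigrams and
# bigrams) ranked by corpus-level TF-IDF weight. Toy documents stand in for
# the real spam corpus; the weighting scheme is an assumption.
from sklearn.feature_extraction.text import TfidfVectorizer

docs = [
    "free prize click here to win",
    "meeting rescheduled to friday afternoon",
    "win a free prize now click here",
    "project report attached for review",
]
vec = TfidfVectorizer(ngram_range=(1, 2), max_features=2000)
X = vec.fit_transform(docs)
# X: one row per document, one column per retained term (at most 2000)
</pre>```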
See 1 more Smart Citation
“…A. Barushka and Petr Hajek (2018) in [27] showed that ensemble learning algorithms with DNNs as base learners are more accurate than state-of-the-art spam filtering methods. Their results indicate that the bagging algorithm trained with DNNs achieved high accuracy and the best results on both classes.…”

Section: Literature Survey

confidence: 99%
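The bagging-of-DNNs idea summarized above can be illustrated with a minimal sketch, assuming scikit-learn and a shallow `MLPClassifier` as a stand-in for the regularized deep networks of the paper; the dataset and hyperparameters are illustrative, not the authors'.

```python
# Minimal sketch of bagging with neural-network base learners: each of the
# 5 ensemble members is an MLP trained on a bootstrap sample, and their
# predictions are aggregated by majority vote.
from sklearn.ensemble import BaggingClassifier
from sklearn.neural_network import MLPClassifier
from sklearn.datasets import make_classification

X, y = make_classification(n_samples=200, n_features=10, random_state=1)
ensemble = BaggingClassifier(
    MLPClassifier(hidden_layer_sizes=(16,), max_iter=300),
    n_estimators=5,
    random_state=1,
)
ensemble.fit(X, y)
acc = ensemble.score(X, y)
```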
“…This paper is a significantly extended version of [18]. The earlier version was limited to ensemble-based RDNNs without considering cost-sensitive learning and feature selection.…”
confidence: 99%