2015
DOI: 10.1007/s00521-015-2069-7
Supervised classification of spam emails with natural language stylometry

Cited by 24 publications (12 citation statements)
References 29 publications
“…The combination of boosting and SVM outperformed single classifiers on several benchmark datasets in [70]. Similarly, boosting and bagging were reported to perform significantly better than NB and SVM in a stylometric spam filter [63]. Laorden et al. [46] proposed an anomaly-based spam-filtering system that uses a data reduction algorithm on the labelled dataset, reducing processing time while maintaining high detection rates.…”
Section: Spam Filtering Using Machine Learning: A Literature Review (mentioning)
confidence: 99%
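The stylometric filters discussed in these citation statements classify emails by writing-style features rather than keywords alone. The sketch below is purely illustrative and uses only assumed example features (average word length, average sentence length, exclamation rate, uppercase-word rate); it is not the feature set of Shams and Mercer or any of the cited filters.

```python
# Illustrative sketch of stylometric feature extraction for spam filtering.
# The feature set here is an assumption for demonstration, not the one used
# in the cited papers.
import re

def stylometric_features(text):
    words = re.findall(r"[A-Za-z']+", text)
    sentences = [s for s in re.split(r"[.!?]+", text) if s.strip()]
    n_words = len(words) or 1
    return {
        "avg_word_len": sum(len(w) for w in words) / n_words,
        "avg_sent_len": n_words / (len(sentences) or 1),
        "exclaim_rate": text.count("!") / max(len(text), 1),
        "upper_rate": sum(w.isupper() for w in words) / n_words,
    }

spam = "FREE OFFER!!! CLICK now to WIN big money! Act FAST!"
ham = "Hi Anna, attached are the meeting notes from Tuesday. Thanks."
# Spam-like text tends to score higher on shouting-style features.
print(stylometric_features(spam)["upper_rate"] > stylometric_features(ham)["upper_rate"])
```

Feature vectors like these would then be fed to a learner (e.g. an SVM or a boosted ensemble, as the excerpts above report) trained on labelled ham/spam examples.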
“…The average classification accuracy of the proposed method was 95.15%. Shams and Mercer [47] proposed a filter using natural language attributes. The accuracy of the filter applied to the Enron data sets reached 97.746%.…”
Section: Experimental Results and Analysis (mentioning)
confidence: 99%
“…Moreover, in natural language processing it is also natural to jointly leverage part-of-speech (POS) tagging and noun chunk prediction, since a word with a noun POS usually appears in a noun chunk (Collobert and Weston, 2008; Herath, Ikeda, Ishizaki, Anzai and Aiso, 1992; Jabbar, Iqbal, Akhunzada and Abbas, 2018; Lyon and Frank, 1997; Shams and Mercer, 2016). Multi-task learning aims to build a joint model for multiple tasks from the same input data.…”
Section: Backgrounds (mentioning)
confidence: 99%