Proceedings of the 22nd ACM International Conference on Information & Knowledge Management, 2013
DOI: 10.1145/2505515.2505700
Uncovering collusive spammers in Chinese review websites

Cited by 110 publications (86 citation statements)
References 11 publications
“…This indicates that the proposed features are superior to these two traditional features. Though the F1-measure of the proposed features does not outperform that of collusive behavioral-based features, collusive behavioral-based features require finding the group of each user first, which makes them harder to calculate than our proposed features [32]. All in all, the proposed features are able to detect attackers in the real-life situation, which proves the practical value of our proposed detection method.…”
Section: Practical Value of the Proposed Detection Methods (mentioning)
confidence: 80%
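For context, the F1-measure referenced in the excerpt above is the standard harmonic mean of precision and recall on the detected-spammer class; the definition below is the textbook formulation, not a formula taken from the cited paper.

```latex
% Standard definition of the F1-measure used in the comparison above
% (P = precision, R = recall on the labeled spammer class).
F_1 = \frac{2 \cdot P \cdot R}{P + R}, \qquad
P = \frac{TP}{TP + FP}, \qquad
R = \frac{TP}{TP + FN}
```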
“…The MovieLens100K dataset consists of ratings from 943 users on 1,682 items, with a rating frequency of not less than 20 for each item [4]. The Amazon review dataset was crawled from Amazon.cn up to August 20, 2012, and contains 1,205,125 reviews written by 645,072 reviewers on 136,785 products [32]. Each review has six attributes: ReviewerID, ProductID, Product Brand, Rating, Date, and Review Text.…”
Section: Experimental Methods (mentioning)
confidence: 99%
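As an illustration of the six-attribute schema described in the excerpt, the sketch below shows one way such review records could be represented and loaded in Python. The CSV layout, the header strings, and the load_reviews helper are assumptions for illustration; the excerpt does not specify how the crawled dataset is actually stored.

```python
# Minimal sketch of the six-attribute review record described above.
# The CSV file layout and encoding are assumptions for illustration;
# the excerpt does not specify the dataset's storage format.
import csv
from dataclasses import dataclass
from typing import List

@dataclass
class Review:
    reviewer_id: str
    product_id: str
    product_brand: str
    rating: float
    date: str          # e.g. "2012-08-20"
    review_text: str

def load_reviews(path: str) -> List[Review]:
    """Read one Review per CSV row, keyed by the attribute names above."""
    reviews = []
    with open(path, newline="", encoding="utf-8") as f:
        for row in csv.DictReader(f):
            reviews.append(Review(
                reviewer_id=row["ReviewerID"],
                product_id=row["ProductID"],
                product_brand=row["Product Brand"],
                rating=float(row["Rating"]),
                date=row["Date"],
                review_text=row["Review Text"],
            ))
    return reviews
```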
“…Compared to prior works, many challenges arise regarding this problem: (1) (Annotation Difficulty) Since CQA spamming is usually a collaborative activity, it is difficult to ascertain which contents are deceptive and which are legitimate; (2) (Asymmetric Q&A Attributes) In CQA, questions and answers are asymmetric, with different attributes and linguistic structures, unlike deceptive product reviews [9,10,23] or promotional microblog posts [13], which can be analyzed uniformly; (3) (Unclear Group Base) Previous works group spamming activities around reviewers of multiple common products on review platforms [19,28,29] or around common URLs or contents in microblog environments [4]. However, in CQA there are no clear existing connections that can group Q&As, because CWers can generate unlimited distinct questions, and deceptive answers can respond to any of them.…”
Section: Introduction (mentioning)
confidence: 99%
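The "Unclear Group Base" point contrasts CQA with review platforms, where prior works can group spammers through commonly reviewed products. The sketch below illustrates that kind of co-review grouping under a simple shared-product threshold; the threshold, the input format, and the co_review_groups helper are illustrative assumptions, not the procedure of any cited work.

```python
# Illustrative sketch of grouping reviewers by co-reviewed products, the kind
# of "group base" the excerpt says is available on review platforms but not
# in CQA. The threshold and input format are assumptions, not the procedure
# of the cited works.
from collections import defaultdict
from itertools import combinations

def co_review_groups(reviews, min_common=3):
    """reviews: iterable of (reviewer_id, product_id) pairs.
    Returns reviewer pairs that share at least `min_common` products."""
    products_by_reviewer = defaultdict(set)
    for reviewer, product in reviews:
        products_by_reviewer[reviewer].add(product)
    pairs = []
    for r1, r2 in combinations(products_by_reviewer, 2):
        common = products_by_reviewer[r1] & products_by_reviewer[r2]
        if len(common) >= min_common:
            pairs.append((r1, r2, len(common)))
    return pairs
```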