2011
DOI: 10.1145/2019591.2019593

Effective Usage of Computational Trust Models in Rational Environments

Abstract: Computational reputation-based trust models using statistical learning have been intensively studied for distributed systems where peers behave maliciously. However, practical applications of such models in environments with both malicious and rational behaviors are still poorly understood. In this article, we study the relation between their accuracy measures and their ability to enforce cooperation among participants and discourage selfish behaviors. We provide theoretical results that show the condition…
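To make the abstract's central idea concrete (how a reputation score can create an incentive for a rational provider to cooperate rather than defect), here is a minimal Python sketch. It is not the paper's model: the beta-reputation aggregator, payoff values, and discount factor are illustrative assumptions only.

```python
# Minimal sketch (not the paper's model): a beta-reputation score and a
# rational provider's cooperate/defect decision based on expected future payoff.
# All names, payoffs, and the discount factor are illustrative assumptions.

def beta_reputation(positive: int, negative: int) -> float:
    """Expected probability of good behavior under a Beta(positive+1, negative+1) posterior."""
    return (positive + 1) / (positive + negative + 2)

def rational_choice(pos: int, neg: int,
                    gain_defect: float = 1.0,      # one-shot gain from cheating
                    value_per_round: float = 2.0,  # payoff per future transaction
                    discount: float = 0.9) -> str:
    """Cooperate iff the discounted loss of future business (caused by the
    reputation drop a defection produces) outweighs the immediate gain."""
    rep_if_cooperate = beta_reputation(pos + 1, neg)
    rep_if_defect = beta_reputation(pos, neg + 1)
    # Future business assumed proportional to reputation (illustrative).
    horizon = discount / (1 - discount)
    future_if_cooperate = horizon * value_per_round * rep_if_cooperate
    future_if_defect = horizon * value_per_round * rep_if_defect
    if future_if_cooperate >= gain_defect + future_if_defect:
        return "cooperate"
    return "defect"

if __name__ == "__main__":
    print(beta_reputation(8, 2))   # 0.75
    print(rational_choice(8, 2))   # "cooperate" under these illustrative payoffs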

Cited by 8 publications (9 citation statements)
References 30 publications
“…The misclassification error bounds of these reputation‐based probabilistic trust models are well lower than 0.5 even under various adaptively malicious attacks by participating raters. Other empirical experimental results (Vu and Aberer, ) have confirmed that other computational trust models, such as those proposed by Xiong and Liu (), are also capable of classifying unreliable ratings with a small error bound ϵ under various attack scenarios, and thus, they can be readily used to implement a dishonesty detector.…”
Section: Implementation Issues
Confidence: 69%
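To illustrate the "dishonesty detector" idea in the excerpt above (classifying unreliable ratings with a small error bound ε), here is a minimal Python sketch. It is not the algorithm of Vu and Aberer or of Xiong and Liu; the majority-vote consensus and the ε-based disagreement threshold are illustrative assumptions only.

```python
# Illustrative sketch of a dishonesty detector: flag a rater whose reports
# disagree with the majority view more often than a tolerance epsilon.
# The threshold rule and all names are assumptions, not the cited papers' methods.

from collections import defaultdict

def majority_view(ratings: dict[str, dict[str, int]]) -> dict[str, int]:
    """Per-provider majority rating (1 = good, 0 = bad) across all raters."""
    votes = defaultdict(list)
    for reports in ratings.values():
        for provider, score in reports.items():
            votes[provider].append(score)
    return {p: int(sum(v) >= len(v) / 2) for p, v in votes.items()}

def flag_dishonest(ratings: dict[str, dict[str, int]], epsilon: float = 0.2) -> set[str]:
    """Flag raters whose disagreement rate with the majority exceeds epsilon."""
    consensus = majority_view(ratings)
    flagged = set()
    for rater, reports in ratings.items():
        disagreements = sum(1 for p, s in reports.items() if s != consensus[p])
        if reports and disagreements / len(reports) > epsilon:
            flagged.add(rater)
    return flagged

if __name__ == "__main__":
    reports = {
        "alice":   {"p1": 1, "p2": 0, "p3": 1},
        "bob":     {"p1": 1, "p2": 0, "p3": 1},
        "mallory": {"p1": 0, "p2": 1, "p3": 0},  # systematically inverted ratings
    }
    print(flag_dishonest(reports))  # {'mallory'}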
“…In consequence, this may change the incentives for a provider to accept our identity premium mechanism. We have shown in our previous work (Vu and Aberer, ) that honest negative ratings due to unavoidable circumstances have only minor impact on providers.…”
Section: Solution Framework
Confidence: 76%
“…Reputation systems can be centralized or distributed [25,26,27,28]. Liu [29] describes criteria for classifying and analysing centralised reputation systems.…”
Section: Trust and Reputation
Confidence: 99%