Reputation plays a key role in online marketplace communities by improving trust among community members. Reputation serves as a decision-making tool for understanding the behavior of business partners, and the success of any online business depends on the trust that business agents share with one another. However, untrustworthy agents have no place in online marketplaces and are forced to leave the market even if they could potentially cooperate in the future. In this study, we propose an exploration strategy based on a forgiveness mechanism that allows untrustworthy agents to recover their reputation. Furthermore, a number of experiments based on NetLogo simulations are performed to validate the applicability of the proposed mechanism. The results show that a forgiveness mechanism can be used alongside existing reputation systems and improves the efficiency of online marketplaces.
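The forgiveness idea above can be illustrated with a minimal sketch: a defection is punished, but a forgiveness term lets the agent rebuild reputation through subsequent cooperation instead of being permanently excluded. The function name, the update rule, and all parameter values here are illustrative assumptions, not the abstract's actual model.

```python
# Hypothetical sketch of a forgiveness-based reputation update.
# The update rule and the forgiveness/penalty/reward parameters are
# illustrative assumptions, not the mechanism proposed in the paper.

def update_reputation(reputation, cooperated,
                      forgiveness=0.1, penalty=0.4, reward=0.2):
    """Update an agent's reputation, kept in [0, 1].

    Defection is punished multiplicatively; cooperation restores
    reputation, and a higher forgiveness term speeds up recovery.
    """
    if cooperated:
        # Forgiveness accelerates recovery toward full reputation.
        reputation += (reward + forgiveness) * (1.0 - reputation)
    else:
        # Defection costs a fraction of the current reputation.
        reputation -= penalty * reputation
    return min(1.0, max(0.0, reputation))

# An agent defects once, then cooperates repeatedly and recovers.
rep = 0.8
rep = update_reputation(rep, cooperated=False)  # punished to 0.48
for _ in range(5):
    rep = update_reputation(rep, cooperated=True)  # gradual recovery
```

With forgiveness set to zero, recovery is slower, which mirrors the abstract's point that markets without such a mechanism push out agents who might still cooperate.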
Abstract: The violation of trust as a result of interactions that do not proceed as expected raises the question of whether broken trust can be recovered. Clearly, trust recovery is more complex than trust initialization and maintenance: it requires a mechanism that explores the different factors causing the decline of trust and identifies the individuals affected by the violation, both directly and indirectly. In this study, an extended framework for recovering trust is presented. Aside from evaluating whether there is potential for recovery based on the outcome of a forgiveness mechanism after a trust violation, the framework also uses incentive mechanisms to encourage cooperation between the interacting parties after the violation. Furthermore, a number of experiments are conducted to validate the applicability of the framework, and the findings show that an e-marketplace incorporating the proposed framework achieves improved trading efficiency, especially in long-term interactions.
Trust violation during cooperation among autonomous agents in multiagent systems is usually unavoidable and can arise for a wide range of reasons. From a psychological point of view, a violation of an agent's trust results from one agent (the transgressor) placing very low weight on the welfare of another agent (the victim) by inflicting a high cost for a very small benefit. To help the victim make an effective decision about whether to cooperate or punish in the next interaction, a psychological variable called the welfare tradeoff ratio (WTR) can be used to upregulate the transgressor's disposition, so that the number of exploitative behaviors likely to occur in the future is decreased. In this paper, we propose computational models of WTR-based metrics, along with a way to integrate multiple metrics into a final result. Additionally, a number of experiments based on social network analysis are conducted to evaluate the performance of the proposed framework, and the results show that by implementing WTR the simulated network can deal with different levels of trust violation effectively.
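The cost/benefit intuition behind WTR can be sketched as follows: an act that imposes a large cost on the victim for a small benefit to the transgressor implies a low weight on the victim's welfare, which crosses the victim's threshold for punishment. The ratio formula, the threshold, and the function names are assumptions for illustration; the paper's actual computational models may differ.

```python
# Illustrative sketch of a welfare-tradeoff-ratio (WTR) check.
# The ratio estimate and the punishment threshold are assumptions,
# not the metrics proposed in the paper.

def implied_wtr(benefit_to_transgressor, cost_to_victim):
    """Estimate the weight the transgressor placed on the victim's
    welfare: a large cost imposed for a small benefit implies a
    low WTR."""
    if cost_to_victim == 0:
        return float("inf")  # no cost imposed: no violation implied
    return benefit_to_transgressor / cost_to_victim

def next_interaction(benefit, cost, threshold=0.5):
    """Cooperate only if the transgressor's implied WTR meets the
    victim's threshold; otherwise punish, which (per the WTR account)
    upregulates the transgressor's future disposition."""
    if implied_wtr(benefit, cost) >= threshold:
        return "cooperate"
    return "punish"
```

For example, gaining 1 unit at a cost of 10 to the victim implies a WTR of 0.1 and triggers punishment, while gaining 5 at a cost of 2 implies 2.5 and sustains cooperation.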