Often in open multiagent systems, agents interact with other agents to meet their own goals. Trust is therefore considered essential to making such interactions effective. However, trust is a complex, multifaceted concept that involves more than just evaluating others' honesty. Many trust evaluation models have been proposed and implemented in different areas; most of them focus on algorithms that allow trusters to model the trustworthiness of trustees so that they can make effective decisions about which trustees to select. For this purpose, many trust evaluation models rely on third-party information sources such as witnesses, but little consideration is given to how such information sources are located. Unlike most trust models, the proposed model defines a scalable way to locate a set of witnesses and combines a suspension technique with reinforcement learning to improve the model's response to dynamic changes in the system. Simulation results indicate that the proposed model benefits trusters while incurring less message overhead.
Multiagent systems (MASs) are increasingly popular for modeling distributed environments that are highly complex and dynamic, such as e-commerce, smart buildings, and smart grids. Agents are typically assumed to be goal-driven with limited abilities, which compels them to work with other agents to accomplish complex tasks. Trust is considered essential in MASs for making interactions effective, especially when agents cannot be sure that potential partners share the same core beliefs about the system or make accurate statements about their competencies and abilities. Because of the imprecise and dynamic nature of trust in MASs, we propose a hybrid trust model that uses fuzzy logic and Q-learning for trust modeling, as an improvement over purely Q-learning-based trust evaluation. Q-learning is used to estimate trust over the long term, fuzzy inference is used to aggregate different trust factors, and suspension is used as a short-term response to dynamic changes. The performance of the proposed model is evaluated using simulation. The results indicate that the proposed model can help agents select trustworthy partners to interact with, and that it outperforms some popular trust models in the presence of misbehaving interaction partners.
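To make the division of labor between the three components concrete, the sketch below illustrates one plausible way a truster could combine them. It is not the authors' implementation: the class and parameter names (TrustModel, alpha, suspension_rounds), the triangular fuzzy memberships, and the thresholds are assumptions made purely for illustration of the Q-learning update, fuzzy aggregation of trust factors, and short-term suspension described above.

```python
from dataclasses import dataclass, field

@dataclass
class TrusteeRecord:
    q_value: float = 0.5          # long-term trust estimate, updated by Q-learning
    suspended_until: int = -1     # round index until which the trustee is suspended

@dataclass
class TrustModel:
    alpha: float = 0.1            # learning rate for the Q-learning update (assumed value)
    suspension_rounds: int = 5    # length of a suspension (assumed value)
    records: dict = field(default_factory=dict)

    def aggregate_factors(self, factors: dict) -> float:
        """Simplified stand-in for the fuzzy inference stage: map each trust
        factor in [0, 1] (e.g. direct experience, witness reports) to
        low/medium/high memberships and defuzzify into a single reward."""
        def memberships(x: float):
            low = max(0.0, 1.0 - 2.0 * x)
            high = max(0.0, 2.0 * x - 1.0)
            med = 1.0 - low - high
            return low, med, high

        lows, meds, highs = zip(*(memberships(v) for v in factors.values()))
        n = len(factors)
        l, m, h = sum(lows) / n, sum(meds) / n, sum(highs) / n
        # Centroid-style defuzzification with representative values 0.0 / 0.5 / 1.0.
        return (m * 0.5 + h * 1.0) / (l + m + h)

    def update(self, trustee: str, factors: dict, current_round: int) -> None:
        rec = self.records.setdefault(trustee, TrusteeRecord())
        reward = self.aggregate_factors(factors)
        # Q-learning style incremental update of the long-term trust estimate.
        rec.q_value += self.alpha * (reward - rec.q_value)
        # Short-term suspension as an immediate reaction to a clearly bad interaction.
        if reward < 0.2:  # assumed threshold
            rec.suspended_until = current_round + self.suspension_rounds

    def select(self, candidates: list, current_round: int) -> str:
        """Pick the non-suspended candidate with the highest trust estimate;
        fall back to all candidates if everyone is currently suspended."""
        eligible = [c for c in candidates
                    if self.records.get(c, TrusteeRecord()).suspended_until < current_round]
        pool = eligible or candidates
        return max(pool, key=lambda c: self.records.get(c, TrusteeRecord()).q_value)
```

In this reading, the fuzzy stage turns several heterogeneous trust factors into one reward signal, Q-learning smooths that signal into a slowly adapting long-term estimate, and suspension provides the fast reaction to sudden misbehavior that the long-term estimate alone would pick up only after several interactions.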