Intelligent agents act in open and thus risky environments, where deciding whom to trust and interact with can be a challenging process. As intelligent agents are gradually enriched with Semantic Web technology, acting on behalf of their users with limited or no human intervention, their ability to perform assigned tasks is increasingly scrutinized. Hence, trust and reputation models, based on interaction trust or witness reputation, have been proposed, yet they often presuppose a centralized authority. Although such centralized mechanisms are more popular, they are often met with skepticism, since users may question the trustworthiness and robustness of a central authority. Distributed models, on the other hand, are more complex but provide personalized estimations based on each agent's interests and preferences. To this end, this article proposes DISARM, a novel distributed reputation model. DISARM treats multi-agent systems (MASs) as social networks, enabling agents to establish and maintain relationships and thereby limiting the disadvantages of common distributed approaches. Additionally, it is based on defeasible logic, modeling the way intelligent agents, like humans, draw reasonable conclusions from incomplete and possibly conflicting (thus inconclusive) information. Finally, we provide an evaluation that illustrates the usability of the proposed model.