We consider a distributed multiuser system in which individual entities hold observations or perceptions of one another, while the truth about each entity is known only to itself, and each entity may have an interest in withholding or distorting that truth. We ask whether the system as a whole can arrive at a correct assessment of all users, referred to as their reputations, by incentivizing the users to participate in a collective effort without violating their private information or self-interest. In this paper, we investigate this problem using a mechanism design approach. We introduce a number of utility models representing users' strategic behavior, each consisting of a truth element, an image element, or both, reflecting a user's desire to obtain an accurate view of others and to project an inflated image of itself. For each model, we either design a mechanism that achieves the optimal performance (the solution to the corresponding centralized problem), or present individually rational suboptimal solutions. In the latter case, we demonstrate that even when the centralized solution is not achievable, a simple punish-reward mechanism ensures not only that users have an incentive to participate and provide information, but also that this information improves system performance.
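To make the two elements concrete, the following is a minimal sketch of one possible utility form; the quadratic truth term, the linear image term, and the weights $\alpha_i, \beta_i$ are illustrative assumptions for exposition, not the specific models introduced in the paper.
\[
% illustrative form only; the paper's models may differ
u_i \;=\; \underbrace{-\,\alpha_i \sum_{j \neq i} \bigl(\hat{r}_{ij} - r_j\bigr)^2}_{\text{truth element}} \;+\; \underbrace{\beta_i\, \hat{R}_i}_{\text{image element}},
\]
where $r_j$ denotes user $j$'s true quality (known only to $j$), $\hat{r}_{ij}$ is user $i$'s estimate of it, $\hat{R}_i$ is the reputation of user $i$ published by the system, and $\alpha_i, \beta_i \ge 0$ weight the desire for accuracy about others against the desire for an inflated self-image. A model with only the first term captures a purely truth-seeking user, while one with only the second term captures a purely image-seeking user.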