Misinformation on social media is a pressing and contested policy problem, given its reach and the variety of stakeholders involved. In particular, the growth of social media use has made the spread of misinformation almost universal. Here we demonstrate a framework for evaluating misinformation-detection tools based on a preference elicitation approach, together with an integrated decision-analytic process for assessing desirable features of systems for combatting misinformation. The framework was tested in three countries (Austria, Greece, and Sweden) with three groups of stakeholders (policymakers, journalists, and citizens). Multi-criteria decision analysis provided the methodological basis for the research. The results show that participants prioritised information about the actors behind the distribution of misinformation and about tracing the life cycle of posts containing misinformation. Another important criterion was whether someone intended to deceive others, reflecting a preference for trust, accountability, and quality in, for instance, journalism. How misinformation travels was also considered important. However, all criteria that involved actively contributing to countering misinformation were ranked low in importance, suggesting that participants may not have felt sufficiently personally involved in the subject or situation. The results also reveal differences in tool preferences that are influenced by cultural background and that might be considered in the further development of such tools.