The integration of Artificial Intelligence techniques into Decision Support Systems yields effective solutions to decision problems, especially in complex scenarios. However, the use of intelligent black-box models can hinder a decision support system's adoption, because opaque processes raise suspicion and doubt among careful decision makers. Conversely, appropriate and comprehensible explanations may foster trustworthiness and allow for reasonable adjustments or even corrections. This work proposes an approach that incorporates three reasonability aspects into Decision Support Systems: feasibility, rationality, and plausibility. By providing decision makers with reasonable candidate solutions to a complex problem, the approach is expected to help them perform their tasks more effectively (i.e., decide with greater efficiency as well as efficacy). The new approach is accompanied by two proofs of concept, in the health and public security domains. Comparative results against random and rational baselines, including the simulation of distinct user profiles, are presented. The proposed approach achieved superior metrics with respect to feasibility and plausibility, suggesting that it can be applied in real-world settings.