Experts rely on fraud detection and decision support systems to analyze fraud cases, a growing problem in digital retailing and banking. With the advent of Artificial Intelligence (AI) for decision support, these experts face the black-box problem and lack trust in AI fraud predictions. This issue has been tackled by employing Explainable AI (XAI) to provide experts with explanations of AI predictions through various explanation methods. However, fraud detection studies supported by XAI lack a user-centric perspective and a discussion of how principles are deployed, both important requirements for experts choosing an appropriate explanation method. Meanwhile, recent research in Information Systems (IS) and Human-Computer Interaction highlights the need to understand user requirements in order to develop tailored design principles for decision support systems. In this research, we adopt a design science research methodology and an IS theoretical lens to develop and evaluate design principles that align fraud experts' tasks with explanation methods for Explainable AI decision support. We evaluate the utility of these principles using an information quality framework in interviews with banking fraud experts, complemented by a simulation. The results show that the principles are a useful tool for designing decision support systems for fraud detection with embedded user-centric Explainable AI.