To mitigate misinformation on social media, platforms such as Facebook have offered warnings to users based on the detection results of AI systems. As AI detection systems have evolved, efforts have been devoted to applying explainable AI (XAI) to further increase the transparency of AI decision-making. Nevertheless, few studies have examined the factors that determine the effectiveness of warnings accompanied by AI explanations in helping humans detect misinformation. In this study, we report the results of three online human-subject experiments (N = 2,692) investigating the framing effect and the impact of an AI system’s reliability on the effectiveness of AI warnings with explanations. Our findings show that the framing of a warning affects participants’ misinformation detection, and that the AI system’s reliability is critical both for participants’ misinformation detection and for their trust in the AI system. However, adding explanations can increase participants’ suspicion of miss errors (i.e., false negatives) made by the AI system. Furthermore, participants showed more trust in the condition with AI warnings but without explanations. We conclude by discussing the implications of our findings.