Model Reconciliation Problems (MRPs) and their variant, Logic-based MRPs (L-MRPs), have emerged as popular approaches for explainable planning. Both MRP and L-MRP approaches assume that the explaining agent has access to a model of the human user receiving the explanation; the agent reconciles its own model with this human model to identify differences that, when communicated as explanations, the human can understand. In practical applications, however, the agent is likely to be fairly uncertain about the actual model of the human, and incorrect assumptions can lead to incoherent or unintelligible explanations. In this paper, we propose a less stringent requirement: The agent has access to a task-specific vocabulary known to the human and, if available, a human model capturing confidently-known information. Our goal is to find a personalized explanation, that is, an explanation at an abstraction level appropriate to the human’s vocabulary and model. Using a logic-based method called knowledge forgetting for generating abstractions, we propose a simple framework compatible with L-MRP approaches and evaluate its efficacy through computational and human user experiments.
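For concreteness, the following is a minimal sketch of forgetting in the propositional case; it uses the standard variable-elimination definition of forgetting and is given only as an illustration, since the precise forgetting operator employed in the framework may be defined over a richer logic.

% Standard propositional forgetting: eliminating atom p from formula \varphi
% keeps exactly the consequences of \varphi that do not mention p.
\[
  \mathsf{forget}(\varphi, p) \;\equiv\; \varphi[p/\top] \,\lor\, \varphi[p/\bot]
\]
% Illustrative example (hypothetical vocabulary): forgetting p from
% (p \lor q) \land (\lnot p \lor r) yields q \lor r, an abstraction
% expressed only over the remaining vocabulary \{q, r\}.

Intuitively, repeatedly forgetting every symbol outside the human’s task-specific vocabulary yields an abstraction of the agent’s model stated only in terms the human knows, which is the sense in which forgetting supports personalized explanations here.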