With the abundance of conversations happening everywhere, dialogue summarization plays an increasingly important role in the real world. However, dialogues inevitably involve numerous personal pronouns, which hinder the performance of existing dialogue summarization models. This work proposes a framework named WHORU to inject external personal pronoun resolution (PPR) information into abstractive dialogue summarization models. To reduce time and space consumption, we further propose a simple and effective PPR method tailored to the dialogue domain. Experiments demonstrate the superiority of the proposed methods. More importantly, we achieve new SOTA results on the SAMSum and AMI datasets.