Conventional dialogue summarization methods directly generate summaries and do not consider users' specific interests. This poses challenges when users are more focused on particular topics or aspects. With the advancement of instruction-finetuned language models, we introduce instruction tuning to dialogues to expand the capability set of dialogue summarization models. To overcome the scarcity of instructive dialogue summarization data, we propose a three-step approach to synthesize high-quality query-based summarization triples. This process involves summary-anchored query generation, query filtering, and query-based summary generation. By training a unified model called InstructDS (Instructive Dialogue Summarization) on three summarization datasets with multi-purpose instructive triples, we expand the capability of dialogue summarization models. We evaluate our method on four datasets, covering dialogue summarization and dialogue reading comprehension. Experimental results show that our approach outperforms state-of-the-art models and even models of larger sizes. Additionally, our model exhibits higher generalizability and faithfulness, as confirmed by human subjective evaluations.

Example dialogue with instruction-output pairs:

Benjamin: Hey guys, what are we doing with the keys today?
Hilary: I've got them. Whoever wants them can meet me at lunchtime or after
Elliot: I'm ok. We're meeting for the drinks in the evening anyway and …
Benjamin: Interesting To be honest, Hilary, I almost feel like changing my mind. Wanting to take this nap might end up costing me to dear …
Hilary: Do join us, we're going to have fun. And then you'll take the keys and take this most deserved of naps
Elliot: Sounds like a plan
Hilary:
Elliot: See you at 2 then xx !

Instruction: Where is Hilary having lunch?
Output: La Cantina

Instruction: Summarize the dialogue with about 15 words.
Output: Benjamin, Hilary and Elliot are discussing about their plans for the day after getting the apartment keys.
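The three-step triple-synthesis process (summary-anchored query generation, query filtering, query-based summary generation) can be sketched as a simple pipeline. The functions below are hypothetical stand-ins, not the paper's implementation: in practice each step would call an instruction-finetuned language model, whereas here crude heuristics are used purely to make the data flow concrete.

```python
# Sketch of three-step query-based summarization triple synthesis.
# All three step functions are hypothetical stubs; a real pipeline would
# query an instruction-finetuned LLM at each step.

def generate_queries(summary: str) -> list[str]:
    # Step 1: summary-anchored query generation - derive candidate queries
    # from the reference summary (stub: one query per summary sentence).
    sentences = [s.strip() for s in summary.split(".") if s.strip()]
    return [f"What is said about: {s}?" for s in sentences]

def filter_queries(dialogue: str, queries: list[str]) -> list[str]:
    # Step 2: query filtering - keep only queries that appear answerable
    # from the dialogue (stub heuristic: keyword overlap with the dialogue).
    kept = []
    for q in queries:
        words = {w.lower().strip("?:,.") for w in q.split()}
        if any(w in dialogue.lower() for w in words if len(w) > 3):
            kept.append(q)
    return kept

def summarize_for_query(dialogue: str, query: str) -> str:
    # Step 3: query-based summary generation (stub: placeholder answer).
    return f"Summary of the dialogue focused on: {query}"

def synthesize_triples(dialogue: str, summary: str) -> list[tuple[str, str, str]]:
    # Chain the three steps into (dialogue, query, query-based summary) triples.
    queries = filter_queries(dialogue, generate_queries(summary))
    return [(dialogue, q, summarize_for_query(dialogue, q)) for q in queries]
```

In a faithful implementation, the filtering step would discard queries whose model-generated answers are unsupported by the dialogue, which is what keeps the synthesized triples high quality.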