We present CLIDSUM, a benchmark dataset for building cross-lingual summarization systems on dialogue documents. It consists of 67k+ dialogue documents from two subsets (i.e., SAMSum and MediaSum) and 112k+ annotated summaries in different target languages. Based on the proposed CLIDSUM, we introduce two benchmark settings for supervised and semi-supervised scenarios, respectively. We then build various baseline systems in different paradigms (pipeline and end-to-end) and conduct extensive experiments on CLIDSUM to provide deeper analyses. Furthermore, we propose mDIALBART, which extends mBART-50 (a multilingual BART; Tang et al. 2020) via further pre-training. The multiple objectives used in the further pre-training stage help the pre-trained model capture the structural characteristics of dialogues, the important content within them, and the transformation from the source language to the target language. Experimental results show that mDIALBART, as an end-to-end model, outperforms strong pipeline models on CLIDSUM. Finally, we discuss specific challenges that current approaches face on this task and point out multiple promising directions for future research. We have released the dataset and code at https://github.com/krystalan/ClidSum.