Diversity question generation (DQG) is a challenging task that aims to generate multiple answerable questions with diverse vocabulary from given contextual information and answers. Despite notable advances in prior research, the field faces two primary challenges: (i) existing work on DQG is limited in scope, with no dedicated dataset for sentence-level diversity question generation (SL-DQG); (ii) the decoding process often suffers from issues such as word repetition, which harms the answerability of the generated questions. To address these challenges, we extract the sentences containing answers from the paragraphs of a paragraph-level diversity question generation (PL-DQG) dataset, thereby constructing a dataset for SL-DQG. We then propose a diversity question generation model based on a contrastive search algorithm (DQG-CSA) and study both SL-DQG and PL-DQG. Our model fine-tunes a pre-trained language model on the downstream task, directly generating numerous questions that are semantically similar yet lexically diverse. Furthermore, we introduce a contrastive search decoding method to mitigate word repetition in both SL-DQG and PL-DQG, enhancing the answerability of the generated questions. Experimental results demonstrate that incorporating the contrastive search algorithm at both the sentence and paragraph levels outperforms other decoding methods while achieving a balance between answerability, fluency, and semantic similarity.
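To make the decoding idea concrete, the sketch below shows one step of contrastive search in plain Python. It follows the standard formulation (score = (1 − α)·model confidence − α·degeneration penalty, where the penalty is the maximum cosine similarity between a candidate token's representation and those of previously generated tokens); the function name, toy vectors, and parameter values are illustrative assumptions, not the paper's implementation.

```python
import math

def contrastive_search_step(probs, cand_hiddens, prev_hiddens, k=4, alpha=0.6):
    """One decoding step of contrastive search (toy sketch).

    probs:        list of next-token probabilities from the model
    cand_hiddens: hidden vector each candidate token would have if emitted
    prev_hiddens: hidden vectors of the tokens generated so far
    Returns the id of the selected token.
    """
    def cos(a, b):
        dot = sum(x * y for x, y in zip(a, b))
        na = math.sqrt(sum(x * x for x in a))
        nb = math.sqrt(sum(y * y for y in b))
        return dot / (na * nb + 1e-9)

    # restrict to the top-k candidates by model confidence
    top_k = sorted(range(len(probs)), key=lambda v: probs[v])[-k:]
    best_id, best_score = None, float("-inf")
    for v in top_k:
        # degeneration penalty: max similarity to any previous token,
        # so repeating earlier content is discouraged
        penalty = max(cos(cand_hiddens[v], h) for h in prev_hiddens)
        score = (1 - alpha) * probs[v] - alpha * penalty
        if score > best_score:
            best_id, best_score = v, score
    return best_id
```

With α = 0.6, a candidate whose representation duplicates an already-generated token is penalized even if the model assigns it the highest probability, which is how the method curbs the word-repetition problem described above.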