Large-scale pre-trained language models have demonstrated strong knowledge representation abilities. However, recent studies suggest that even though these giant models contain rich simple commonsense knowledge (e.g., birds can fly and fish can swim), they often struggle with complex commonsense knowledge that involves multiple eventualities (verb-centric phrases, e.g., identifying the relationship between "Jim yells at Bob" and "Bob is upset"). To address this issue, in this paper, we propose to help pre-trained language models better incorporate complex commonsense knowledge. Unlike direct fine-tuning approaches, we do not focus on a specific task; instead, we propose a general language model named CoCoLM. Through careful training over the large-scale eventuality knowledge graph ASER, we successfully teach pre-trained language models (i.e., BERT and RoBERTa) rich discourse-level commonsense knowledge among eventualities. Experiments on multiple commonsense tasks that require the correct understanding of eventualities demonstrate the effectiveness of CoCoLM.

Query                                     Answer
Birds can [MASK].                         fly
Cars are used for [MASK].                 transport
Jim yells at Bob, [MASK] Jim is upset.    but
Jim yells at Bob, [MASK] Bob is upset.    but
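
Probes like the queries above can be issued against an off-the-shelf masked language model. The following sketch is not part of the paper; it assumes the Hugging Face transformers library and the bert-base-uncased checkpoint, and the actual top predictions of a given checkpoint may differ from the answers shown in the table.

```python
# Minimal sketch: probe a pre-trained masked LM with the example queries.
# Assumes the Hugging Face `transformers` fill-mask pipeline and bert-base-uncased.
from transformers import pipeline

fill_mask = pipeline("fill-mask", model="bert-base-uncased")

queries = [
    "Birds can [MASK].",
    "Cars are used for [MASK].",
    "Jim yells at Bob, [MASK] Jim is upset.",
    "Jim yells at Bob, [MASK] Bob is upset.",
]

for query in queries:
    # Take the highest-scoring prediction for the [MASK] position.
    top = fill_mask(query)[0]
    print(f"{query} -> {top['token_str']} (score={top['score']:.3f})")
```

Such probes illustrate the gap the paper targets: simple factual completions are handled well, while discourse connectives between eventualities (e.g., "but" vs. "so") are often predicted incorrectly.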