We propose TRACIE, a novel temporal reasoning dataset that evaluates the degree to which systems understand implicit events: events that are not mentioned explicitly in natural language text but can be inferred from it. This introduces a new challenge in temporal reasoning research, where prior work has focused on explicitly mentioned events. Human readers can infer implicit events via commonsense reasoning, resulting in a more comprehensive understanding of the situation and, consequently, better reasoning about time. We find, however, that state-of-the-art models struggle when predicting temporal relationships between implicit and explicit events. To address this, we propose a neuro-symbolic temporal reasoning model, SYMTIME, which exploits distant supervision signals from large-scale text and uses temporal rules to combine start times and durations to infer end times. SYMTIME outperforms strong baseline systems on TRACIE by 5%, and by 11% in a zero prior knowledge training setting. Our approach also generalizes to other temporal reasoning tasks, as evidenced by a gain of 1%-9% on MATRES, an explicit event benchmark.

* Most of the work was done when the third author was employed at the Allen Institute for AI and the first author was an intern there.

[Figure: an example TRACIE instance. The context story reads: "Farrah was driving home from school. A person was riding a bicycle in front of her. Farrah looked away for a second. She didn't notice that he stopped. She tried to brake but it was too late. The person recovered soon." A latent timeline aligns explicit events (e.g., ride, drive, stopped, try, recovered) with implicit and not-inferrable events (e.g., distracted, get hit, injured, regret, get home). The hypothesis "distracted starts before try" is labeled entailment; "distracted ends after try" is labeled contradiction.]
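The symbolic rule the abstract alludes to, that an event's end time is its start time plus its duration, can be sketched as follows. This is an illustrative toy, not the paper's actual implementation: the function names and the numeric estimates for the figure's "distracted" and "try" events are hypothetical.

```python
# Illustrative sketch of the temporal rule end(e) = start(e) + duration(e),
# applied to toy numeric estimates. Names and values are hypothetical,
# not taken from the SYMTIME implementation.

def end_time(start: float, duration: float) -> float:
    """Combine a predicted start time and duration into an end time."""
    return start + duration

def starts_before(start_a: float, start_b: float) -> bool:
    """Hypothesis 'A starts before B' holds when A's start precedes B's."""
    return start_a < start_b

def ends_after(start_a: float, dur_a: float,
               start_b: float, dur_b: float) -> bool:
    """Hypothesis 'A ends after B' holds when A's derived end exceeds B's."""
    return end_time(start_a, dur_a) > end_time(start_b, dur_b)

# Toy estimates (arbitrary units) for two events from the figure:
# "distracted" (a brief glance away) and "try" (trying to brake).
distracted_start, distracted_dur = 0.0, 1.0
try_start, try_dur = 2.0, 1.0

print(starts_before(distracted_start, try_start))
# True  -> "distracted starts before try" is entailed
print(ends_after(distracted_start, distracted_dur, try_start, try_dur))
# False -> "distracted ends after try" is contradicted
```

The point of the rule is that a model never has to predict end times directly: given distantly supervised estimates of starts and durations, end-time comparisons follow symbolically.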