Context-aware neural machine translation, a paradigm that leverages information beyond the current sentence to resolve inter-sentential discourse dependencies and improve document-level translation quality, has given rise to a number of recent techniques. However, despite well-reasoned intuitions, most context-aware translation models yield only modest improvements over sentence-level systems. In this work, we investigate and present several core challenges that impede progress within the field, relating to discourse phenomena, context usage, model architectures, and document-level evaluation. To address these problems, we propose a more realistic setting for document-level translation, called paragraph-to-paragraph (PARA2PARA) translation, and collect a new dataset of Chinese-English novels to promote future research.