Driving safety has been a concern since the first cars appeared on the streets, and driver inattention was singled out early on as a major cause of accidents. This is hardly surprising, as drivers routinely perform other tasks in addition to controlling the vehicle. Decades of research into what causes lapses or misdirection of drivers' attention have led to improvements in road safety through better design of infrastructure, driver training programs, and in-vehicle interfaces, and, more recently, through the development of advanced driver assistance systems (ADAS) and driving automation. This review focuses on methods for modeling and detecting the spatio-temporal aspects of drivers' attention, i.e., where and when they look, for the latter two categories of applications. We start with a brief theoretical background on human visual attention, methods for recording and measuring attention in the driving context, types of driver inattention, and the factors that cause it. We then discuss machine learning approaches for 1) modeling gaze for assistive and self-driving applications and 2) detecting gaze for driver monitoring. Following the overview of state-of-the-art models, we provide an extensive list of publicly available datasets that feature recordings of drivers' gaze and other attention-related annotations. We conclude with a general overview of the remaining challenges, such as data availability and quality, evaluation methods, and the limited scope of attention modeling, and outline steps toward rectifying some of these issues. Categorized and annotated lists of the reviewed models and datasets are available at https://github.com/ykotseruba/attention_and_driving