Some of the first ‘automated’ vehicles to be deployed on our roads will require a system of shared driving with a human driver. While this creates technical and operational challenges, the law must also facilitate the transfer of control between vehicle and driver. One method may be to obtain the driver’s consent to share operational responsibility and to delineate legal responsibility between vehicle and driver in the event of an accident. Consent is a voluntary agreement given by an individual who is aware of its potential consequences, including the risks. The driver of a partially automated vehicle must therefore be informed of those risks before consenting to share operational responsibility. This paper examines the inherent dangers of shared operational responsibility, in particular where the driver is requested to take back control from the automated vehicle during the journey. Drivers are likely to experience a delay in regaining situational awareness, making such operational transfers hazardous. It is argued that where an interactive digital interface is used to convey information such as driver responsibility, risk and legal terms, drivers may fail to process such communications sufficiently owing to fundamental weaknesses in human–machine interaction; an interactive digital interface alone may therefore be inadequate to communicate this information effectively. If these problems are not addressed, driver consent may prove inconsequential and fail to provide a predictable demarcation of legal responsibility between automated vehicles and drivers. Ongoing research into driver training for automated vehicles is considered as part of the preparation needed to design driver education that enables drivers to understand sufficiently the responsibilities involved in operating a partially automated vehicle, with implications for future driver training, licensing and certification.