Environments with sparse rewards and long horizons pose a significant challenge for current reinforcement learning algorithms. A key feature enabling humans to learn challenging control tasks is that they often receive expert intervention, which allows them to understand the high-level structure of the task before mastering low-level control actions. We propose a framework for leveraging expert intervention to solve long-horizon reinforcement learning tasks. We consider option templates, which are specifications encoding a potential option that can be trained using reinforcement learning. We formulate expert intervention as allowing the agent to execute option templates before learning an implementation. This enables the agent to use an option before committing costly resources to learning it. We evaluate our approach on three challenging reinforcement learning problems, showing that it outperforms state-of-the-art approaches by an order of magnitude. Project website: https://sites.google.com/view/stickymittens

Exploration is a key challenge in applying RL effectively to practical applications with high-dimensional state and action spaces. Intuitively, in such settings, complex sequences of actions are required to achieve any nonzero reward, which means that random exploration will take extremely long to find a nonzero reward signal. Thus, learning to improve performance can be very slow. Options are an RL tool to circumvent this problem (Sutton et al., 1999). Options are policies designed to achieve intermediate subgoals. For instance, in robot grasping tasks, an option might enable the robot to grasp a block, which