With the increasing availability of head-mounted displays (HMDs) that show immersive 360° VR content, it is important to understand to what extent these immersive experiences can evoke emotions. Typically, to collect emotion ground-truth labels, users rate videos through post-experience self-reports that are discrete in nature. However, post-stimulus self-reports are temporally imprecise, especially after watching 360° videos. In this work, we design six continuous emotion annotation techniques for the Oculus Rift HMD aimed at minimizing workload and distraction. Based on a co-design session with six experts, we contribute HaloLight and DotSize, two continuous annotation methods deemed unobtrusive and easy to understand. We discuss the next challenges in evaluating the usability of these techniques and the reliability of continuous annotations.