We propose a novel method for capturing high-speed, high dynamic range video with a single low-speed camera using a coded sampling technique. Traditional video cameras use a constant full-frame exposure time, which makes temporal super-resolution difficult due to the ill-posed nature of inverting the sampling operation. Our method samples at the same rate as a traditional low-speed camera but uses random per-pixel exposure times and offsets. By exploiting temporal and spatial redundancy in the video, we can reconstruct a high-speed video from the coded input. Furthermore, the varied exposure times in our sampling scheme allow us to obtain a higher dynamic range than a traditional camera or other temporal super-resolution methods. We validate our approach in simulation and provide a detailed discussion of how a hardware implementation could be realized. In particular, we believe that our approach can maintain 100% light throughput, similar to existing cameras, and can be implemented on a single chip, making it suitable for small form factors.
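To make the forward model concrete, the sketch below simulates the coded sampling described above: each pixel of a low-speed frame integrates the latent high-speed video over its own randomly chosen exposure window within the frame period. This is an illustrative assumption of how such a sampler might be simulated, not the authors' implementation; names such as `high_speed_video` and `subframes_per_frame` are hypothetical.

```python
import numpy as np

def coded_sample(high_speed_video, subframes_per_frame, rng=None):
    """Integrate a high-speed video (T, H, W) into low-speed coded frames.

    Each pixel gets a random exposure offset and duration (in sub-frames)
    within every low-speed frame period, i.e. a per-pixel shutter rather
    than a global one. Assumed sketch, not the paper's reference code.
    """
    rng = np.random.default_rng() if rng is None else rng
    T, H, W = high_speed_video.shape
    n_frames = T // subframes_per_frame

    # Random per-pixel exposure start and length (held fixed across frames here).
    starts = rng.integers(0, subframes_per_frame, size=(H, W))
    lengths = rng.integers(1, subframes_per_frame + 1, size=(H, W))
    ends = np.minimum(starts + lengths, subframes_per_frame)

    # Per-pixel temporal mask over one frame period: (S, H, W).
    t_idx = np.arange(subframes_per_frame)[:, None, None]
    mask = (t_idx >= starts) & (t_idx < ends)

    coded = np.zeros((n_frames, H, W), dtype=np.float64)
    for f in range(n_frames):
        block = high_speed_video[f * subframes_per_frame:(f + 1) * subframes_per_frame]
        coded[f] = (block * mask).sum(axis=0)  # integrate light over each pixel's window
    return coded, starts, lengths
```

Reconstruction would then amount to inverting this sampling operator under temporal and spatial redundancy priors; the differing per-pixel exposure lengths are also what provide the extended dynamic range, since short windows avoid saturation while long windows capture dim regions.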