(a) Previously described result. (b) Previously described result, gained. (c) Our result.

Fig. 1. We present a system that uses a novel combination of motion-adaptive burst capture, robust temporal denoising, learning-based white balance, and tone mapping to create high-quality photographs in low light on a handheld mobile device. Here we show a comparison of a photograph generated by the burst photography system described in (Hasinoff et al. 2016) and the system described in this paper, running on the same mobile camera. In this low-light setting (about 0.4 lux), the previous system generates an underexposed result (a). Brightening the image (b) reveals significant noise, especially chroma noise, which results in loss of detail and an unpleasantly blotchy appearance. Additionally, the colors of the face appear too orange. Our pipeline (c) produces detailed images by selecting a longer exposure time due to the low scene motion (in this setting, extended from 0.14 s to 0.33 s), robustly aligning and merging a larger number of frames (13 frames instead of 6), and reproducing colors reliably by training a model to predict white balance gains specifically in low light. Additionally, we apply local tone mapping that brightens the shadows without over-clipping highlights or sacrificing global contrast.

Taking photographs in low light using a mobile phone is challenging and rarely produces pleasing results. Aside from the physical limits imposed by read noise and photon shot noise, these cameras are typically handheld, have small apertures and sensors, use mass-produced analog electronics that cannot easily be cooled, and are commonly used to photograph subjects that move, like children and pets. In this paper we describe a system for capturing clean, sharp, colorful photographs in light as low as 0.3 lux, where human vision becomes monochromatic and indistinct. To permit handheld photography without flash illumination, we capture, align, and combine multiple frames.
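Combining multiple frames trades capture time for noise: in the shot-noise-limited regime, SNR grows with the square root of the total light gathered (frames times per-frame exposure). A small sketch of this arithmetic using the capture settings reported in Fig. 1 (the function name and interface are ours, not from the paper):

```python
import math

def relative_snr_gain(n_new, t_new, n_old, t_old):
    """Shot-noise-limited SNR scales with the square root of total
    light gathered, i.e. (number of frames) x (exposure per frame)."""
    return math.sqrt((n_new * t_new) / (n_old * t_old))

# Fig. 1 settings: 13 frames at 0.33 s vs. 6 frames at 0.14 s.
gain = relative_snr_gain(13, 0.33, 6, 0.14)  # roughly 2.3x SNR improvement
```

This back-of-the-envelope estimate ignores read noise and merge losses, so it is an upper bound on the benefit of the longer, deeper burst.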
Our system employs "motion metering", which uses an estimate of motion magnitudes (whether due to handshake or moving objects) to identify the number of frames and the per-frame exposure times that together minimize both noise and motion blur in a captured burst. We combine these frames using robust alignment and merging techniques that are specialized for high-noise imagery. To ensure accurate colors in such low light, we employ a learning-based auto white balancing algorithm. To prevent the photographs from looking like they were shot in daylight, we use tone mapping techniques inspired by illusionistic painting: increasing contrast, crushing shadows to black, and surrounding the scene with darkness. All of these processes are performed using the limited computational resources of a mobile device. Our system can be used by novice photographers to produce shareable pictures in a few seconds based on a single shutter press, even in environments so dim that humans cannot see clearly.
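The motion-metering idea can be illustrated with a toy heuristic: cap each exposure so that predicted blur (motion speed times exposure time) stays under a pixel budget, then fill a total capture budget with as many frames as fit. All names and thresholds below are our own illustrative assumptions, not the paper's actual algorithm:

```python
def motion_metering(motion_px_per_s, max_blur_px=2.0, max_exposure_s=0.33,
                    max_total_s=5.0, max_frames=13):
    """Toy heuristic (not the paper's method): pick a per-frame exposure
    and frame count from a scalar motion estimate.

    Longer exposures are allowed when the scene and hand move slowly;
    fast motion forces short exposures to limit blur.
    """
    t = min(max_exposure_s, max_blur_px / max(motion_px_per_s, 1e-6))
    n = min(max_frames, max(1, int(max_total_s // t)))
    return n, t

slow = motion_metering(6.0)   # low motion: long per-frame exposure
fast = motion_metering(40.0)  # high motion: short per-frame exposure
```

A real implementation would also account for read-noise floors, sensor exposure limits, and spatially varying motion, but the core trade-off (blur budget versus light gathered) is the same.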