Abstract. Since the seminal work of Mann and Picard in 1994, the standard way to build high dynamic range (hdr) images from regular cameras has been to combine a small number of photographs captured with different exposure times. The algorithms proposed in the literature differ in the strategy used to combine these frames. Several experimental studies comparing their performances have been reported, showing in particular that maximum likelihood estimation yields the best results in terms of mean squared error. However, no theoretical study aiming at establishing the performance limits of the hdr estimation problem has been conducted. Another common aspect of all hdr estimation approaches is that they discard saturated values. In this paper, we address these two issues. More precisely, we derive theoretical bounds for the hdr estimation problem, and we show that, even with a small number of photographs, the maximum likelihood estimator performs extremely close to these bounds. As a second contribution, we propose a general strategy to integrate the information provided by saturated pixels into the estimation process, hence improving the estimation results. Finally, we analyze the sensitivity of the hdr estimation process to camera parameters, and we show that small errors in the camera calibration process may severely degrade the estimation results.

Key words. high dynamic range imaging, irradiance estimation, exposure bracketing, multi-exposure fusion, camera acquisition model, noise modeling, censored data, exposure saturation, Cramér-Rao lower bound.

1. Introduction. The human eye has the ability to capture scenes of very high dynamic range, retaining details in both dark and bright regions. This is not the case for current standard digital cameras. Indeed, the limited capacity of the sensor cells makes it impossible to record the irradiance from very bright regions for long exposures. Pixels saturate, incurring information loss in the form of censored data.
On the other hand, if the exposure time is reduced in order to avoid saturation, very few photons will be captured in the dark regions and the result will be masked by the acquisition noise. Therefore, the result of a single-shot picture of a high dynamic range scene, taken with a regular digital camera, contains pixels which are either overexposed or too noisy.

High dynamic range imaging (hdr for short) is the field of imaging that seeks to accurately capture and represent scenes with the largest possible irradiance range. The representation problem of how to display an hdr image or irradiance map in a lower range image (for computer monitors or photographic prints) while retaining localized contrast, known as tone mapping, will not be addressed here. Due to technological and physical limitations of current optical sensors, nowadays the most common way to reach high irradiance dynamic ranges is by combining multiple low dynamic range photographs, acquired with different exposure times τ_1, τ_2, . . . , τ_T. Indeed, for a given irradiance C and expos...
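To make the multi-exposure combination concrete, the following is a minimal sketch of the standard estimation strategy the text describes: a per-pixel maximum likelihood estimate of the irradiance C from frames taken at exposure times τ_1, . . . , τ_T, discarding saturated (censored) samples. It assumes, for illustration only, a pure Poisson photon-counting model in which each raw value z_t has mean C·τ_t; under that assumption the MLE reduces to the ratio of sums below. The function name, the toy saturation level, and the noise model are this sketch's assumptions, not the paper's exact camera model.

```python
import numpy as np

def ml_irradiance(values, times, saturation):
    """Per-pixel ML irradiance estimate from multiple exposures.

    Illustrative assumption: each value z_t is a Poisson count with
    mean C * tau_t.  Under this model the MLE over the non-saturated
    frames is sum(z_t) / sum(tau_t); saturated samples are discarded,
    as in the standard approach described in the text.
    """
    values = np.asarray(values, dtype=float)
    times = np.asarray(times, dtype=float)
    valid = values < saturation        # drop censored (saturated) samples
    if not valid.any():
        return np.nan                  # pixel saturated in every frame
    return values[valid].sum() / times[valid].sum()

# Example: one pixel observed at three exposure times; the longest saturates.
z = [120, 240, 255]      # raw values; 255 is the (toy) saturation level
tau = [1.0, 2.0, 4.0]    # exposure times
print(ml_irradiance(z, tau, saturation=255))  # → 120.0 (uses only the first two frames)
```

Note that the saturated third frame contributes nothing here, which is exactly the information loss the paper's second contribution aims to recover by modeling censored values instead of discarding them.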