Computational photography is an emerging multidisciplinary field. Over the last two decades, it has drawn together research from computer vision, computer graphics, signal processing, applied optics, and related disciplines. Researchers are exploring new ways to overcome the limitations of traditional digital imaging, to the benefit of photographers, vision and graphics researchers, and image processing programmers. Building on the extensive effort across these associated fields, this paper describes and discusses the wide variety of issues raised by these new methods of photography. To give the reader a full picture of the voluminous literature on computational photography, we briefly review the broad range of topics in this new field, covering several aspects: (i) the elements of computational imaging systems and their new sampling and reconstruction mechanisms; (ii) the image properties that benefit from computational photography, e.g., depth of field and dynamic range; and (iii) the sampling subspaces of real-world visual scenes. Based on this systematic review of previous and ongoing work, we also discuss open issues and potential new directions in computational photography. This paper aims to introduce the reader to this new field, including its history, ultimate goals, hot topics, research methodologies, and future directions, and thus to provide a foundation for further research and related developments.