Photography usually requires optics in conjunction with a recording device (an image sensor). Eliminating the optics could lead to new form factors for cameras. Here, we report a simple demonstration of computational imaging with a bare CMOS image sensor. The technique relies on the space-variant point-spread functions that result from the interaction of light from a point source in the field of view with the image sensor. These space-variant point-spread functions are combined with a reconstruction algorithm to image simple objects displayed on a discrete LED array as well as on an LCD screen. We extended the approach to video imaging at the native frame rate of the sensor. Finally, we performed experiments to analyze the effect of the object distance. Improved sensor designs and reconstruction algorithms could lead to useful cameras without optics.

The optical systems of cameras in mobile devices typically constrain the overall thickness of the devices [1,2]. By eliminating the optics, it is possible to create ultra-thin cameras with interesting new form factors. Previous work in computational photography has eliminated the need for lenses by placing apertures in front of the image sensor [3-7] or by illuminating the sample coherently [8]. In the former case, the apertures create shadow patterns on the sensor from which the scene can be recovered computationally by solving a linear inverse problem [9]. The latter case requires coherent illumination, which is not generally applicable to imaging. In most instances, coded apertures have replaced the lenses. Microfabricated coded apertures have recently shown potential for thinner systems [4], with thicknesses on the order of millimeters. However, these apertures are absorbing and hence exhibit relatively low transmission efficiencies. Another method utilizes holographic phase masks integrated onto the image sensor, in conjunction with computation, to enable simple imaging [10,11]. In this case, precise microfabrication of the mask onto the sensor is required. Yet another computational camera utilizes a microlens array to form a large number of partial images of the scene, which are then numerically combined into a single image with computational refocusing [12-14]. Here, we report on a computational camera that consists of only a conventional image sensor and no other elements.

Our motivation for this camera is the recognition that all cameras rely on the fact that information about the object enters the aperture of the lens, the coded aperture, or the microlens array and is recorded by the image sensor. In the case of the coded aperture and the microlens array, numerical processing is performed to render the image for human consumption. If all optical elements are eliminated, information from the object is still recorded by the image sensor. If appropriate reconstruction algorithms are developed, an image can subsequently be recovered from the sensor data for human consumption. This is analogous to the multi-sensory compressive ...
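The reconstructions discussed above, for coded apertures as well as for the bare sensor, can be framed as a linear inverse problem: each point in the scene produces a characteristic space-variant pattern on the sensor, and a raw frame is a weighted sum of those patterns. The sketch below illustrates this formulation with a Tikhonov-regularized least-squares solver; the PSF matrix, the regularizer, and the array sizes are illustrative assumptions, not the specific calibration procedure or reconstruction algorithm used in this work.

```python
# Minimal sketch of lensless reconstruction as a linear inverse problem.
# Assumptions (not from the paper): the sensor response to each scene
# element (e.g., each LED in a calibration array) has been recorded and
# stored as a column of the matrix A; Tikhonov regularization is used as
# a generic stand-in for the reconstruction algorithm.
import numpy as np

def calibrate_psf_matrix(psf_stack):
    """Flatten a stack of calibration frames (one per scene element)
    into the columns of the forward-model matrix A."""
    n_elements = psf_stack.shape[0]
    return psf_stack.reshape(n_elements, -1).T  # shape: (n_pixels, n_elements)

def reconstruct(sensor_frame, A, reg=1e-3):
    """Recover scene intensities x from a raw sensor frame b by solving
    min_x ||A x - b||^2 + reg * ||x||^2 (Tikhonov-regularized least squares)."""
    b = sensor_frame.ravel()
    AtA = A.T @ A
    x = np.linalg.solve(AtA + reg * np.eye(AtA.shape[0]), A.T @ b)
    return np.clip(x, 0.0, None)  # intensities are non-negative

# Example with synthetic data: a 16x16 "LED array" scene and a 64x64 sensor.
rng = np.random.default_rng(0)
psf_stack = rng.random((16 * 16, 64, 64))               # calibrated space-variant PSFs
A = calibrate_psf_matrix(psf_stack)
scene = np.zeros(16 * 16)
scene[[40, 120, 200]] = 1.0                             # three lit LEDs
sensor_frame = (A @ scene).reshape(64, 64)              # simulated raw frame
recovered = reconstruct(sensor_frame, A).reshape(16, 16)
```

The direct regularized solve shown here is only practical for small scenes such as a discrete LED array; for larger scenes or video, an iterative solver (e.g., conjugate gradients) operating on the same forward model would be the more natural choice.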