Pixel count is the ratio of the solid angle within a camera's field of view to the solid angle covered by a single detector element. Because the size of the smallest resolvable pixel is inversely proportional to aperture diameter and the maximum field of view is scale independent, the diffraction-limited pixel count is proportional to aperture area. At present, digital cameras operate near the fundamental limit of 1-10 megapixels for millimetre-scale apertures, but few approach the corresponding limits of 1-100 gigapixels for centimetre-scale apertures. Barriers to high-pixel-count imaging include scale-dependent geometric aberrations, the cost and complexity of gigapixel sensor arrays, and the computational and communications challenge of gigapixel image management. Here we describe the AWARE-2 camera, which uses a 16-mm entrance aperture to capture snapshot, one-gigapixel images at three frames per minute. AWARE-2 uses a parallel array of microcameras to reduce the problems of gigapixel imaging to those of megapixel imaging, which are more tractable. In cameras of conventional design, lens speed and field of view decrease as lens scale increases, but with the experimental system described here we confirm previous theoretical results suggesting that lens speed and field of view can be scale independent in microcamera-based imagers resolving up to 50 gigapixels. Ubiquitous gigapixel cameras may transform the central challenge of photography from the question of where to point the camera to that of how to mine the data.
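The scaling argument above can be made concrete with a back-of-envelope calculation. The sketch below is only an illustration, not the authors' model: it assumes a Rayleigh-type smallest resolvable angle of roughly 1.22 λ/D, green light, and a fixed wide field of view (120° is an assumed value), and shows how the diffraction-limited pixel count grows with the square of the aperture diameter.

```python
import math

def diffraction_limited_pixel_count(aperture_m, fov_deg=120.0, wavelength_m=550e-9):
    """Order-of-magnitude diffraction-limited pixel count for a camera.

    Assumes a Rayleigh-type smallest resolvable angle ~1.22*lambda/D and
    approximates both the field of view and a single pixel as square in angle.
    """
    pixel_angle = 1.22 * wavelength_m / aperture_m   # smallest resolvable angle, radians
    fov_angle = math.radians(fov_deg)                # field of view, radians
    return (fov_angle / pixel_angle) ** 2            # pixels across the field, squared

for d in (1e-3, 16e-3, 50e-3):  # 1 mm, 16 mm (AWARE-2 entrance aperture), 50 mm
    n = diffraction_limited_pixel_count(d)
    print(f"D = {d*1e3:5.1f} mm -> ~{n/1e6:,.0f} megapixels")
```

Under these assumptions a millimetre-scale aperture lands near 10 megapixels while centimetre-scale apertures land in the gigapixel range, reproducing the quadratic aperture-area scaling quoted in the abstract.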
The morphology of three-dimensional foams is of interest to physicists, engineers, and mathematicians, and imaging the three-dimensional structure of a foam is a long-standing goal. Many techniques have been used to image foams, including magnetic resonance imaging and short-focal-length lenses. We use a camera and apply tomographic algorithms to accurately image a set of bubbles, correcting for the distortion introduced by a curved plexiglas container using ray-tracing.
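The refraction correction alluded to here can be illustrated with Snell's law at the container wall. The fragment below is a hedged sketch, not the authors' code: it assumes a cylindrical plexiglas wall of known radius centred at the origin, a single refracting surface, and a refractive index of about 1.49, and bends a 2-D ray at the point where it meets that surface.

```python
import numpy as np

N_PLEXIGLAS = 1.49   # assumed refractive index of the container wall
N_AIR = 1.00

def refract_at_cylinder(origin, direction, radius):
    """Bend a 2-D ray entering a cylindrical wall centred at the origin.

    Returns the entry point and the refracted direction (Snell's law in
    vector form). Illustrative single-surface sketch only; a full correction
    would also trace the inner wall surface and the liquid inside.
    """
    d = direction / np.linalg.norm(direction)
    # Solve |origin + t*d| = radius for the nearest positive t.
    b = 2.0 * np.dot(origin, d)
    c = np.dot(origin, origin) - radius**2
    disc = b*b - 4.0*c
    if disc < 0:
        return None, None                      # ray misses the cylinder
    t = (-b - np.sqrt(disc)) / 2.0
    if t <= 0:
        t = (-b + np.sqrt(disc)) / 2.0
    hit = origin + t * d
    normal = hit / np.linalg.norm(hit)         # outward normal, faces the incoming ray
    # Snell's law: t = eta*i + (eta*cos_i - cos_t)*n
    eta = N_AIR / N_PLEXIGLAS
    cos_i = -np.dot(d, normal)
    sin2_t = eta**2 * (1.0 - cos_i**2)
    refracted = eta * d + (eta * cos_i - np.sqrt(1.0 - sin2_t)) * normal
    return hit, refracted / np.linalg.norm(refracted)

hit, new_dir = refract_at_cylinder(np.array([-0.3, 0.02]), np.array([1.0, 0.0]), 0.1)
print(hit, new_dir)   # ray bends toward the surface normal as it enters the wall
```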
We recently implemented a heterogeneous network of infrared motion detectors and an infrared camera for the detection, localization, tracking, and identification of human targets. The network integrates dense deployments of low-cost motion sensors for target tracking with sparse deployments of image sensors for target registration. Such networks can be used in tactical applications for local and distributed perimeter and site security. Rapid deployments for crisis management may be of particular interest. This paper focuses particularly on the needs of applications that deal with relatively dense and complex source fields, such as crowds moving through sensor spaces.
System requirements for many military electro-optic and IR camera systems reflect the need for both wide-field-of-view situational awareness and high-resolution imaging for target identification. In this work we present a new imaging system architecture designed to perform both functions simultaneously, with the AWARE 10 camera as an example at visible wavelengths. We first describe the basic system architecture and user interface, followed by a laboratory characterization of the system's optical performance. We then describe a field experiment in which the camera was used to identify several maritime targets at varying range. The experimental results indicate that users of the system are able to correctly identify ~10 m targets at ranges between 4 and 6 km with 70% accuracy.
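The reported identification performance can be put in rough context with a resolution estimate. The snippet below is a hedged back-of-envelope, not part of the published analysis: it computes the angle subtended by an assumed ~10 m target at 4-6 km and, using the classical Johnson criterion of roughly 6 resolved cycles for identification, the per-pixel angular resolution a camera would need to support that task.

```python
import math

TARGET_SIZE_M = 10.0        # assumed critical dimension of the maritime targets
JOHNSON_ID_CYCLES = 6.0     # classical Johnson criterion, ~50% identification probability

for range_km in (4.0, 5.0, 6.0):
    range_m = range_km * 1e3
    subtense_mrad = TARGET_SIZE_M / range_m * 1e3    # small-angle approximation
    cycle_mrad = subtense_mrad / JOHNSON_ID_CYCLES   # angular size of one resolved cycle
    pixel_mrad = cycle_mrad / 2.0                    # ~2 pixels per cycle (Nyquist)
    print(f"{range_km:.0f} km: target subtends {subtense_mrad:.2f} mrad, "
          f"needs ~{pixel_mrad*1e3:.0f} urad per pixel for identification")
```

At 6 km the target subtends under 2 mrad, so identification under this criterion calls for an instantaneous angular resolution on the order of 100-200 µrad per pixel, which is the regime a wide-field multiscale camera must reach across its full field.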