Introduction

Image acquisition with a CCD camera is a single-button activity: after selecting the exposure time and adjusting the illumination, a button is pressed and the acquired image is perceived as the final, unmodified proof of what was seen in the microscope. It is therefore generally assumed that image-processing steps such as dark-current correction and gain normalization do not alter the information content of the image, but merely eliminate unwanted artifacts. Image quality is thus defined, among a long list of other parameters, by the dynamic range of the CCD camera and by the maximum allowable exposure time, which is limited by sample drift (ignoring sample damage). Although most microscopists are satisfied with present, standard image quality, we found that it is relatively easy to improve on existing routines in at least two respects [1], both of which are sketched in code below:

1. Suppression of lateral image drift during acquisition by using significantly shorter exposure times with a plurality of exposures (a 3D data set).
2. Improvement of the signal-to-noise ratio by averaging over the resulting data set, thereby exceeding the dynamic range of the camera.
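As an illustration of both points, here is a minimal NumPy sketch (not the implementation of Ref. [1]; the function name, the integer-pixel registration, and the choice of the first frame as reference are simplifying assumptions): each short exposure in a stack is registered to the first frame by phase correlation, and the aligned frames are accumulated in floating point, so the combined signal is no longer bounded by the dynamic range of a single CCD exposure.

```python
import numpy as np

def align_and_average(stack):
    """Register a stack of short exposures to the first frame and average them.

    `stack` is a (K, M, N) array of K short exposures.  Each frame is
    aligned by integer-pixel phase correlation, removing lateral drift
    between exposures; accumulating in float64 lets the combined signal
    exceed the dynamic range of any single CCD frame.
    """
    ref = np.fft.fft2(stack[0])
    total = stack[0].astype(np.float64).copy()
    for frame in stack[1:]:
        f = np.fft.fft2(frame)
        # The peak of the inverse FFT of the normalized cross-power
        # spectrum marks the drift of this frame relative to the reference.
        cross = ref * np.conj(f)
        cross /= np.abs(cross) + 1e-12
        corr = np.fft.ifft2(cross).real
        dy, dx = np.unravel_index(np.argmax(corr), corr.shape)
        # Interpret peaks beyond half the frame size as negative drift.
        if dy > frame.shape[0] // 2:
            dy -= frame.shape[0]
        if dx > frame.shape[1] // 2:
            dx -= frame.shape[1]
        total += np.roll(frame, (dy, dx), axis=(0, 1))
    return total / len(stack)   # drop the division to keep the raw sum
```

With, say, 16 exposures of one sixteenth the usual exposure time, drift within each individual frame becomes negligible, and the residual frame-to-frame drift is removed by the registration step before averaging.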
Background

Images recorded in an electron microscope are usually bandwidth-limited. This can be verified easily by taking the Fourier transform (FFT) of a given image and checking that no structures contribute visibly at the outer edges of the transform, i.e., at the Nyquist limit [2]. Otherwise the image may contain aliasing artifacts and should be discarded if in doubt. In the following, it will be assumed that the acquired images are of reasonable quality and thus inherently bandwidth-limited at the Nyquist limit.

As a reminder, a digital image consists of M×N, usually square, pixels with distance d between consecutive pixels in the x- and y-directions. Most importantly, each pixel represents a point and not an image area! This little detail is sometimes conveniently overlooked (mostly without causing any damage), but it should be remembered whenever the relation between the image intensity I(x,y), as a function of the x- and y-coordinates, and the pixel values P_{m,n}, which represent the intensity function at selected points with coordinates (m,n), needs to be considered.
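The point-sample view just described can be made precise. For an image that is ideally bandwidth-limited at the Nyquist limit, the standard Whittaker-Shannon sampling theorem (a textbook result, not spelled out in the original) reconstructs the continuous intensity exactly from the point samples P_{m,n} = I(m·d, n·d):

$$
I(x,y) \;=\; \sum_{m}\sum_{n} P_{m,n}\,
\operatorname{sinc}\!\left(\frac{x - m d}{d}\right)
\operatorname{sinc}\!\left(\frac{y - n d}{d}\right),
\qquad
\operatorname{sinc}(u) = \frac{\sin(\pi u)}{\pi u}.
$$

In other words, nothing is lost by treating each pixel as a point sample, provided the bandwidth limit actually holds.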
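Returning to the bandwidth check at the start of this section, the following sketch (the function name and the width of the edge band are illustrative choices, not from the original) measures the fraction of spectral power in a narrow band at the outer edge of the centered power spectrum; for a properly bandwidth-limited image this fraction stays small.

```python
import numpy as np

def nyquist_edge_fraction(image, edge_width=8):
    """Ratio of mean spectral power near the Nyquist limit to the mean
    power of the whole spectrum (DC term excluded).  Values near zero
    indicate a bandwidth-limited image; values approaching 1 suggest
    visible structure at the Nyquist limit and possible artifacts.
    """
    power = np.abs(np.fft.fftshift(np.fft.fft2(image))) ** 2
    m, n = power.shape
    power[m // 2, n // 2] = 0.0          # suppress the dominant DC term
    edge = np.ones((m, n), dtype=bool)   # band of pixels along the edges
    edge[edge_width:m - edge_width, edge_width:n - edge_width] = False
    return power[edge].mean() / power.mean()
```

This merely automates the visual inspection of the FFT described above; images whose edge fraction is not clearly small should be treated with the same suspicion as an FFT showing structure at its outer edges.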