This paper presents a tutorial overview of neural networks as signal processing tools for image compression. Their massively parallel and distributed architecture makes them well suited to this problem, and their characteristics are analogous to some of the features of our own visual system, which allow us to process visual information with ease. For example, multilayer perceptrons can be used as nonlinear predictors in differential pulse-code modulation (DPCM); such predictors have been shown to increase the prediction gain relative to a linear predictor. Another active area of research is the application of Hebbian learning to the extraction of principal components, which are the basis vectors for the optimal linear Karhunen-Loève transform (KLT). These learning algorithms are iterative, offer computational advantages over standard eigendecomposition techniques, and can be made to adapt to changes in the input signal. Yet another model, the self-organizing feature map (SOFM), has been used successfully in the design of codebooks for vector quantization (VQ). The resulting codebooks are less sensitive to initial conditions than those produced by the standard Linde-Buzo-Gray (LBG) algorithm, and the topological ordering of their entries can be exploited to further increase coding efficiency and reduce computational complexity.
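
As a concrete illustration of the Hebbian approach to principal component extraction mentioned above, the following Python sketch applies Oja's single-unit rule to synthetic two-dimensional data and compares the learned weight vector with the leading eigenvector of the sample covariance. The data, learning rate, and number of samples are assumptions chosen for illustration only and are not taken from the paper.

```python
import numpy as np

# Sketch of Oja's Hebbian rule for extracting the first principal component
# of zero-mean inputs (illustrative assumptions, not the paper's experiment).
rng = np.random.default_rng(0)

# Synthetic 2-D data with one dominant direction (assumed covariance).
cov = np.array([[3.0, 1.0],
                [1.0, 1.0]])
X = rng.multivariate_normal(mean=[0.0, 0.0], cov=cov, size=5000)

w = rng.normal(size=2)          # random initial weight vector
w /= np.linalg.norm(w)
eta = 0.01                      # learning rate (assumed)

for x in X:
    y = w @ x                   # output of the single linear unit
    w += eta * y * (x - y * w)  # Oja's rule: Hebbian term plus implicit normalization

# The weight vector converges (up to sign) toward the leading eigenvector of
# the input covariance, i.e. the first Karhunen-Loève basis vector.
eigvals, eigvecs = np.linalg.eigh(np.cov(X.T))
print("Oja weight vector:        ", w / np.linalg.norm(w))
print("Leading eigenvector (KLT):", eigvecs[:, -1])
```

The appeal noted in the text is visible here: the update is purely iterative and sample-by-sample, so it avoids forming and diagonalizing the full covariance matrix and can continue to track the dominant direction if the input statistics drift.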