In spite of the advances accomplished throughout the last decades, automatic speech recognition (ASR) is still a challenging and difficult task. In particular, recognition systems based on hidden Markov models (HMMs) are effective under many circumstances, but do suffer from some major limitations that restrict the applicability of ASR technology in real-world environments. Attempts were made to overcome these limitations with the adoption of artificial neural networks (ANNs) as an alternative paradigm for ASR, but ANNs were unsuccessful in dealing with long time-sequences of speech signals. Between the end of the 1980s and the beginning of the 1990s, some researchers began exploring a new research area by combining HMMs and ANNs within a single, hybrid architecture. The goal in hybrid systems for ASR is to take advantage of the properties of both HMMs and ANNs, improving flexibility and recognition performance. A variety of different architectures and novel training algorithms have been proposed in the literature. This paper reviews a number of significant hybrid models for ASR, bringing together approaches and techniques from a highly specialized and non-homogeneous literature. Efforts concentrate on describing and referencing architectures and algorithms, their advantages and limitations, as well as on categorizing them into broad classes. Early attempts to emulate HMMs by ANNs are first described. We then focus on the use of ANNs to estimate the posterior probabilities of the states of an HMM, and on "global" optimization, where a single, overall training criterion is defined over the HMM and the ANNs. Connectionist vector quantization for discrete HMMs, as well as other more recent approaches, is also reviewed. It is pointed out that, in addition to their theoretical interest, hybrid systems have allowed for tangible improvements in recognition performance over standard HMMs on difficult and significant benchmark tasks.