Functional and mechanistic comparisons are made between several network models of cognitive processing: competitive learning, interactive activation, adaptive resonance, and back propagation. The starting point of this comparison is the article of Rumelhart and Zipser (1985) on feature discovery through competitive learning. All the models which Rumelhart and Zipser (1985) have described were shown in Grossberg (1976b) to exhibit a type of learning which is temporally unstable. Competitive learning mechanisms can be stabilized in response to an arbitrary input environment by being supplemented with mechanisms for learning top-down expectancies, or templates; for matching bottom-up input patterns with the top-down expectancies; and for releasing orienting reactions in a mismatch situation, thereby updating short-term memory and searching for another internal representation. Network architectures which embody all of these mechanisms were called adaptive resonance models by Grossberg (1976c). Self-stabilizing learning models are candidates for use in real-world applications where unpredictable changes can occur in complex input environments. Competitive learning postulates are inconsistent with the postulates of the interactive activation model of McClelland and Rumelhart (1981), and suggest different levels of processing and interaction rules for the analysis of word recognition. Adaptive resonance models use these alternative levels and interaction rules. The self-organizing learning of an adaptive resonance model is compared and contrasted with the teacher-directed learning of a back propagation model. A number of criteria for evaluating real-time network models of cognitive processing are described and applied.
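
To make the stabilization mechanisms summarized above concrete, the sketch below implements an ART-1-style search cycle for binary inputs: learned top-down templates (expectancies), a bottom-up/top-down match test against a vigilance threshold, and a reset that continues the search when the match fails. This is a minimal illustration following common ART-1 expositions, not the full adaptive resonance model analyzed in the paper; the class name and the vigilance and choice parameters (rho, alpha) are assumptions of this sketch.

```python
import numpy as np

class ART1Sketch:
    """Minimal ART-1-style clustering sketch for binary input vectors.

    Illustrates the three mechanisms named in the abstract: top-down
    templates (expectancies), bottom-up/top-down matching, and a
    reset-driven search when the match falls below vigilance.
    """

    def __init__(self, n_features, vigilance=0.75, alpha=0.001):
        self.n = n_features
        self.rho = vigilance        # match (vigilance) criterion
        self.alpha = alpha          # choice (tie-breaking) parameter
        self.templates = []         # learned top-down expectancies

    def _choice(self, x, w):
        # Bottom-up activation of a category node.
        return np.sum(np.minimum(x, w)) / (self.alpha + np.sum(w))

    def _match(self, x, w):
        # Degree to which the top-down template matches the input.
        return np.sum(np.minimum(x, w)) / max(np.sum(x), 1.0)

    def present(self, x):
        """Run one search cycle; return the index of the resonating category."""
        x = np.asarray(x, dtype=float)
        order = sorted(range(len(self.templates)),
                       key=lambda j: self._choice(x, self.templates[j]),
                       reverse=True)
        for j in order:
            if self._match(x, self.templates[j]) >= self.rho:
                # Resonance: refine the template (fast learning).
                self.templates[j] = np.minimum(x, self.templates[j])
                return j
            # Mismatch: the "orienting reaction" resets this node
            # and the search moves on to the next candidate.
        # No existing category matches closely enough: commit a new one.
        self.templates.append(x.copy())
        return len(self.templates) - 1


# Example: two dissimilar binary patterns end up in separate categories,
# while a pattern close to the first template resonates with it.
net = ART1Sketch(n_features=6, vigilance=0.8)
print(net.present([1, 1, 1, 0, 0, 0]))  # -> 0 (new category committed)
print(net.present([0, 0, 0, 1, 1, 1]))  # -> 1 (mismatch triggers search, new category)
print(net.present([1, 1, 0, 0, 0, 0]))  # -> 0 (match ratio 1.0 >= vigilance)
```

Because learning only occurs when resonance is reached, templates are never overwritten by mismatched inputs; this is the self-stabilizing property the abstract contrasts with unsupervised competitive learning and with teacher-directed back propagation.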