Animal learning researchers have argued that one example of a linearly nonseparable problem is negative patterning, and they have therefore used more complicated multilayer networks to study this kind of discrimination learning. However, this paper shows that previous attempts to pose negative patterning problems to artificial neural networks have specified the problem in such a way that it is much simpler than intended. The simulations described in this paper correct this by adding a "null" pattern to the training sets, making negative patterning problems truly nonseparable and thus requiring a network more powerful than a perceptron. We show that with the elaborated training set, a hybrid multilayer network that treats reinforced patterns differently than nonreinforced patterns generates results more similar to those observed by Delamater, Sosa, and Katz in animal experiments than do traditional multilayer networks.
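To make the separability point concrete, the following sketch (not taken from the paper; the learning rate, epoch count, and binary pattern encoding are illustrative assumptions) trains a classic perceptron on negative patterning with and without the null pattern. Without the null pattern, the three stimuli A+, B+, and AB- can be separated by a single threshold; adding the never-reinforced null pattern turns the task into XOR, which no single-layer perceptron can solve.

```python
# Illustrative sketch (not the authors' code): a Rosenblatt perceptron applied
# to negative patterning, with and without the "null" pattern discussed above.
# Elements A and B are binary input features; AB is their compound.
import numpy as np

def train_perceptron(patterns, targets, epochs=1000, lr=0.1):
    """Train a perceptron with a bias unit; return weights and final accuracy."""
    X = np.hstack([patterns, np.ones((len(patterns), 1))])  # append bias input
    w = np.zeros(X.shape[1])
    for _ in range(epochs):
        for x, t in zip(X, targets):
            y = 1.0 if x @ w > 0 else 0.0
            w += lr * (t - y) * x  # perceptron learning rule
    preds = (X @ w > 0).astype(float)
    return w, np.mean(preds == targets)

# Negative patterning as usually specified: A+, B+, AB-
X3 = np.array([[1, 0], [0, 1], [1, 1]], dtype=float)
t3 = np.array([1.0, 1.0, 0.0])
_, acc3 = train_perceptron(X3, t3)
print(f"Without null pattern: accuracy = {acc3:.2f}")  # reaches 1.00: separable

# Adding the null pattern (no elements -> no reinforcement) yields XOR,
# which is linearly nonseparable, so accuracy can never reach 1.00.
X4 = np.vstack([X3, [0, 0]])
t4 = np.append(t3, 0.0)
_, acc4 = train_perceptron(X4, t4)
print(f"With null pattern:    accuracy = {acc4:.2f}")
```

On the three-pattern set the perceptron convergence theorem guarantees a solution (for example, two small negative input weights with a positive bias); on the four-pattern set the weights cycle indefinitely, which is exactly why a more powerful multilayer network is required.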
An artificial neural network was trained to classify musical chords into four categories—major, dominant seventh, minor, or diminished seventh—independent of musical key. After training, the internal structure of the network was analyzed in order to determine the representations that the network was using to classify chords. It was found that the first layer of connection weights in the network converted the local representations of input notes into distributed representations that could be described in musical terms as circles of major thirds and circles of major seconds. The hidden units were then able to use this representation to organize stimuli geometrically into a simple space that was easily partitioned by output units to classify the stimuli. This illustrates one potential contribution of artificial neural networks to cognitive informatics: the discovery of novel forms of representation in systems that can accomplish intelligent tasks.
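As a concrete illustration of this train-then-analyze methodology, the sketch below is not the network from the study: the 12-unit pitch-class input encoding, the six hidden units, and the training regime are all assumptions made for illustration. It builds the 48 chord stimuli (four chord types in all 12 keys), trains a small multilayer network by backpropagation, and then prints the first layer of connection weights for inspection.

```python
# A minimal sketch of the chord-classification task; layer sizes, input
# encoding, and training details are assumptions, not the study's network.
import numpy as np

rng = np.random.default_rng(0)

# Chord types as semitone intervals above the root.
CHORD_TYPES = {
    "major":              (0, 4, 7),
    "dominant seventh":   (0, 4, 7, 10),
    "minor":              (0, 3, 7),
    "diminished seventh": (0, 3, 6, 9),
}

# Build all 48 stimuli: each chord type in all 12 keys, encoded locally as a
# 12-unit binary pitch-class vector.
X, T = [], []
for label, intervals in enumerate(CHORD_TYPES.values()):
    for root in range(12):
        x = np.zeros(12)
        x[[(root + i) % 12 for i in intervals]] = 1.0
        X.append(x)
        t = np.zeros(4)
        t[label] = 1.0
        T.append(t)
X, T = np.array(X), np.array(T)

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

# Small multilayer network: 12 inputs -> 6 hidden units -> 4 outputs.
W1 = rng.normal(0, 0.5, (12, 6)); b1 = np.zeros(6)
W2 = rng.normal(0, 0.5, (6, 4));  b2 = np.zeros(4)

lr = 0.5
for epoch in range(5000):
    H = sigmoid(X @ W1 + b1)          # hidden-unit activations
    Y = sigmoid(H @ W2 + b2)          # output-unit activations
    dY = (Y - T) * Y * (1 - Y)        # output error (squared-error gradient)
    dH = (dY @ W2.T) * H * (1 - H)    # backpropagated hidden error
    W2 -= lr * H.T @ dY / len(X); b2 -= lr * dY.mean(axis=0)
    W1 -= lr * X.T @ dH / len(X); b1 -= lr * dH.mean(axis=0)

acc = np.mean(Y.argmax(axis=1) == T.argmax(axis=1))
print(f"training accuracy: {acc:.2f}")

# Analysis step: inspect the first layer of weights for structure such as the
# circles of major thirds and major seconds reported above.
print(np.round(W1, 2))
```

In an analysis like the one described in the abstract, one would look for regularities in those first-layer weights, for example, pitch classes that lie on the same circle of major thirds receiving near-identical connection weights to a given hidden unit.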
Cognitive informatics is a field of research that is primarily concerned with the information processing of intelligent agents; it can be characterised in terms of an evolving notion of information (Wang, 2007). When this notion originated six decades ago, conventional accounts of information were concerned with using probability theory and statistics to measure the amount of information carried by an external signal. This, in turn, developed into the notion of modern informatics, which studied information as “properties or attributes of the natural world that can be generally abstracted, quantitatively represented, and mentally processed” (Wang, 2007, p. iii). The current incarnation of cognitive informatics recognised that both information theory and modern informatics defined information in terms of factors that were external to brains, and has replaced this with an emphasis on exploring information as an internal property. This emphasis on the internal processing of information raises fundamental questions about how such information can be represented. One approach to answering such questions, and to proposing new representational accounts, is to train a brain-like system to perform an intelligent task, and then to analyse its internal structure to determine the types of representations that the system had developed to perform this intelligent behaviour. The logic behind this approach is that when artificial neural networks learn to perform a task, they are not constrained to use pre-specified representations, and so the representations they do develop may constitute novel forms that can be revealed by such analysis.