This paper focuses on the evolution of Fuzzy ARTMAP neural network classifiers using genetic algorithms, with the objective of improving generalization performance (classification accuracy of the ART network on unseen test data) and alleviating the ART category proliferation problem (the creation of more categories than necessary to solve a classification problem). We refer to the resulting architecture as GFAM. We demonstrate through extensive experimentation that GFAM exhibits good generalization and small size (it creates few ART categories), while consuming reasonable computational effort. In a number of classification problems, GFAM produces the optimal classifier. Furthermore, we compare the performance of GFAM with that of other competitive ARTMAP classifiers that have been proposed in the literature to address the category proliferation problem in ART. We show that GFAM achieves better results than these architectures, as well as other competitive classifiers.
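The core idea of evolving classifiers under a fitness that rewards accuracy while penalizing category proliferation can be sketched with a toy genetic algorithm. This is a minimal illustration, not the paper's actual GFAM procedure: here each individual is a bit mask selecting a subset of candidate categories (nearest-prototype rules standing in for ART categories), and all names, data, and the 0.05 penalty weight are assumptions made for the example.

```python
import random

random.seed(0)

# Toy stand-in for trained ART categories: (prototype point, class label) pairs.
POOL = [((0.2, 0.2), 0), ((0.8, 0.8), 1), ((0.5, 0.5), 0), ((0.9, 0.1), 1)]
# Toy labeled data used to score candidate classifiers.
DATA = [((0.1, 0.3), 0), ((0.7, 0.9), 1), ((0.25, 0.15), 0), ((0.85, 0.75), 1)]

def classify(categories, x):
    """Predict with the nearest selected prototype; default to 0 if none."""
    if not categories:
        return 0
    best = min(categories,
               key=lambda c: (c[0][0] - x[0]) ** 2 + (c[0][1] - x[1]) ** 2)
    return best[1]

def fitness(mask):
    """Accuracy minus a penalty on category count (weight 0.05 is arbitrary)."""
    cats = [POOL[i] for i, bit in enumerate(mask) if bit]
    acc = sum(classify(cats, x) == y for x, y in DATA) / len(DATA)
    return acc - 0.05 * len(cats)

def evolve(pop_size=20, gens=30):
    """Simple elitist GA: keep the top half, refill with crossover + mutation."""
    pop = [[random.randint(0, 1) for _ in POOL] for _ in range(pop_size)]
    for _ in range(gens):
        pop.sort(key=fitness, reverse=True)
        parents = pop[: pop_size // 2]
        children = []
        for _ in range(pop_size - len(parents)):
            a, b = random.sample(parents, 2)
            cut = random.randrange(1, len(POOL))  # one-point crossover
            child = a[:cut] + b[cut:]
            if random.random() < 0.1:
                child[random.randrange(len(POOL))] ^= 1  # flip a category on/off
            children.append(child)
        pop = parents + children
    return max(pop, key=fitness)

best = evolve()
```

On this toy problem the best attainable fitness is 1.0 accuracy with two categories (fitness 0.9), so the penalty term steers the search toward small, accurate classifiers, mirroring the two objectives pursued by GFAM.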