Artificial neural networks (ANNs) are comparatively straightforward to understand and use in the analysis of scientific data. However, this relative transparency may encourage their use in an uncritical, and therefore possibly unproductive, fashion. The geometry of a network is among the most crucial factors in the successful deployment of network tools; in this review, we cover methods that can be used to determine optimum or near-optimum geometries. These methods of determining neural network architecture include the following: (i) trial and error, in which architectures chosen semirandomly are tested and modified by the user; (ii) empirical or statistical methods, in which an ANN's internal parameters are adjusted based on the model's performance; (iii) hybrid methods, such as fuzzy inference; (iv) constructive and/or pruning algorithms, which respectively add or remove neurons or weights from an initial architecture, based on a predefined link between architecture and ANN performance; (v) evolutionary strategies, which search the topology space using genetic operators to vary the neural network parameters. Several case studies illustrate the development of neural network models for applications in chemistry and chemical engineering.
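To make strategy (v) concrete, the sketch below shows a toy genetic search over topology encodings (lists of hidden-layer sizes), with mutation, one-point crossover, and truncation selection. It is a minimal illustration, not a method from the review: the function names (`evolve`, `toy_fitness`, etc.) are our own, and the fitness function is a stand-in for what would, in practice, be the validation error of a network trained with each candidate geometry.

```python
import random

def random_topology(max_layers=3, max_neurons=16):
    """Sample a candidate geometry: a list of hidden-layer sizes."""
    return [random.randint(1, max_neurons)
            for _ in range(random.randint(1, max_layers))]

def mutate(topology, max_neurons=16):
    """Genetic operator: perturb the width of one randomly chosen layer."""
    t = topology[:]
    i = random.randrange(len(t))
    t[i] = max(1, min(max_neurons, t[i] + random.choice([-2, -1, 1, 2])))
    return t

def crossover(a, b):
    """Genetic operator: one-point crossover of two layer-size encodings."""
    cut_a = random.randint(0, len(a))
    cut_b = random.randint(0, len(b))
    child = a[:cut_a] + b[cut_b:]
    return child or [a[0]]  # guard against an empty child

def evolve(fitness, generations=30, pop_size=20, seed=0):
    """Search the topology space, keeping the fittest geometries."""
    random.seed(seed)
    pop = [random_topology() for _ in range(pop_size)]
    for _ in range(generations):
        pop.sort(key=fitness)            # lower fitness = better
        survivors = pop[:pop_size // 2]  # truncation selection
        children = []
        while len(survivors) + len(children) < pop_size:
            p1, p2 = random.sample(survivors, 2)
            children.append(mutate(crossover(p1, p2)))
        pop = survivors + children
    return min(pop, key=fitness)

# Stand-in fitness: in a real application this would be the validation
# error of a trained ANN; here we pretend the "ideal" geometry is [8, 4].
def toy_fitness(topology):
    target = [8, 4]
    return (abs(len(topology) - len(target)) * 10
            + sum(abs(x - y) for x, y in zip(topology, target)))

best = evolve(toy_fitness)
```

The same loop structure applies to the other strategies in the list: a pruning algorithm would replace the genetic operators with a rule that deletes low-contribution neurons or weights, and trial and error corresponds to running only the random-sampling step with a human in the loop.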