The focus of this paper is on the neural network modelling approach that has gained increasing recognition in GIScience in recent years. The novelty of neural networks lies in their ability to model non-linear processes with few, if any, a priori assumptions about the nature of the data-generating process. The paper discusses some important issues that are central to successful application development. The scope is limited to feedforward neural networks, the leading class of neural network models. It is argued that failures in applications can usually be attributed to inadequate learning and/or inadequate complexity of the network model. Parameter estimation and a suitably chosen number of hidden units are thus of crucial importance for the success of real-world neural network applications. The paper views network learning as an optimization problem, reviews two alternative approaches to network learning, and provides insights into current best practice for optimizing model complexity so as to achieve good generalization performance.
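To fix ideas, the optimization view of learning referred to above can be written down for a single-hidden-layer feedforward network; the notation used here (H hidden units, weight vector w, transfer function \psi) is illustrative rather than taken verbatim from the paper. Given training pairs (x^{(i)}, y_i), i = 1, \ldots, N, with inputs x^{(i)} \in \mathbb{R}^D, the network output and the associated least-squares learning problem may be stated as

\[
f(x; w) \;=\; w_0 + \sum_{j=1}^{H} w_j \, \psi\!\Big( w_{j0} + \sum_{d=1}^{D} w_{jd}\, x_d \Big),
\qquad
\hat{w} \;=\; \arg\min_{w} \; \sum_{i=1}^{N} \big( y_i - f(x^{(i)}; w) \big)^2 .
\]

Parameter estimation then amounts to solving this non-linear optimization problem over w, while the choice of H controls the complexity of the network model and, with it, its generalization performance.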