Control theory is widely used in the optimization of dynamical systems. Learning algorithms in neural networks or in statistics have, however, seldom used control-theoretic techniques. One reason for this is that the neural network parameters (synaptic weights) are used quasi-statically during processing after a learning phase, whereas control theory determines an optimal trajectory in time for the parameters. We address this issue in the context of a neural network dynamics that we introduced in previous publications as part of an image recognition system designed to integrate model-based and data-driven approaches in a connectionist framework. An important feature of this approach is that recognition must be achieved explicitly through the short-, rather than the long-time, behavior of the dynamics of the system. Our dynamics arises naturally from requirements on the system, which include: incorporation of prior knowledge, such as inference rules; locality of inferences; and full parallelism. We have shown this system to be effective in image recognition. In this paper, after reviewing the dynamical system, we compare new algorithms for learning the dynamics with Boltzmann-machine-like formulas. We also point out some interesting implications of this approach, namely a processing strategy that uses a dynamics for the weights as well as for the states of the neurons. We conclude by mentioning the difficulties that remain with a control-theoretic strategy.
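The notion of running a dynamics on the weights alongside the usual dynamics on the neuron states can be sketched as a pair of coupled update rules. The following toy example is our own illustration, not the paper's actual equations: it assumes a hypothetical tanh state relaxation for the fast neuron dynamics and a simple Hebbian-like drift for the slow weight dynamics, so that both weights and states trace trajectories in time rather than the weights being held fixed after learning:

```python
import numpy as np

def coupled_dynamics(W0, s0, x, steps=50, dt=0.1, eta=0.01):
    """Toy sketch of coupled weight/state dynamics.

    Hypothetical illustration only. The neuron states s relax quickly
    toward a weighted, squashed input, while the weights W follow a
    slower Hebbian-like drift, so both evolve together in time.
    """
    W, s = W0.copy(), s0.copy()
    for _ in range(steps):
        ds = -s + np.tanh(W @ s + x)   # fast state dynamics
        dW = eta * np.outer(s, s)      # slow weight dynamics (Hebbian drift)
        s += dt * ds
        W += dt * dW
    return W, s
```

The separation of time scales (dt for the states, eta * dt for the weights) is the key design choice: a control-theoretic strategy would instead derive the weight trajectory as the solution of an optimization over the whole time interval, rather than from a fixed local rule.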