Many real-world networks are directed, sparse and hierarchical, with a mixture of feed-forward and feedback connections with respect to the hierarchy. Moreover, a small number of 'master' nodes are often able to drive the whole system. We study the dynamics of pattern presentation and recovery on sparse, directed, Hopfield-like neural networks using Trophic Analysis to characterise their hierarchical structure. This is a recent method which quantifies the local position of each node in a hierarchy (trophic level) as well as the global directionality of the network (trophic coherence). We show that even in a recurrent network, the state of the system can be controlled by a small subset of neurons, which can be identified by their low trophic levels. We also find that performance at the pattern recovery task can be significantly improved by tuning the trophic coherence and other topological properties of the network. This may explain the relatively sparse and coherent structures observed in the animal brain, and provide insights for improving the architectures of artificial neural networks. Moreover, we expect that the principles we demonstrate here will be relevant for a broad class of systems whose underlying network structure is directed and sparse, such as biological, social or financial networks.
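As a rough illustration of the quantities named above (a minimal sketch, not code from this work), trophic levels for a directed graph can be obtained by solving a linear system built from the in- and out-degrees, and trophic incoherence can then be measured as the mean squared deviation of edge level-gaps from one; this follows one common formulation of improved trophic levels, in which the level vector h solves (diag(u) - A - A^T) h = v with u the total degree and v the in-degree minus the out-degree:

```python
import numpy as np

def trophic_levels(A):
    # A[i, j] = 1 for a directed edge i -> j
    d_in = A.sum(axis=0)
    d_out = A.sum(axis=1)
    u = d_in + d_out          # total degree of each node
    v = d_in - d_out          # degree imbalance drives the levels
    L = np.diag(u) - A - A.T
    # L is singular (levels are defined up to a constant shift),
    # so solve by least squares and shift the minimum level to 0
    h, *_ = np.linalg.lstsq(L, v.astype(float), rcond=None)
    return h - h.min()

def trophic_incoherence(A):
    # mean squared deviation of edge level-gaps from the ideal gap of 1;
    # 0 means a perfectly coherent (purely feed-forward) hierarchy
    h = trophic_levels(A)
    i, j = np.nonzero(A)
    return float(np.mean((h[j] - h[i] - 1.0) ** 2))

# example: a 3-node feed-forward chain 0 -> 1 -> 2 is perfectly coherent
A = np.zeros((3, 3))
A[0, 1] = A[1, 2] = 1
print(trophic_levels(A))       # levels 0, 1, 2 along the chain
print(trophic_incoherence(A))  # 0.0
```

In this picture, the low-trophic-level nodes are the 'sources' of the hierarchy, which is what makes them natural candidates for the driver neurons discussed above.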