We investigate the utility of side information in machine learning and, in particular, in supervised neural networks. Side information can be viewed as expert knowledge, additional to the input, that may come from a knowledge base. Unlike other approaches, our formalism can be used by a machine learning algorithm not only during training but also during testing. Moreover, the proposed approach is flexible: it caters for different formats of side information, and it does not constrain the side information to be fed into the input layer of the network. We present a formalism based on the difference between the network's loss without and with side information, which deems side information useful when adding it reduces the loss during the test phase. As a proof of concept, we provide experimental results for two datasets: the MNIST dataset of handwritten digits and the House Price prediction dataset. For the experiments we used feedforward neural networks with two hidden layers and a softmax output layer. For both datasets, side information is shown to be useful in that it significantly improves classification accuracy.
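As a rough formalization of the criterion just stated (the notation here is ours, chosen for illustration, not taken verbatim from the paper): let $\mathcal{L}_{\text{test}}(f)$ denote the test loss of a network trained and evaluated without side information, and $\mathcal{L}_{\text{test}}(f_s)$ the test loss when side information $s$ is also supplied. Side information is then deemed useful precisely when the difference is positive:

\[
U(s) \;=\; \mathcal{L}_{\text{test}}(f) \;-\; \mathcal{L}_{\text{test}}(f_s) \;>\; 0 .
\]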
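The sketch below illustrates one way such a network could be wired, reflecting the remark above that side information need not enter at the input layer: a feedforward network with two hidden layers and a softmax output, in which the side-information vector is concatenated at the second hidden layer. The layer sizes, names (`SideInfoNet`, `side_dim`), and the choice of injection point are our illustrative assumptions, not the paper's exact configuration.

```python
import torch
import torch.nn as nn

class SideInfoNet(nn.Module):
    """Feedforward classifier with two hidden layers and a softmax output.
    The side-information vector enters at the second hidden layer rather
    than the input layer (an illustrative choice; the formalism allows
    other injection points and formats)."""

    def __init__(self, in_dim: int, side_dim: int, hidden: int = 128, n_classes: int = 10):
        super().__init__()
        self.fc1 = nn.Linear(in_dim, hidden)
        self.fc2 = nn.Linear(hidden + side_dim, hidden)  # side info concatenated here
        self.out = nn.Linear(hidden, n_classes)

    def forward(self, x: torch.Tensor, side: torch.Tensor) -> torch.Tensor:
        h = torch.relu(self.fc1(x))
        h = torch.relu(self.fc2(torch.cat([h, side], dim=1)))
        return torch.softmax(self.out(h), dim=1)  # softmax output layer
```

For MNIST, for example, one might instantiate `SideInfoNet(in_dim=784, side_dim=k, n_classes=10)` with a $k$-dimensional side-information vector per image; since the side input is a separate argument, the same network can consume side information at test time as well as during training.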