Neural Networks (NNs) can solve very hard classification and estimation tasks but are less well suited to complex sensor fusion challenges, such as end-to-end control of autonomous vehicles. Nevertheless, NNs can still be a powerful tool for particular sub-problems in sensor fusion. This would require a reliable and quantifiable measure of the stochastic uncertainty in the predictions that can be compared to classical sensor measurements. However, current NNs output only a figure of merit that reflects a relative model fit rather than a stochastic uncertainty. We propose to embed the NN in a proper stochastic system identification framework. In the training phase, the stochastic uncertainty of the parameters in the (last layers of the) NN is quantified. We show that this can be done recursively with very little extra computation. In the classification phase, Monte Carlo (MC) samples of these parameters are used to generate a set of classifier outputs. From this set, a distribution of the classifier output is obtained, which provides a proper description of the stochastic uncertainty of the predictions. We also show how to use the computed uncertainty for outlier detection by including an artificial outlier class. In this way, the NN fits much better into a sensor fusion framework. We evaluate the approach on images of handwritten digits. The proposed method is shown to be on par with MC dropout, while having lower computational complexity, and the outlier detection almost completely eliminates false classifications.
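The following is a minimal sketch, not the authors' implementation, of the MC prediction step described above. It assumes a Gaussian posterior over the flattened last-layer weights, with mean `w_mean` and covariance `w_cov` obtained from some recursive estimation scheme during training; the function and variable names are illustrative only. Each weight sample is propagated through the classifier head, and the resulting set of outputs is summarized into a predictive distribution. The `is_outlier` check is a simplified stand-in for the artificial-outlier-class mechanism mentioned in the abstract.

```python
# Minimal sketch (assumed names, not the paper's code): Monte Carlo
# prediction with an uncertain last layer.
import numpy as np


def softmax(z):
    # Numerically stable softmax along the last axis.
    z = z - z.max(axis=-1, keepdims=True)
    e = np.exp(z)
    return e / e.sum(axis=-1, keepdims=True)


def mc_predict(features, w_mean, w_cov, n_classes, n_samples=100, rng=None):
    """Per-class predictive mean and std for one feature vector.

    features : (d,) output of the fixed part of the network
    w_mean   : (d * n_classes,) posterior mean of the last-layer weights
    w_cov    : (d * n_classes, d * n_classes) posterior covariance
    """
    rng = np.random.default_rng() if rng is None else rng
    d = features.shape[0]
    # Draw MC samples of the last-layer weight matrix.
    w_samples = rng.multivariate_normal(w_mean, w_cov, size=n_samples)
    w_samples = w_samples.reshape(n_samples, d, n_classes)
    # One classifier output per weight sample.
    logits = features @ w_samples            # (n_samples, n_classes)
    probs = softmax(logits)
    return probs.mean(axis=0), probs.std(axis=0)


def is_outlier(p_mean, threshold=0.5):
    # Reject inputs whose most likely class receives too little
    # predictive probability (threshold is an assumed tuning parameter).
    return p_mean.max() < threshold
```

A bias term, a structured (e.g., block-diagonal) covariance, or a dedicated outlier class in the softmax would change the details, but the pattern of sampling parameters, propagating each sample through the classifier, and summarizing the output distribution is the mechanism outlined in the abstract.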