Distinguishing epistemic from aleatoric uncertainty is a central idea in out-of-distribution (OOD) detection. By interpreting adversarial and OOD inputs from this perspective, we can group them into a single unclassifiable category. Rejecting such inputs mid-inference reduces resource usage. To achieve this, we apply k-nearest neighbour (KNN) classifiers to the embedding space of branched neural networks. This introduces a novel means of additional power savings through an early-exit reject. Our technique works out-of-the-box on any branched neural network and is competitive on OOD benchmarks, achieving an area under the receiver operating characteristic curve (AUROC) of over 0.9 on most datasets, and scores above 0.95 when identifying perturbed inputs. We introduce a mixed-input test set and show that OOD inputs can be identified up to 50% of the time, and adversarial inputs up to 85% of the time. In a balanced test environment, this equates to power savings of up to 18% in the OOD scenario and 40% in the adversarial scenario. This allows a more stringent in-distribution (ID) classification policy, leading to accuracy improvements of 15% and 20% on the OOD and adversarial tests, respectively, when compared to conventional exit policies operating under the same conditions.
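To make the early-exit reject concrete, the sketch below shows one way a KNN score over branch embeddings could drive a reject decision. It is a minimal illustration, not the authors' implementation: the class name `EarlyExitKNNReject`, the mean-distance score, and the threshold `tau` are assumptions made for the example.

```python
# Minimal sketch of a KNN-based reject rule at one exit branch.
# Assumed names: EarlyExitKNNReject, tau (reject threshold) -- illustrative only.
import numpy as np
from sklearn.neighbors import NearestNeighbors


class EarlyExitKNNReject:
    def __init__(self, k: int = 10, tau: float = 1.0):
        self.k = k              # number of nearest neighbours to consult
        self.tau = tau          # distance threshold above which an input is rejected
        self.knn = NearestNeighbors(n_neighbors=k)

    def fit(self, id_embeddings: np.ndarray) -> None:
        # Index in-distribution (ID) embeddings collected at this exit branch.
        self.knn.fit(id_embeddings)

    def should_reject(self, embedding: np.ndarray) -> bool:
        # Score the incoming embedding by its mean distance to the k nearest
        # ID embeddings; a large distance suggests OOD or adversarial input,
        # so inference is stopped early and the input is rejected.
        dists, _ = self.knn.kneighbors(embedding.reshape(1, -1))
        return float(dists.mean()) > self.tau


# Example usage: fit on ID embeddings from the branch, then screen new inputs.
rejector = EarlyExitKNNReject(k=10, tau=1.0)
rejector.fit(np.random.randn(1000, 64))          # placeholder ID embeddings
print(rejector.should_reject(np.random.randn(64)))
```

In practice the threshold `tau` would be tuned on a validation split to trade off reject rate against ID accuracy, which is what enables the more stringent ID classification policy described above.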