The use of Neural Network (NN) inference on edge devices necessitates the development of customized Neural Inference Accelerators (NIAs) to meet performance and accuracy requirements. However, edge infrastructure often relies on highly constrained resources with limited power budgets and area footprints. At the same time, reliability is crucial, especially for critical applications, and the protection it requires trades off against area and power. In this paper, we study the soft-error vulnerability of an edge NIA using an emulation-based fault injection framework, which allows for accurate and fine-grained analysis. We consider the tinyTPU architecture, which resembles Google's Tensor Processing Unit (TPU) but is optimized for edge-based applications. Through a proposed error outcome taxonomy for NN-based algorithms, we study the criticality of each NIA component and explore its vulnerability to Single Event Upsets (SEUs), while analyzing performance-accuracy trade-offs such as using smaller NN models and periodic memory refresh. Further, through analysis of the tinyTPU architecture, we considerably reduce the emulation time for components with non-persistent faults.