Deep neural network (DNN) models are being deployed in safety-critical embedded devices for object identification, recognition, and even trajectory prediction. Optimised versions of such models, particularly the convolutional ones, are becoming increasingly common in resource-constrained edge-computing devices (e.g., sensors, drones), which typically feature a reduced memory footprint, a low power budget, and low-performance microprocessors. DNN models are prone to radiation-induced soft errors, and mitigating their occurrence in resource-constrained devices is a mandatory and substantial challenge. While traditional replication-based soft error mitigation techniques are likely to incur a noticeable performance penalty, hardware solutions are even more costly. To address this challenge under such conflicting constraints, this work evaluates the efficiency of a lightweight software-based mitigation technique, called Register Allocation Technique (RAT), when applied to a convolutional neural network (CNN) model running on two commercial Arm microprocessors (i.e., Cortex-M4 and Cortex-M7) under the effects of neutron radiation. Results gathered from two neutron radiation campaigns suggest that RAT reduces the number of critical faults in the CNN model running on both Arm Cortex-M microprocessors. They also suggest that the silent data corruption (SDC) failure-in-time (FIT) rate of the RAT-hardened CNN model can be reduced by up to 83% at a runtime overhead of 32%.