Designing a customized Keyword Spotting (KWS) Deep Neural Network (DNN) for tiny sensors is a time-consuming process, as it requires training a new model on a remote server with a dataset of collected keywords. This paper investigates the effectiveness of a DNN-based KWS classifier that can be initialized on-device simply by recording a few examples of the target commands. At runtime, the classifier computes the distance between the DNN output and the prototypes of the recorded keywords. By experimenting with multiple TinyML models on the Google Speech Commands dataset, we report an accuracy of up to 80% using only ten recorded examples of utterances not seen during training. When deployed on a multi-core microcontroller with a power envelope of 25 mW, the most accurate ResNet15 model takes 9.7 ms to process a 1 s speech frame, demonstrating the feasibility of on-device KWS customization for tiny devices without requiring any backpropagation-based transfer learning.

A voice command sensor placed on everyday objects can recognize a set of target keywords to enable speech-controlled functionalities. The keyword classification algorithm, commonly denoted as keyword spotting (KWS) in the literature [1], runs locally on-device to process the audio data recorded by the microphone. These smart sensors are typically battery-powered, and Microcontroller Units (MCUs) are used as data processing engines to meet the stringent energy requirements. MCUs feature a power consumption of at most a few tens of mW but, on the other hand, offer limited computation power and on-chip memory capacity, making the porting of robust speech processing pipelines on-device highly challenging.

Recently, Deep Neural Networks (DNNs) for keyword spotting have been efficiently implemented on low-power MCUs [2]. These DNN solutions feature only up to a few hundred thousand parameters to fit the memory constraints of tiny MCU devices, typically below a few MB. Small-sized solutions,
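
As a concrete illustration of the enrolment-and-inference flow described in the abstract above, the following is a minimal sketch, not the paper's implementation: it assumes a frozen feature extractor whose output embedding is compared against per-keyword prototypes, with the extractor replaced here by a random projection so the snippet is self-contained. All names (`embed`, `build_prototypes`, `FRAME_LEN`, the enrolled keywords) are illustrative assumptions.

```python
import numpy as np

RNG = np.random.default_rng(0)
EMBED_DIM = 64      # dimensionality of the DNN feature vector (assumption)
FRAME_LEN = 16000   # 1 s of audio at 16 kHz, as in Google Speech Commands

# Stand-in for the frozen DNN feature extractor (e.g., a ResNet15 backbone
# on the real device); a fixed random projection keeps the sketch runnable.
PROJ = RNG.standard_normal((FRAME_LEN, EMBED_DIM)) / np.sqrt(FRAME_LEN)

def embed(frame: np.ndarray) -> np.ndarray:
    """Map a 1 s audio frame to an L2-normalized embedding."""
    z = frame @ PROJ
    return z / np.linalg.norm(z)

def build_prototypes(examples: dict[str, list[np.ndarray]]) -> dict[str, np.ndarray]:
    """Average the embeddings of the few recorded utterances per keyword."""
    return {kw: np.mean([embed(f) for f in frames], axis=0)
            for kw, frames in examples.items()}

def classify(frame: np.ndarray, prototypes: dict[str, np.ndarray]) -> str:
    """Return the keyword whose prototype is closest to the input embedding."""
    z = embed(frame)
    return min(prototypes, key=lambda kw: np.linalg.norm(z - prototypes[kw]))

# Enrolment: e.g., ten recorded examples for each target command (dummy audio).
enrol = {kw: [RNG.standard_normal(FRAME_LEN) for _ in range(10)]
         for kw in ("on", "off", "stop")}
protos = build_prototypes(enrol)
print(classify(RNG.standard_normal(FRAME_LEN), protos))
```

Because the prototypes are plain averages of embeddings, enrolment needs only forward passes through the fixed network, which is what allows the customization to run on-device without any backpropagation-based transfer learning.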