Achieving personalized intelligence at the edge with real-time learning capabilities holds enormous promise to enhance our daily experiences and assist in decision-making, planning, and sensing.
Yet today's technology struggles to learn efficiently and reliably at the edge, owing to a lack of personalized data, insufficient hardware capabilities, and the inherent challenges of online learning.
Over time and across multiple developmental phases, the brain has evolved to incorporate new knowledge efficiently by building gradually on what it has already learned.
In this work, we emulate this process with two stages of learning on digital neuromorphic technology that mimics the brain's neural and synaptic processes.
First, a meta-training phase optimizes the hyperparameters of the learning hardware for one-shot learning, using a differentiable simulation of the local three-factor synaptic plasticity rules implemented on the neuromorphic chip. This meta-training refines the synaptic plasticity rule and its related hyperparameters to match the specific dynamics of the hardware and the given task domain. During the subsequent deployment stage, these optimized hyperparameters enable fast, data-efficient, and accurate learning of new classes.
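To make the two-stage procedure concrete, the following minimal JAX sketch illustrates the idea of meta-training plasticity hyperparameters by differentiating through a simulated one-shot update with a three-factor rule. This is our simplified illustration under stated assumptions, not the authors' Loihi code: all names (inner_update, eta, trace_gain) and the toy data are hypothetical.

```python
import jax
import jax.numpy as jnp

def inner_update(w, pre, target, hp):
    # Three-factor rule: dw = eta * (presynaptic trace) x (third factor / error).
    # The "trace" here is a toy stand-in for an exponentially filtered spike
    # train; hp["trace_gain"] plays the role of a meta-learned trace parameter.
    pre_trace = hp["trace_gain"] * pre
    post = jax.nn.sigmoid(w @ pre)          # surrogate postsynaptic activity
    error = target - post                   # third factor: a teaching signal
    return w + hp["eta"] * jnp.outer(error, pre_trace)

def meta_loss(hp, w0, support, query):
    # Inner loop: one-shot adaptation on the support example;
    # outer objective: loss on the query example after adaptation.
    (x_s, y_s), (x_q, y_q) = support, query
    w = inner_update(w0, x_s, y_s, hp)
    pred = jax.nn.sigmoid(w @ x_q)
    return jnp.mean((pred - y_q) ** 2)

# Meta-training: gradient descent on the plasticity hyperparameters, not the weights.
key = jax.random.PRNGKey(0)
w0 = 0.1 * jax.random.normal(key, (2, 8))
hp = {"eta": jnp.array(0.5), "trace_gain": jnp.array(1.0)}
grad_fn = jax.jit(jax.grad(meta_loss))      # d(meta_loss)/d(hyperparameters)

y = jnp.array([1.0, 0.0])                   # toy one-hot target
for step in range(100):
    key, k = jax.random.split(key)
    x = jax.random.bernoulli(k, 0.5, (8,)).astype(jnp.float32)  # toy "spikes"
    g = grad_fn(hp, w0, (x, y), (x, y))
    hp = jax.tree_util.tree_map(lambda p, dp: p - 0.1 * dp, hp, g)
```

In the actual system, the inner update would mirror the chip's plasticity dynamics, and the outer loop would optimize the hardware hyperparameters before deployment, after which learning proceeds on-chip without further gradient computation.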
We demonstrate our approach using event-driven vision sensor data and the Intel Loihi neuromorphic processor with its on-chip plasticity dynamics, achieving state-of-the-art accuracy in one-shot, real-time learning of new categories across three distinct task domains.
Our methodology can be deployed with arbitrary plasticity models and applied to situations that demand rapid learning and adaptation at the edge, such as navigating unfamiliar environments or learning unexpected categories of data through user engagement.