Neural representations change, even in the absence of overt learning. To preserve stable behavior and memories, the brain must track these changes. Here, we explore homeostatic mechanisms that could allow neural populations to track drift in continuous representations without external error feedback. We build on existing models of Hebbian homeostasis, which have been shown to stabilize representations against synaptic turnover and allow discrete neuronal assemblies to track representational drift. We show that a downstream readout can use its own activity to detect and correct drift, and that such a self-healing code could be implemented by plausible synaptic rules. Population response normalization and recurrent dynamics could stabilize codes further. Our model reproduces aspects of drift observed in experiments, and posits neurally plausible mechanisms for long-term stable readouts from drifting population codes.
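The core idea, a readout that uses its own output as a learning signal to track slow drift, can be illustrated with a toy simulation. This is a minimal sketch, not the paper's actual model: it assumes a one-dimensional latent variable encoded along a slowly drifting direction in population space, and implements "Hebbian homeostasis" as an Oja-like rule (Hebbian update driven by the readout's own output, followed by weight normalization). All parameter values are hypothetical.

```python
import numpy as np

rng = np.random.default_rng(0)
N = 100             # population size (hypothetical)
T = 20000           # number of time steps
sigma_drift = 0.01  # per-step drift magnitude (hypothetical)
eta = 0.05          # Hebbian learning rate (hypothetical)

# Encoding vector e: the direction along which the latent x is represented.
e = rng.standard_normal(N)
e /= np.linalg.norm(e)

# Readout weights d, initialized aligned with the code (readout starts correct).
d = e.copy()
d_frozen = e.copy()  # control: a readout that never updates

align_plastic, align_frozen = [], []
for t in range(T):
    # Slow representational drift: perturb the encoding direction,
    # with its norm held fixed (a homeostatic constraint on the code).
    e += sigma_drift * rng.standard_normal(N)
    e /= np.linalg.norm(e)

    # Population response to a random latent value x, plus noise.
    x = rng.standard_normal()
    r = e * x + 0.1 * rng.standard_normal(N)

    # Self-supervised tracking: Hebbian update driven by the readout's
    # own output y, then divisive weight normalization (Oja-like).
    # No external error signal is used anywhere in this loop.
    y = d @ r
    d += eta * y * r
    d /= np.linalg.norm(d)

    align_plastic.append(abs(d @ e))
    align_frozen.append(abs(d_frozen @ e))

# After many drift steps, the plastic readout stays aligned with the
# drifting code, while the frozen readout decorrelates from it.
print(f"plastic readout alignment: {np.mean(align_plastic[-1000:]):.2f}")
print(f"frozen  readout alignment: {np.mean(align_frozen[-1000:]):.2f}")
```

Because drift per step is small, the readout's output remains an approximately correct estimate of the latent variable at each step, so Hebbian plasticity constrained by normalization continuously realigns the weights with the moving code; without plasticity, the fixed readout's overlap with the encoding direction decays toward chance.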