We introduce and study knowledge drift (KD), a special form of concept drift that occurs in hierarchical classification. Under KD, the vocabulary of concepts, their individual distributions, and the is-a relations between them can all change over time. The main challenge is that, since the ground-truth concept hierarchy is unobserved, it is hard to tell the different forms of KD apart. For instance, the introduction of a new is-a relation between two concepts might be confused with a change to those individual concepts, but the two are far from equivalent. Failing to identify the right kind of KD compromises the concept hierarchy used by the classifier, leading to systematic prediction errors. Our key observation is that in human-in-the-loop applications, such as smart personal assistants, the user knows what kind of drift occurred recently, if any. Motivated by this observation, we introduce trckd, a novel approach that combines two automated stages, drift detection and adaptation, with a new interactive disambiguation stage in which the user is asked to refine the machine’s understanding of recently detected KD. In addition, trckd implements a simple but effective knowledge-aware adaptation strategy. Our simulations show that, when the structure of the concept hierarchy drifts, a handful of queries to the user are often enough to substantially improve prediction performance on both synthetic and realistic data.
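The abstract only names the three stages of trckd; to make the detect-disambiguate-adapt loop concrete, here is a minimal Python sketch under stated assumptions. Everything in it is hypothetical: the `DriftKind` taxonomy, the frequency-based detector, the hierarchy encoding, and all function names are illustrative stand-ins, not trckd's actual implementation.

```python
"""Illustrative sketch only: the abstract does not specify trckd's
internals. All names and the toy detection test are assumptions."""
from collections import Counter
from enum import Enum, auto

class DriftKind(Enum):
    DISTRIBUTION = auto()  # a concept's data distribution changed
    NEW_IS_A = auto()      # a new is-a relation appeared
    NEW_CONCEPT = auto()   # the concept vocabulary grew

def detect_drift(ref_labels, new_labels, threshold=0.15):
    """Automated stage 1 (hypothetical test): flag concepts whose
    relative frequency moved by more than `threshold` between a
    reference window and the most recent window."""
    ref, new = Counter(ref_labels), Counter(new_labels)
    n_ref, n_new = max(len(ref_labels), 1), max(len(new_labels), 1)
    return [c for c in set(ref) | set(new)
            if abs(ref[c] / n_ref - new[c] / n_new) > threshold]

def disambiguate(concept, ask_user):
    """Interactive stage: the user says which kind of KD affected
    `concept`; `ask_user` stands in for a real query interface."""
    return ask_user(f"What changed for '{concept}'?", list(DriftKind))

def adapt(hierarchy, concept, kind, parent=None):
    """Knowledge-aware adaptation: repair the hierarchy according to
    the disambiguated drift kind; pure distribution drift keeps the
    structure intact and only the concept's model would be refit."""
    concepts, is_a = hierarchy
    if kind is DriftKind.NEW_CONCEPT:
        concepts.add(concept)
    elif kind is DriftKind.NEW_IS_A and parent is not None:
        is_a.add((concept, parent))
    return hierarchy

# Toy run: 'puppy' starts appearing, so detection flags a drift that
# only the user can label as a vocabulary change rather than a shift
# in the existing concepts.
hierarchy = ({"animal", "dog"}, {("dog", "animal")})
ref = ["dog"] * 8 + ["animal"] * 2
new = ["dog"] * 3 + ["animal"] * 2 + ["puppy"] * 5
for c in detect_drift(ref, new):
    kind = disambiguate(c, ask_user=lambda q, opts: DriftKind.NEW_CONCEPT)
    hierarchy = adapt(hierarchy, c, kind)
print(hierarchy)
```

The point of the sketch is the division of labor the abstract describes: detection and adaptation are automated, while the user is queried only to disambiguate what kind of drift occurred, since different kinds call for different repairs to the hierarchy.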