Following the popularity of Unsupervised Domain Adaptation (UDA) in person re-identification, the recently proposed setting of Online Unsupervised Domain Adaptation (OUDA) attempts to bridge the gap towards practical applications by accounting for streaming data. However, this still falls short of truly representing real-world applications. This paper defines the setting of Real-world Real-time Online Unsupervised Domain Adaptation (R2OUDA) for person re-identification. The R2OUDA setting sets the stage for true real-world real-time OUDA, bringing to light four major limitations found in real-world applications that are often neglected in current research: system-generated person images, subset distribution selection, time-based data stream segmentation, and a segment-based time constraint. To address all aspects of this new R2OUDA setting, this paper further proposes Real-World Real-Time Online Streaming Mutual Mean-Teaching (R2MMT), a novel multi-camera system for real-world person re-identification. Using a popular person re-identification dataset, R2MMT was employed to construct over 100 data subsets and train more than 3,000 models, exploring the breadth of the R2OUDA setting to understand the training-time and accuracy trade-offs and the limitations of real-world applications.
R2MMT, a real-world system able to respect the strict constraints of the proposed R2OUDA setting, achieves accuracies within 0.1% of those of comparable OUDA methods that cannot be applied directly to real-world applications.