Real-Time Online Unsupervised Domain Adaptation for Real-World Person Re-identification
Following the popularity of Unsupervised Domain Adaptation (UDA) in person re-identification, the recently proposed setting of Online Unsupervised Domain Adaptation (OUDA) attempts to bridge the gap towards practical applications by considering streaming data. However, this still falls short of truly representing real-world applications. This paper defines the setting of Real-world Real-time Online Unsupervised Domain Adaptation (R^2OUDA) for person re-identification. The R^2OUDA setting sets the stage for true real-world real-time OUDA, bringing to light four major limitations found in real-world applications that are often neglected in current research: system-generated person images, subset distribution selection, time-based data stream segmentation, and a segment-based time constraint. To address all aspects of this new R^2OUDA setting, this paper further proposes Real-World Real-Time Online Streaming Mutual Mean-Teaching (R^2MMT), a novel multi-camera system for real-world person re-identification. Using a popular person re-identification dataset, R^2MMT was used to construct over 100 data subsets and train more than 3,000 models, exploring the breadth of the R^2OUDA setting to understand the training-time and accuracy trade-offs and limitations for real-world applications. R^2MMT, a real-world system able to respect the strict constraints of the proposed R^2OUDA setting, achieves accuracies within 0.1% of comparable OUDA methods that cannot be directly applied to real-world applications.
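The abstract names the four constraints only at a high level. As a rough, hedged illustration of two of them, the sketch below shows how an incoming stream of person detections might be cut into fixed-duration segments (time-based data stream segmentation) and how adaptation on each segment could be bounded by a per-segment time budget (segment-based time constraint). All names here (`time_based_segments`, `adapt_on_segment`, the synthetic stream) are hypothetical and are not taken from the paper's actual R^2MMT implementation.

```python
import time


def time_based_segments(stream, segment_seconds):
    """Group (timestamp, item) pairs from a stream into fixed-duration segments."""
    segment, segment_start = [], None
    for timestamp, item in stream:
        if segment_start is None:
            segment_start = timestamp
        if timestamp - segment_start >= segment_seconds:
            yield segment
            segment, segment_start = [], timestamp
        segment.append((timestamp, item))
    if segment:
        yield segment


def adapt_on_segment(segment, time_budget_seconds):
    """Stand-in adaptation step bounded by a per-segment wall-clock budget."""
    deadline = time.monotonic() + time_budget_seconds
    steps = 0
    while time.monotonic() < deadline:
        # One training iteration on the segment's person crops would go here.
        steps += 1
        time.sleep(0.01)  # placeholder for real compute
    return steps


if __name__ == "__main__":
    # Synthetic stream: one "detection" every 0.2 s of stream time, for 10 s.
    synthetic_stream = [(0.2 * i, f"crop_{i}") for i in range(50)]
    for seg in time_based_segments(synthetic_stream, segment_seconds=2.0):
        steps = adapt_on_segment(seg, time_budget_seconds=0.1)
        print(f"segment of {len(seg)} crops -> {steps} update steps within budget")
```

In a real system the per-segment budget would be chosen so that adaptation on segment t finishes before segment t+1 arrives; the sketch only makes that scheduling constraint explicit, not the mutual mean-teaching training itself.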