Discriminative and Robust Online Learning for Siamese Visual Tracking

09/06/2019
by Jinghao Zhou, et al.

The problem of visual object tracking has traditionally been handled by two divergent paradigms: either learning a model of the object's appearance exclusively online, or matching candidate regions against the target in an offline-trained embedding space. Despite recent successes, each paradigm suffers from an intrinsic constraint. Online-only approaches lack generalization in the model they learn and are therefore inferior in target regression, while offline-only approaches (e.g., conventional siamese trackers) lack video-specific context and are thus not discriminative enough to handle distractors. We therefore propose a parallel framework that integrates an offline-trained siamese network with a lightweight online module to enhance discriminative capability. We further apply a simple yet robust template update strategy for siamese networks in order to handle object deformation. Robustness is validated by consistent improvements over three siamese baselines: SiamFC, SiamRPN++, and SiamMask. Beyond that, our model based on SiamRPN++ obtains the best results on six popular tracking benchmarks. Though equipped with an online module that runs as tracking proceeds, our approach inherits the high efficiency of the siamese baseline and operates beyond real time.
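The abstract does not spell out implementation details, but the parallel design it describes can be illustrated with a rough sketch: a small online classifier is refreshed with a few gradient steps during tracking, its response is fused with the offline siamese score map, and the template is updated only when confidence is high. All names, layer sizes, and hyper-parameters below are assumptions for illustration (PyTorch assumed), not the authors' actual implementation.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class OnlineClassifier(nn.Module):
    """Lightweight online module (hypothetical design): a small conv head
    trained during tracking to separate the target from distractors."""
    def __init__(self, feat_channels=256):
        super().__init__()
        self.head = nn.Sequential(
            nn.Conv2d(feat_channels, 64, kernel_size=3, padding=1),
            nn.ReLU(inplace=True),
            nn.Conv2d(64, 1, kernel_size=3, padding=1),
        )

    def forward(self, feat):
        # feat: (B, C, H, W) backbone features -> (B, 1, H, W) response map
        return self.head(feat)

def online_update(classifier, optimizer, feats, labels, steps=5):
    """Run a few gradient steps on recently collected frames, with labels
    (e.g., Gaussians centred on the previous target position)."""
    loss = None
    for _ in range(steps):
        optimizer.zero_grad()
        loss = F.mse_loss(classifier(feats), labels)
        loss.backward()
        optimizer.step()
    return None if loss is None else loss.item()

def fuse_scores(siamese_score, online_score, weight=0.6):
    """Weighted fusion of the offline siamese score map and the online
    discriminative response; `weight` is an assumed hyper-parameter."""
    online_score = F.interpolate(online_score, size=siamese_score.shape[-2:],
                                 mode="bilinear", align_corners=False)
    return weight * siamese_score + (1.0 - weight) * torch.sigmoid(online_score)

def maybe_update_template(template, candidate, confidence,
                          threshold=0.85, lr=0.1):
    """Simple confidence-gated template update: blend in the current target
    crop only when the fused score is high, to follow deformation while
    limiting drift."""
    if confidence > threshold:
        return (1.0 - lr) * template + lr * candidate
    return template
```

In this sketch the online module adds only a few lightweight convolutions and a handful of update steps per interval, which is consistent with the claim that the tracker still runs beyond real time.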
