Robust Visual Tracking by Motion Analysis
In recent years, Video Object Segmentation (VOS) has emerged as a complementary method to Video Object Tracking (VOT). VOS classifies every pixel around the target, allowing precise shape labeling, whereas VOT focuses primarily on estimating the approximate region where the target lies. However, traditional segmentation modules usually classify pixels frame by frame, disregarding information shared between adjacent frames. In this paper, we propose a new algorithm that addresses this limitation by analyzing the target's motion pattern using the inherent tensor structure of the video. This structure, obtained through Tucker2 tensor decomposition, proves effective in describing the target's motion. By incorporating this information, we achieve competitive results against state-of-the-art trackers on four benchmarks: LaSOT<cit.>, AVisT<cit.>, OTB100<cit.>, and GOT-10k<cit.>. Furthermore, the proposed tracker operates in real time, adding to its practical value.
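To make the Tucker2 idea concrete, the sketch below shows an HOSVD-style Tucker2 decomposition of a stack of frames, implemented with plain NumPy. This is an illustration of the general technique only, not the paper's actual pipeline: the two spatial modes are compressed while the temporal mode is kept intact, so the core tensor retains a trajectory over time; the tensor sizes, ranks, and random input are all placeholders.

```python
import numpy as np

def unfold(tensor, mode):
    # Mode-n unfolding: bring the chosen mode to the front, flatten the rest.
    return np.moveaxis(tensor, mode, 0).reshape(tensor.shape[mode], -1)

def tucker2(X, r1, r2):
    """HOSVD-style Tucker2: compress spatial modes 0 and 1, keep the
    temporal mode 2 uncompressed (toy illustration, not the paper's code)."""
    # Factor matrices = leading left singular vectors of each unfolding.
    U, _, _ = np.linalg.svd(unfold(X, 0), full_matrices=False)
    V, _, _ = np.linalg.svd(unfold(X, 1), full_matrices=False)
    U, V = U[:, :r1], V[:, :r2]
    # Core tensor G = X x_0 U^T x_1 V^T; its shape is (r1, r2, T).
    G = np.einsum("ijk,ia,jb->abk", X, U, V)
    return G, U, V

# Toy input: 8 frames of a 32x32 response map (random placeholder data).
X = np.random.rand(32, 32, 8)
G, U, V = tucker2(X, r1=4, r2=4)
# Reconstruction back into frame space: X_hat = G x_0 U x_1 V.
X_hat = np.einsum("abk,ia,jb->ijk", G, U, V)
```

Because only the spatial modes are compressed, each slice `G[:, :, t]` is a low-dimensional summary of frame `t`, and the sequence of slices can serve as a compact descriptor of the target's motion over the clip.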