Video Synopsis Generation Using Spatio-Temporal Groups
Millions of surveillance cameras operate 24x7, generating huge amounts of visual data for processing. However, retrieving important activities from such large volumes of data can be time-consuming. Thus, researchers are working on ways to present hours of visual data in a compressed yet meaningful form. Video synopsis is one way to represent activities using relatively short clips. So far, researchers have used two main approaches to address this problem, namely synopsis by tracking moving objects and synopsis by clustering moving objects. Synopsis outputs mainly depend on tracking, segmenting, and shifting moving objects temporally as well as spatially. In many situations, tracking fails and produces multiple trajectories of the same object. As a result, the object may appear and disappear multiple times within the same synopsis, which is misleading, introduces discontinuity, and can confuse the viewer. In this paper, we present a new approach for generating compressed video synopsis by grouping tracklets of moving objects. Grouping helps to generate a synopsis in which chronologically related objects appear together with meaningful spatio-temporal relations. Our proposed method produces continuous and less confusing synopses when tested on publicly available datasets as well as in-house videos.
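To make the tracklet-grouping idea concrete, the sketch below shows one simple way such grouping could be done: tracklets that end and begin close together in both time and space are chained into a spatio-temporal group, so fragments of a broken trajectory are presented together in the synopsis. This is only an illustrative approximation under assumed inputs; the `Tracklet` structure, the `MAX_FRAME_GAP` and `MAX_CENTROID_DIST` thresholds, and the greedy chaining rule are hypothetical and are not taken from the paper.

```python
# A minimal sketch of spatio-temporal tracklet grouping, assuming each tracklet
# is a time-ordered sequence of object centroids. Thresholds and helper names
# are illustrative, not the paper's actual parameters.
from dataclasses import dataclass

MAX_FRAME_GAP = 30        # assumed: max temporal gap (frames) to link tracklets
MAX_CENTROID_DIST = 50.0  # assumed: max spatial gap (pixels) to link tracklets


@dataclass
class Tracklet:
    track_id: int
    frames: list[int]                      # frame indices, ascending
    centroids: list[tuple[float, float]]   # (x, y) centroid per frame


def can_link(a: Tracklet, b: Tracklet) -> bool:
    """True if tracklet b plausibly continues tracklet a in space and time."""
    frame_gap = b.frames[0] - a.frames[-1]
    if not (0 < frame_gap <= MAX_FRAME_GAP):
        return False
    ax, ay = a.centroids[-1]
    bx, by = b.centroids[0]
    return ((ax - bx) ** 2 + (ay - by) ** 2) ** 0.5 <= MAX_CENTROID_DIST


def group_tracklets(tracklets: list[Tracklet]) -> list[list[Tracklet]]:
    """Greedily chain tracklets into spatio-temporal groups.

    Each group gathers tracklets that are close in both time and space, so a
    fragmented trajectory of one object is shown together in the synopsis
    instead of reappearing at unrelated times.
    """
    tracklets = sorted(tracklets, key=lambda t: t.frames[0])
    groups: list[list[Tracklet]] = []
    for t in tracklets:
        for group in groups:
            if can_link(group[-1], t):
                group.append(t)
                break
        else:
            groups.append([t])  # no compatible group found; start a new one
    return groups
```

In a full synopsis pipeline, each resulting group would then be shifted as a unit when rearranging activities in time, which is what keeps related object fragments appearing together in the output.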