Fast Semantic Segmentation on Video Using Motion Vector-Based Feature Interpolation

03/21/2018
by Samvit Jain, et al.

Models optimized for accuracy on challenging, dense prediction tasks such as semantic segmentation entail significant inference costs, and are prohibitively slow to run on each frame in a video. Since nearby video frames are spatially similar, however, there is substantial opportunity to reuse computation. Existing work has explored basic feature reuse and feature warping based on optical flow, but has encountered limits to the speedup attainable with these techniques. In this paper, we present a new, two-part approach to accelerating inference on video. First, we propose a fast feature propagation scheme that utilizes the block motion vector maps present in compressed video to cheaply propagate features from frame to frame. Second, we develop a novel feature estimation scheme, termed feature interpolation, that fuses features propagated from enclosing keyframes to render accurate feature estimates, even at sparse keyframe frequencies. We evaluate our system on the Cityscapes dataset, comparing to both a frame-by-frame baseline and related work. We find that we are able to substantially accelerate segmentation on video, achieving almost twice the average inference speed of prior work at any target accuracy level.
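To make the two ideas concrete, here is a minimal sketch (not the authors' implementation) of block motion-vector feature propagation and keyframe feature interpolation. The function names, the per-block vector layout, and the simple linear fusion weight `alpha` are illustrative assumptions; the paper's actual fusion scheme may differ.

```python
import numpy as np

def propagate_features(feat, mv, block=16):
    """Shift each block of a keyframe feature map by its motion vector.
    feat: (H, W, C) feature map computed at a keyframe.
    mv: (H//block, W//block, 2) per-block (dx, dy) vectors, in feature-map
        pixels, as would be decoded from the compressed video stream.
    (Layout and units here are illustrative assumptions.)
    """
    H, W, _ = feat.shape
    out = np.zeros_like(feat)
    for by in range(mv.shape[0]):
        for bx in range(mv.shape[1]):
            dx, dy = mv[by, bx]
            # destination block in the current frame
            ys = slice(by * block, (by + 1) * block)
            xs = slice(bx * block, (bx + 1) * block)
            # source coordinates in the keyframe, clamped to the frame
            sy = np.clip(np.arange(by * block, (by + 1) * block) - dy, 0, H - 1)
            sx = np.clip(np.arange(bx * block, (bx + 1) * block) - dx, 0, W - 1)
            out[ys, xs] = feat[np.ix_(sy, sx)]
    return out

def interpolate_features(from_prev, from_next, alpha):
    """Fuse features propagated from the two enclosing keyframes.
    alpha in [0, 1] is the frame's relative position between keyframes;
    a simple linear blend is assumed here for illustration.
    """
    return (1.0 - alpha) * from_prev + alpha * from_next
```

A segmentation head would then be applied to the interpolated feature map, so the expensive feature network runs only on keyframes while intermediate frames cost just a block-wise shift and a blend.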
