Joint learning of images and videos with a single Vision Transformer

08/21/2023
by Shuki Shimizu, et al.

In this study, we propose a method for jointly learning images and videos with a single model. In general, images and videos are trained by separate models. In this paper, we propose IV-ViT, a Vision Transformer that takes a batch of images as input and also accepts a set of video frames, which are temporally aggregated by late fusion. Experimental results on two image datasets and two action recognition datasets are presented.
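The abstract describes a single shared backbone that handles images directly and handles videos by encoding each frame and then averaging over time (late fusion). The sketch below illustrates only that data flow; the linear `backbone` stands in for the actual Vision Transformer, and all shapes and names are illustrative assumptions, not the paper's implementation.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical stand-in for a shared ViT backbone: a single linear map
# from a flattened frame to class logits. The real model is a Vision
# Transformer; this only demonstrates the shared-weights data flow.
NUM_CLASSES, FEAT_DIM = 10, 3 * 16 * 16
W = rng.standard_normal((FEAT_DIM, NUM_CLASSES)) * 0.01

def backbone(x):
    """Shared per-image encoder: (N, FEAT_DIM) -> (N, NUM_CLASSES)."""
    return x @ W

def forward_image_batch(images):
    """Images pass through the shared backbone directly."""
    return backbone(images)                      # (B, NUM_CLASSES)

def forward_video_batch(videos):
    """Videos: encode every frame with the SAME backbone, then
    temporally aggregate by late fusion (mean over the frame axis)."""
    b, t, d = videos.shape
    per_frame = backbone(videos.reshape(b * t, d)).reshape(b, t, -1)
    return per_frame.mean(axis=1)                # (B, NUM_CLASSES)

images = rng.standard_normal((4, FEAT_DIM))      # batch of 4 images
videos = rng.standard_normal((2, 8, FEAT_DIM))   # 2 clips of 8 frames

print(forward_image_batch(images).shape)  # (4, 10)
print(forward_video_batch(videos).shape)  # (2, 10)
```

Because the same weights serve both branches, image batches and video batches can be mixed during training, which is the joint-learning setup the abstract refers to.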
