DisCoVQA: Temporal Distortion-Content Transformers for Video Quality Assessment

06/20/2022
by   Haoning Wu, et al.

The temporal relationships between frames and their influence on video quality assessment (VQA) remain under-studied in existing works. These relationships lead to two important effects on video quality. First, some temporal variations (such as shaking, flicker, and abrupt scene transitions) cause temporal distortions and lead to extra quality degradation, while other variations (e.g., those related to meaningful events) do not. Second, the human visual system often pays different attention to frames with different contents, so frames contribute differently to the overall video quality. Based on the prominent time-series modeling ability of transformers, we propose a novel and effective transformer-based VQA method to tackle these two issues. To better differentiate temporal variations and thus capture temporal distortions, we design a transformer-based Spatial-Temporal Distortion Extraction (STDE) module. To model temporal quality attention, we propose an encoder-decoder-like Temporal Content Transformer (TCT). We also introduce temporal sampling on features to reduce the input length for the TCT, improving the learning effectiveness and efficiency of this module. Composed of the STDE and the TCT, the proposed Temporal Distortion-Content Transformers for Video Quality Assessment (DisCoVQA) reaches state-of-the-art performance on several VQA benchmarks without any extra pre-training datasets, and achieves up to 10% better generalization ability than existing methods. We also conduct extensive ablation experiments to demonstrate the effectiveness of each part of the proposed model, and provide visualizations showing that the proposed modules achieve our intended modeling of these temporal issues. We will publish our code and pretrained weights later.
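As a rough illustration of the temporal sampling idea mentioned above (a minimal sketch, not the authors' implementation; the function name and interface are hypothetical), uniformly subsampling the per-frame feature sequence before the temporal transformer reduces its input length:

```python
def temporal_sample(features, target_len):
    """Uniformly subsample a sequence of per-frame features.

    features: list of feature vectors, one entry per video frame.
    target_len: desired (shorter) sequence length for the transformer.
    Returns a list of at most target_len feature vectors spanning the video.
    """
    n = len(features)
    if n <= target_len:
        return list(features)
    # Pick evenly spaced frame indices across the whole video, so the
    # shortened sequence still covers the full temporal extent.
    indices = [round(i * (n - 1) / (target_len - 1)) for i in range(target_len)]
    return [features[i] for i in indices]

# Example: 300 frame-level features reduced to 32 for the temporal module.
frames = [[float(t)] for t in range(300)]
sampled = temporal_sample(frames, 32)
```

Because attention cost grows quadratically with sequence length, shortening the sequence this way can make training a temporal transformer both cheaper and easier to optimize.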


