A Deep Learning based No-reference Quality Assessment Model for UGC Videos

by Wei Sun et al.

Quality assessment for User Generated Content (UGC) videos plays an important role in ensuring the viewing experience of end users. Previous UGC video quality assessment (VQA) studies use either image recognition models or image quality assessment (IQA) models to extract frame-level features of UGC videos for quality regression; these are sub-optimal solutions because of the domain shift between those tasks and the UGC VQA task. In this paper, we propose a simple yet effective UGC VQA model that addresses this problem by training an end-to-end spatial feature extraction network to learn a quality-aware spatial feature representation directly from the raw pixels of the video frames. We also extract motion features to measure the temporal distortions that the spatial features cannot model. The proposed model uses very sparse frames to extract spatial features and dense frames (i.e., the video chunk) at a very low spatial resolution to extract motion features, and therefore has low computational complexity. Given these quality-aware features, we use only a simple multilayer perceptron (MLP) network to regress them into chunk-level quality scores, and then adopt a temporal average pooling strategy to obtain the video-level quality score. We further introduce a multi-scale quality fusion strategy to handle VQA across different spatial resolutions, where the multi-scale weights are derived from the contrast sensitivity function of the human visual system. Experimental results show that the proposed model achieves the best performance on five popular UGC VQA databases, demonstrating its effectiveness. The code will be publicly available.
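The two pooling steps described in the abstract (temporal average pooling of chunk-level scores, and multi-scale fusion with weights derived from the contrast sensitivity function) can be sketched as follows. This is a minimal illustration, not the authors' code: the function names and the example weight values are hypothetical placeholders, and the actual CSF-derived weights are defined in the paper.

```python
def video_score_from_chunks(chunk_scores):
    """Temporal average pooling: the video-level quality score is the
    mean of the chunk-level scores predicted by the MLP regressor."""
    return sum(chunk_scores) / len(chunk_scores)


def multi_scale_fusion(scale_scores, scale_weights):
    """Fuse per-resolution quality scores with fixed weights.

    The paper derives the weights from the contrast sensitivity
    function of the human visual system; here they are arbitrary
    placeholders. Weights are normalized to sum to one.
    """
    total = sum(scale_weights)
    return sum((w / total) * s for w, s in zip(scale_weights, scale_scores))
```

For example, scoring a video from three chunk predictions and then fusing two resolutions with equal (placeholder) weights:

```python
score = video_score_from_chunks([3.0, 4.0, 5.0])      # 4.0
fused = multi_scale_fusion([score, 3.0], [1.0, 1.0])  # 3.5
```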


Deep Learning based Full-reference and No-reference Quality Assessment Models for Compressed UGC Videos


Deep Neural Network for Blind Visual Quality Assessment of 4K Content


Capturing Co-existing Distortions in User-Generated Content for No-reference Video Quality Assessment


Blindly Assess Quality of In-the-Wild Videos via Quality-aware Pre-training and Motion Perception


HVS Revisited: A Comprehensive Video Quality Assessment Framework


Zoom-VQA: Patches, Frames and Clips Integration for Video Quality Assessment


Ada-DQA: Adaptive Diverse Quality-aware Feature Acquisition for Video Quality Assessment

