Autoencoder based image compression: can the learning be quantization independent?

02/23/2018
by Thierry Dumas, et al.

This paper explores the problem of learning transforms for image compression via autoencoders. Usually, the rate-distortion performance of an image codec is tuned by varying the quantization step size. In the case of autoencoders, this would in principle require learning one transform per rate-distortion point, each at a given quantization step size. Here, we show that comparable performance can be obtained with a single learned transform. The different rate-distortion points are then reached by varying the quantization step size at test time. This approach saves a considerable amount of training time.
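To make the idea concrete, here is a minimal sketch, assuming a toy PyTorch autoencoder (the architecture, names, and hyperparameters below are illustrative, not the network from the paper). Training uses additive uniform noise as a standard differentiable proxy for uniform scalar quantization; at test time the same trained model is evaluated with true quantization while the step size delta is swept to reach different rate-distortion points.

```python
import torch
import torch.nn as nn

class Autoencoder(nn.Module):
    """Toy transform: a small convolutional encoder/decoder pair."""
    def __init__(self):
        super().__init__()
        self.encoder = nn.Sequential(
            nn.Conv2d(1, 32, 5, stride=2, padding=2), nn.ReLU(),
            nn.Conv2d(32, 8, 5, stride=2, padding=2),
        )
        self.decoder = nn.Sequential(
            nn.ConvTranspose2d(8, 32, 5, stride=2, padding=2,
                               output_padding=1), nn.ReLU(),
            nn.ConvTranspose2d(32, 1, 5, stride=2, padding=2,
                               output_padding=1),
        )

    def forward(self, x, delta):
        y = self.encoder(x)
        if self.training:
            # Additive uniform noise on [-delta/2, delta/2]: a common
            # differentiable stand-in for uniform scalar quantization.
            y_hat = y + (torch.rand_like(y) - 0.5) * delta
        else:
            # True uniform quantization with step size delta at test time.
            y_hat = delta * torch.round(y / delta)
        return self.decoder(y_hat), y_hat

# Train a single transform at one fixed step size (here delta = 1.0);
# a real run would loop over many batches.
model = Autoencoder()
opt = torch.optim.Adam(model.parameters(), lr=1e-4)
x = torch.rand(8, 1, 32, 32)  # stand-in for a batch of training images
recon, _ = model(x, delta=1.0)
loss = nn.functional.mse_loss(recon, x)
opt.zero_grad()
loss.backward()
opt.step()

# ... then sweep delta at test time: each value gives one rate-distortion
# point, with no retraining of the transform.
model.eval()
with torch.no_grad():
    for delta in (0.25, 0.5, 1.0, 2.0, 4.0):
        recon, y_hat = model(x, delta)
        mse = nn.functional.mse_loss(recon, x).item()
        levels = y_hat.unique().numel()  # crude proxy for rate
        print(f"delta={delta:4.2f}  quantized levels={levels:5d}  MSE={mse:.4f}")
```

Larger step sizes yield coarser latents (fewer distinct quantized values, a crude proxy for rate) at higher distortion, so the single trained transform traces out the whole rate-distortion curve.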

Related research

09/11/2019  Variable Rate Deep Image Compression With a Conditional Autoencoder
In this paper, we propose a novel variable-rate learned image compressio...

09/02/2020  Transform Quantization for CNN Compression
In this paper, we compress convolutional neural network (CNN) weights po...

05/01/2019  Learned Image Compression with Soft Bit-based Rate-Distortion Optimization
This paper introduces the notion of soft bits to address the rate-distor...

02/21/2019  Learned Step Size Quantization
We present here Learned Step Size Quantization, a method for training de...

05/27/2019  Differentiable Quantization of Deep Neural Networks
We propose differentiable quantization (DQ) for efficient deep neural ne...

09/12/2013  Progressive Compression of 3D Objects with an Adaptive Quantization
This paper presents a new progressive compression method for triangular ...

07/23/2020  Improving distribution and flexible quantization for DCT coefficients
While it is a common knowledge that AC coefficients of Fourier-related t...
