Vector Quantization by Minimizing Kullback-Leibler Divergence

01/30/2015
by Lan Yang, et al.

This paper proposes a new method for vector quantization that minimizes the Kullback-Leibler divergence between the class-label distribution over the quantization inputs (the original vectors) and the class-label distribution over the quantization outputs (the subsets of the vector set). In this way, the quantization output retains as much class-label information as possible. An objective function is constructed, and an iterative algorithm is developed to minimize it. The new method is evaluated on a bag-of-features based image classification problem.
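The abstract does not spell out the objective in closed form, but the idea can be illustrated with a small, hypothetical sketch: assume each input vector carries a class-label distribution, and vectors are assigned to quantization subsets so that the KL divergence between a vector's label distribution and its subset's average label distribution is small, alternating assignment and update steps. All names (kl_vector_quantization, label_dists) are illustrative and not taken from the paper.

```python
# Minimal sketch (not the authors' code): hard vector quantization where each
# input carries a class-label distribution and is assigned to the subset whose
# label distribution is closest in KL divergence.
import numpy as np

def kl(p, q, eps=1e-12):
    """KL divergence D(p || q) between two discrete distributions."""
    p = np.clip(p, eps, None)
    q = np.clip(q, eps, None)
    return np.sum(p * np.log(p / q))

def kl_vector_quantization(label_dists, num_subsets, num_iters=50, seed=0):
    """label_dists: (n, c) array; row i is the class-label distribution of vector i.
    Returns hard assignments of the n vectors to num_subsets quantization subsets."""
    rng = np.random.default_rng(seed)
    n, c = label_dists.shape
    assign = rng.integers(num_subsets, size=n)  # random initial partition
    for _ in range(num_iters):
        # Update step: each subset's label distribution is the mean of its members.
        centers = np.vstack([
            label_dists[assign == k].mean(axis=0) if np.any(assign == k)
            else np.full(c, 1.0 / c)
            for k in range(num_subsets)
        ])
        # Assignment step: move each vector to the subset with the smallest KL divergence.
        new_assign = np.array([
            np.argmin([kl(label_dists[i], centers[k]) for k in range(num_subsets)])
            for i in range(n)
        ])
        if np.array_equal(new_assign, assign):
            break
        assign = new_assign
    return assign, centers
```

Under these assumptions the update step keeps each subset's label distribution consistent with its members, while the assignment step greedily reduces the per-vector KL divergence; the actual iterative algorithm in the paper may differ in its exact objective and update rules.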

Related research

02/10/2022  Quantization in Layer's Input is Matter
03/22/2023  Posthoc Interpretation via Quantization
03/01/2018  Vector Quantization as Sparse Least Square Optimization
10/26/2009  Parallelization of the LBG Vector Quantization Algorithm for Shared Memory Systems
01/04/2007  On the use of self-organizing maps to accelerate vector quantization
05/14/2019  Comparison-limited Vector Quantization
01/12/2020  Aggregated Learning: A Vector-Quantization Approach to Learning Neural Network Classifiers
