Learning A Deep ℓ_∞ Encoder for Hashing

04/06/2016
by Zhangyang Wang, et al.

We investigate the ℓ_∞-constrained representation, which demonstrates robustness to quantization errors, using the tools of deep learning. Based on the Alternating Direction Method of Multipliers (ADMM), we formulate the original convex minimization problem as a feed-forward neural network, named the Deep ℓ_∞ Encoder, by introducing the novel Bounded Linear Unit (BLU) neuron and modeling the Lagrange multipliers as network biases. Such a structural prior acts as an effective network regularizer and facilitates model initialization. We then investigate the effective use of the proposed model for hashing, coupling the proposed encoders under a supervised pairwise loss to develop a Deep Siamese ℓ_∞ Network that can be optimized end to end. Extensive experiments demonstrate the impressive performance of the proposed model. We also provide an in-depth analysis of its behavior compared with competing methods.
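To make the architecture described above concrete, here is a minimal sketch of the two ideas the abstract names: a BLU activation (element-wise saturation onto an ℓ_∞ ball, a hard-saturating nonlinearity) inside a small unfolded encoder whose biases stand in for the Lagrange-multiplier terms, and a contrastive-style pairwise loss for a Siamese pair of codes. The layer sizes, the ℓ_∞ radius `rho`, the margin, and the exact form of the supervised loss are assumptions for illustration, not the paper's exact specification.

```python
import numpy as np

def blu(u, rho=1.0):
    """Bounded Linear Unit (sketch): element-wise projection onto the
    l_inf ball of radius rho, i.e. hard saturation at +/- rho.
    The radius is an assumed hyperparameter here."""
    return np.clip(u, -rho, rho)

def deep_linf_encoder(x, W1, b1, W2, b2, rho=1.0):
    """Two-stage unfolded encoder sketch: each stage is a linear map
    (playing the role of an ADMM update) followed by BLU, with the
    biases standing in for the Lagrange-multiplier terms."""
    z = blu(W1 @ x + b1, rho)
    return blu(W2 @ z + b2, rho)

def contrastive_pairwise_loss(h1, h2, similar, margin=1.0):
    """Contrastive-style pairwise loss for a Siamese pair of codes:
    pull similar pairs together, push dissimilar pairs beyond a margin.
    The paper's supervised pairwise loss may differ in its exact form."""
    d = np.linalg.norm(h1 - h2)
    return 0.5 * d**2 if similar else 0.5 * max(0.0, margin - d)**2

# Usage sketch: encode two inputs with shared (tied) weights, as in a
# Siamese setup, then score the pair.
rng = np.random.default_rng(0)
W1, b1 = rng.standard_normal((32, 64)), np.zeros(32)
W2, b2 = rng.standard_normal((16, 32)), np.zeros(16)
x_a, x_b = rng.standard_normal(64), rng.standard_normal(64)
h_a = deep_linf_encoder(x_a, W1, b1, W2, b2)
h_b = deep_linf_encoder(x_b, W1, b1, W2, b2)
print(contrastive_pairwise_loss(h_a, h_b, similar=True))
```

Because BLU saturates its outputs at a fixed bound, the learned codes are nearly binary at the saturation limits, which is what makes the representation attractive for hashing with small quantization error.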
