Safe Feature Elimination for Non-Negativity Constrained Convex Optimization

by James Folberth et al.

Inspired by recent work on safe feature elimination for 1-norm regularized least-squares, we develop strategies to eliminate features from convex optimization problems with non-negativity constraints. Our strategy is safe in the sense that it removes features/coordinates from the problem only when they are guaranteed to be zero at a solution. To perform feature elimination we use an accurate, but not optimal, primal-dual feasible pair, making our methods robust and applicable to ill-conditioned problems. We supplement our feature elimination strategy with a method to construct an accurate dual feasible point from an accurate primal feasible point; this allows us to use a first-order method to find an accurate primal feasible point, then use that point to construct an accurate dual feasible point and perform feature elimination. Under reasonable conditions, our feature elimination strategy will eventually eliminate all zero features from the problem. As an application of our methods, we show how safe feature elimination can be used to robustly certify the uniqueness of solutions to non-negative least-squares (NNLS) problems. We give numerical examples on a well-conditioned synthetic NNLS problem and on a set of 40,000 extremely ill-conditioned NNLS problems arising in a microscopy application.
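To make the abstract's idea concrete, the following is a minimal sketch of a gap-safe elimination test for NNLS, min_{x ≥ 0} (1/2)||Ax − b||², whose Fenchel dual is max_u −(1/2)||u||² − bᵀu subject to Aᵀu ≥ 0. Given a primal feasible x and a dual feasible u with duality gap G, strong concavity of the dual gives ||u − u*|| ≤ sqrt(2G), so a_iᵀu > ||a_i||·sqrt(2G) certifies a_iᵀu* > 0 and hence x_i* = 0. This is a standard gap-safe screening argument, not the paper's exact construction: for simplicity we take u = Ax − b from a high-accuracy solver (which is dual feasible only up to the solver's tolerance), whereas the paper builds an exactly dual feasible point from an inexact primal point. The problem data below are synthetic and illustrative.

```python
import numpy as np
from scipy.optimize import nnls

# Synthetic NNLS instance with a sparse non-negative ground truth.
rng = np.random.default_rng(0)
m, n = 50, 20
A = rng.standard_normal((m, n))
x_true = np.zeros(n)
x_true[:5] = rng.uniform(1.0, 2.0, 5)
b = A @ x_true + 0.01 * rng.standard_normal(m)

# High-accuracy primal point from Lawson-Hanson NNLS.
x, _ = nnls(A, b)

# Dual candidate u = Ax - b; at a solution A^T u >= 0 (KKT).
# NOTE: u is dual feasible only up to solver tolerance here; the paper's
# contribution includes a robust construction of an exactly feasible point.
u = A @ x - b
v = A.T @ u  # gradient of the primal objective

# Duality gap for this pair simplifies to v^T x (complementary slackness
# residual); clamp tiny negative values caused by roundoff.
gap = max(float(v @ x), 0.0)
radius = np.sqrt(2.0 * gap)
col_norms = np.linalg.norm(A, axis=0)

# Gap-safe test: a_i^T u > ||a_i|| * sqrt(2*gap) certifies x_i* = 0.
eliminated = np.where(v > col_norms * radius)[0]
print("eliminated features:", eliminated)
```

On this well-conditioned instance the gap is near machine precision, so the test eliminates most of the truly zero coordinates; as the abstract notes, the harder case is an ill-conditioned problem where only an inexact primal point is available, which is what the paper's dual-point construction addresses.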


