Speeding-up Graphical Model Optimization via a Coarse-to-fine Cascade of Pruning Classifiers

09/15/2014
by   B. Conejo, et al.

We propose a general and versatile framework that significantly speeds up graphical model optimization while maintaining excellent solution accuracy. The approach relies on a multi-scale pruning scheme that progressively reduces the solution space using a novel coarse-to-fine cascade of learnt classifiers. We experiment thoroughly with classic computer vision MRF problems, where our framework consistently yields a significant speed-up (with respect to the most efficient inference methods) and obtains a more accurate solution than directly optimizing the MRF.
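The coarse-to-fine idea can be sketched on a toy chain MRF: solve a coarsened version of the problem, apply a pruning rule that discards unpromising labels per node, and re-solve the fine problem restricted to the surviving labels. The sketch below is a minimal illustration under stated assumptions; the simple cost-based pruning rule is a cheap stand-in for the paper's learnt classifiers, and all function names are hypothetical, not the authors' implementation.

```python
import numpy as np

def solve_chain(unary, allowed, lam=1.0):
    """Exact MAP on a chain MRF with Potts pairwise terms, by dynamic
    programming, restricted to each node's allowed label set."""
    n, L = unary.shape
    INF = 1e18
    cost = np.where(allowed, unary, INF)  # disallowed labels get infinite cost
    back = np.zeros((n, L), dtype=int)
    for i in range(1, n):
        for l in range(L):
            if not allowed[i, l]:
                continue
            # transition from previous node: free if the label is kept,
            # pay lam if it switches (Potts model)
            trans = cost[i - 1] + lam * (np.arange(L) != l)
            back[i, l] = int(np.argmin(trans))
            cost[i, l] += trans[back[i, l]]
    labels = np.zeros(n, dtype=int)
    labels[-1] = int(np.argmin(cost[-1]))
    for i in range(n - 1, 0, -1):
        labels[i - 1] = back[i, labels[i]]
    return labels

def coarse_to_fine(unary, lam=1.0, keep=2):
    """Two-scale coarse-to-fine solve with label pruning (illustrative)."""
    n, L = unary.shape
    assert n % 2 == 0, "sketch assumes an even number of nodes"
    # coarse scale: merge node pairs by averaging their unary costs
    coarse = 0.5 * (unary[0::2] + unary[1::2])
    coarse_labels = solve_chain(coarse, np.ones_like(coarse, dtype=bool), lam)
    # stand-in "pruning classifier": keep the `keep` cheapest labels per
    # coarse node, and never prune the coarse optimum
    allowed_c = np.zeros((n // 2, L), dtype=bool)
    order = np.argsort(coarse, axis=1)
    for i in range(n // 2):
        allowed_c[i, order[i, :keep]] = True
        allowed_c[i, coarse_labels[i]] = True
    # upsample the pruning mask to the fine scale and re-solve
    allowed = np.repeat(allowed_c, 2, axis=0)
    return solve_chain(unary, allowed, lam)
```

The fine-scale solve only explores the labels that survived pruning, which is where the speed-up comes from in larger label spaces; in the paper, learnt classifiers at each scale decide which labels to keep rather than a fixed cost threshold.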

