LcP: Layer-compensated Pruning for Resource-constrained Convolutional Neural Networks

(NIPS 2018 talk on on-device ML)

May 2019

tl;dr: Norm-based global filter pruning, but with a layer-compensated estimate of the pruning loss.

Overall impression

Previous methods approximate the increase in loss caused by pruning a filter with the L1 or L2 norm of that filter, but this approximation is not accurate and its error differs from layer to layer. LcP first estimates a layer-wise compensation for this approximation error, and then applies naive pruning (a global greedy pruning algorithm) to the compensated scores to prune the network.
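
A minimal sketch of this idea (not the authors' implementation): each layer gets a scalar offset `beta[layer]` added to its norm-based filter importances before a single global greedy selection; the layer names, offsets, and helper below are hypothetical and the offsets are assumed to be already learned.

```python
import numpy as np

def rank_filters(filter_norms, beta, num_to_prune):
    """Global greedy filter selection with layer-wise compensation.

    filter_norms: dict layer_name -> 1-D array of per-filter L2 norms
    beta:         dict layer_name -> scalar compensation added to that
                  layer's norms (assumed already learned)
    num_to_prune: total number of filters to remove across all layers
    """
    scored = []
    for layer, norms in filter_norms.items():
        for idx, n in enumerate(norms):
            # Compensated importance: raw norm plus the layer's offset.
            scored.append((n + beta[layer], layer, idx))
    # Globally greedy: prune the filters with the smallest compensated score.
    scored.sort(key=lambda t: t[0])
    return [(layer, idx) for _, layer, idx in scored[:num_to_prune]]

# Toy usage with made-up norms and offsets.
norms = {"conv1": np.array([0.2, 0.9, 0.5]), "conv2": np.array([0.1, 0.8])}
beta = {"conv1": 0.0, "conv2": 0.3}  # offsets correcting the per-layer bias
print(rank_filters(norms, beta, num_to_prune=2))
```

Without the `beta` offsets this reduces to plain norm-based global greedy pruning; the compensation is what corrects the cross-layer bias of the norm approximation.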

Key ideas

Technical details

Notes