Guided backprop: Striving for Simplicity: The All Convolutional Net

September 2019

tl;dr: Guided backprop for efficiently visualizing which pixels drive a CNN's prediction. Also, max pooling can be replaced by conv with larger stride.
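
A minimal PyTorch sketch of the second point, swapping a max-pooling layer for a strided convolution; the layer widths and input size are illustrative, not the paper's exact architectures:

```python
import torch
import torch.nn as nn

# Baseline block: 3x3 conv followed by 2x2 max pooling (downsamples by 2).
with_pooling = nn.Sequential(
    nn.Conv2d(3, 96, kernel_size=3, padding=1),
    nn.ReLU(),
    nn.MaxPool2d(kernel_size=2, stride=2),
)

# All-convolutional block: the pooling layer is replaced by a 3x3 conv with stride 2.
all_convolutional = nn.Sequential(
    nn.Conv2d(3, 96, kernel_size=3, padding=1),
    nn.ReLU(),
    nn.Conv2d(96, 96, kernel_size=3, stride=2, padding=1),
    nn.ReLU(),
)

x = torch.randn(1, 3, 32, 32)
print(with_pooling(x).shape)       # torch.Size([1, 96, 16, 16])
print(all_convolutional(x).shape)  # torch.Size([1, 96, 16, 16])
```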

Overall impression

Guided backprop visualizes the contribution of input pixels to a classification result via backpropagation, but masks out negative gradients at each ReLU. This leads to less noise in the resulting saliency map compared to vanilla backprop.
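
Below is a minimal sketch of guided backprop in PyTorch using backward hooks on ReLU modules; the model choice (VGG-16) and variable names are illustrative assumptions, not from the paper:

```python
import torch
import torch.nn as nn
from torchvision import models

def guided_relu_hook(module, grad_input, grad_output):
    # ReLU's own backward already zeroes gradients where the forward input was <= 0;
    # additionally zeroing negative gradients gives the guided-backprop rule.
    return (torch.clamp(grad_input[0], min=0.0),)

model = models.vgg16().eval()  # weights are random here; load pretrained weights in practice
for module in model.modules():
    if isinstance(module, nn.ReLU):
        module.inplace = False  # in-place ops are incompatible with backward hooks
        module.register_full_backward_hook(guided_relu_hook)

image = torch.randn(1, 3, 224, 224, requires_grad=True)  # stand-in for a preprocessed image
scores = model(image)
scores[0, scores.argmax()].backward()       # backprop the top class score
saliency = image.grad.abs().max(dim=1)[0]   # (1, 224, 224) per-pixel saliency map
```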

The idea is summarized well in this blog post by the author of FlashTorch.

The idea can also be combined with class activation map (CAM) or grad-CAM. But as shown in pytorch-cnn-visualizations, the difference between guided backprop (GB) and grad-CAM is not that big.
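
A sketch of how the two maps could be combined (guided grad-CAM style): the coarse, class-discriminative grad-CAM heatmap gates the high-resolution guided-backprop saliency. Tensor shapes are assumptions for illustration, and both maps are presumed to have been computed beforehand:

```python
import torch
import torch.nn.functional as F

# Assumed inputs: a high-resolution guided-backprop saliency map and a coarse
# grad-CAM heatmap for the same image and class (shapes are illustrative).
gb_saliency = torch.rand(1, 1, 224, 224)
grad_cam = torch.rand(1, 1, 14, 14)

# Upsample the grad-CAM heatmap to image resolution and use it to gate the
# guided-backprop map via an element-wise product.
cam_upsampled = F.interpolate(grad_cam, size=gb_saliency.shape[-2:],
                              mode="bilinear", align_corners=False)
guided_grad_cam = gb_saliency * cam_upsampled
```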

Key ideas

Technical details

Notes