
Aggregated Residual Transformations for Deep Neural Networks (ResNeXt)

Feb 2019

tl;dr: Introduce a new dimension, cardinality (the number of parallel transformation paths), besides the depth (#layers) and width (#channels) of a CNN. Shifting computation from width to cardinality leads to better performance.
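
The aggregated transformation is y = x + sum_{i=1..C} T_i(x) over C paths, which is equivalent to a grouped convolution in the bottleneck. Below is a minimal PyTorch sketch of a ResNeXt-style block, assuming the paper's 32x4d setting (256-d input, 128 bottleneck channels, cardinality 32); the class name `ResNeXtBlock` is hypothetical, not from the paper.

```python
import torch
import torch.nn as nn

class ResNeXtBlock(nn.Module):
    """Bottleneck block where the grouped 3x3 conv carries the cardinality."""
    def __init__(self, in_ch=256, bottleneck_ch=128, cardinality=32):
        super().__init__()
        self.transform = nn.Sequential(
            nn.Conv2d(in_ch, bottleneck_ch, 1, bias=False),
            nn.BatchNorm2d(bottleneck_ch),
            nn.ReLU(inplace=True),
            # Grouped conv splits the channels into `cardinality` paths,
            # each transforming bottleneck_ch // cardinality channels (4 here).
            nn.Conv2d(bottleneck_ch, bottleneck_ch, 3, padding=1,
                      groups=cardinality, bias=False),
            nn.BatchNorm2d(bottleneck_ch),
            nn.ReLU(inplace=True),
            nn.Conv2d(bottleneck_ch, in_ch, 1, bias=False),
            nn.BatchNorm2d(in_ch),
        )
        self.relu = nn.ReLU(inplace=True)

    def forward(self, x):
        # Residual add: merge the aggregated transformation with the identity.
        return self.relu(x + self.transform(x))

x = torch.randn(1, 256, 56, 56)
print(ResNeXtBlock()(x).shape)  # torch.Size([1, 256, 56, 56])
```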

Overall impression

The paper exploits the multi-path (split-transform-merge) strategy, simplifies the design rules, and introduces a new dimension. The better performance than ResNet is strong testimony. This should be compared to other papers like Xception and MobileNet to see how they fare against each other.
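
As a quick point of comparison (an illustration, not from the paper): the depthwise convolution used in Xception and MobileNet can be seen as the extreme case of the grouped convolution above, where the cardinality equals the number of channels.

```python
import torch.nn as nn

ch = 128
# ResNeXt-style grouped conv: 32 paths of 4 channels each.
grouped = nn.Conv2d(ch, ch, 3, padding=1, groups=32, bias=False)
# Depthwise conv (Xception/MobileNet): one path per channel,
# i.e. groups == channels, the extreme of cardinality.
depthwise = nn.Conv2d(ch, ch, 3, padding=1, groups=ch, bias=False)
print(sum(p.numel() for p in grouped.parameters()))    # 4608
print(sum(p.numel() for p in depthwise.parameters()))  # 1152
```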

Key ideas

Technical details

Notes