Matrix Backpropagation for Deep Networks with Structured Layers
C. Ionescu, O. Vantzos and C. Sminchisescu (ICCV 2015)
Description
Deep neural network architectures have recently produced excellent results in a variety of areas in artificial intelligence and visual recognition, well surpassing traditional shallow architectures trained using hand-designed features. The power of deep networks stems both from their ability to perform local computations followed by pointwise non-linearities over increasingly larger receptive fields, and from the simplicity and scalability of the gradient-descent training procedure based on backpropagation. An open problem is the inclusion of layers that perform global, structured matrix computations like segmentation (e.g. normalized cuts) or higher-order pooling (e.g. log-tangent space metrics defined over the manifold of symmetric positive definite matrices) while preserving the validity and efficiency of an end-to-end deep training framework. In this paper we propose a sound mathematical apparatus to formally integrate global structured computation into deep computation architectures. At the heart of our methodology is the development of the theory and practice of backpropagation that generalizes to the calculus of adjoint matrix variations. We perform segmentation experiments using the BSDS and MSCOCO benchmarks and demonstrate that deep networks relying on second-order pooling and normalized cuts layers, trained end-to-end using matrix backpropagation, outperform counterparts that do not take advantage of such global layers.
The ICCV 2015 article can be downloaded from here. A revised and extended arXiv version with additional material is available from here (latest version: v2, December 2014).
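The flavor of the matrix backpropagation machinery can be illustrated on the second-order pooling case: a layer that forms a feature covariance and applies a matrix logarithm, with gradients propagated through the eigendecomposition. The sketch below is in NumPy rather than the authors' Matlab; the function names, the `eps` regularizer, and the (samples x channels) feature convention are illustrative assumptions. The backward pass uses the standard Daleckii-Krein formula for spectral functions, which is the kind of identity the paper's adjoint matrix-variation calculus systematizes.

```python
import numpy as np

def spd_log_pool_forward(X, eps=1e-5):
    """Second-order pooling: Y = log(C), C = X^T X / n + eps*I (illustrative sketch).

    X: (n, d) feature matrix; eps*I keeps C symmetric positive definite.
    Returns Y and a cache for the backward pass.
    """
    n, d = X.shape
    C = X.T @ X / n + eps * np.eye(d)
    s, U = np.linalg.eigh(C)          # C = U diag(s) U^T
    Y = (U * np.log(s)) @ U.T         # apply log to the eigenvalues
    return Y, (X, s, U)

def spd_log_pool_backward(dY, cache):
    """Backprop dL/dY -> dL/dX through the matrix log (Daleckii-Krein formula)."""
    X, s, U = cache
    n, _ = X.shape
    G = (dY + dY.T) / 2               # symmetrize the incoming gradient
    ls = np.log(s)
    diff = s[:, None] - s[None, :]
    num = ls[:, None] - ls[None, :]
    # K_ij = (log s_i - log s_j)/(s_i - s_j) off-diagonal, K_ii = 1/s_i
    safe = np.where(diff == 0, 1.0, diff)
    K = np.where(np.abs(diff) > 1e-12, num / safe, 1.0 / s[:, None])
    dC = U @ (K * (U.T @ G @ U)) @ U.T
    # C = X^T X / n and dC symmetric  =>  dL/dX = 2 X dC / n
    return 2.0 * X @ dC / n
```

A finite-difference check on a small random input confirms that the analytic gradient matches the numerical one, which is the same sanity test one would run before plugging such a layer into an end-to-end network.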
Code and Results
Matlab code is available to reproduce the results in the article. Feel free to contact the authors with questions, suggestions and/or bug reports.
Comparative results for different methods presented in the paper are shown on a select set of images below.
[Comparison grid, columns left to right: input Image, NCuts-AlexNet, DeepNCuts-AlexNet, NCuts-VGG(4), DeepNCuts-VGG(4), NCuts-VGG(5), DeepNCuts-VGG(5), ground truth (GT)]
Acknowledgements
This work was partly supported by CNCS-UEFISCDI under CT-ERC-2012-1, PCE-2011-3-0438, JRP-RO-FR-2014-16. We thank J. Carreira for helpful discussions and NVIDIA for a graphics board donation.