Learning Time/Memory-Efficient Deep Architectures with Budgeted Super Networks

Authors: Ludovic Denoyer, Tom Veniat

DOI:

Keywords:

Abstract: We propose to focus on the problem of discovering neural network architectures that are efficient in terms of both prediction quality and cost. For instance, our approach is able to solve the following tasks: learn a neural network able to predict well in less than 100 milliseconds, or learn an efficient model that fits in 50 Mb of memory. Our contribution is a novel family of models called Budgeted Super Networks (BSN). They are learned using gradient descent techniques applied to a budgeted learning objective function which integrates a maximum authorized cost, while making no assumption on the nature of this cost. We present a set of experiments on computer vision problems and analyze the ability of the technique to deal with three different costs: the computation cost, the memory consumption cost and a distributed computation cost. We particularly show that our model can discover neural network architectures that have a better accuracy than the ResNet and Convolutional Neural Fabrics architectures on CIFAR-10 and CIFAR-100, at a lower cost.
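The abstract only sketches the budgeted objective, so the following is a minimal illustrative sketch of one way a maximum authorized cost could enter a loss: a hinge-style penalty on the portion of the architecture cost above the budget. The names budgeted_loss, arch_cost, budget and lam are illustrative assumptions, not the paper's API, and the exact BSN formulation (a stochastic super network trained with policy-gradient-style updates) is not reproduced here.

    import torch

    def budgeted_loss(task_loss, arch_cost, budget, lam=1.0):
        # Penalize only the part of the architecture cost that exceeds
        # the maximum authorized cost ("budget"); at or below budget,
        # the objective reduces to the plain task loss.
        # This hinge form is an assumption for illustration.
        over_budget = torch.clamp(arch_cost - budget, min=0.0)
        return task_loss + lam * over_budget

    # Toy usage: arch_cost could be any differentiable or estimated
    # cost signal, e.g. computation time, memory, or distributed cost.
    task_loss = torch.tensor(0.42)
    arch_cost = torch.tensor(61.0)  # e.g. model size in Mb
    budget = 50.0                   # "fits in 50 Mb of memory"
    loss = budgeted_loss(task_loss, arch_cost, budget, lam=0.1)
    print(loss)  # tensor(1.5200): 0.42 + 0.1 * (61 - 50)

The weight lam trades prediction quality against how strictly the budget is enforced; in this sketch it is a hand-chosen constant, whereas the abstract makes no assumption about how the cost term is weighted or measured.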

References (4)
Li Wan, Matthew Zeiler, Sixin Zhang, Yann LeCun, Rob Fergus. Regularization of Neural Networks using DropConnect. International Conference on Machine Learning, pp. 1058-1066 (2013).
Geoffrey Hinton, Oriol Vinyals, Jeff Dean. Distilling the Knowledge in a Neural Network. arXiv preprint (2015).
Kaiming He, Xiangyu Zhang, Shaoqing Ren, Jian Sun. Deep Residual Learning for Image Recognition. IEEE Conference on Computer Vision and Pattern Recognition, pp. 770-778 (2016). DOI: 10.1109/CVPR.2016.90.
Andrew G. Howard, Menglong Zhu, Bo Chen, Dmitry Kalenichenko, Weijun Wang, Tobias Weyand, Marco Andreetto, Hartwig Adam. MobileNets: Efficient Convolutional Neural Networks for Mobile Vision Applications. arXiv preprint (2017).