Self-paced Data Augmentation for Training Neural Networks

Authors: Ryo Karakida, Hideki Asoh, Tomoumi Takase

DOI:

Keywords:

Abstract: … for data augmentation when training a neural network. The … works relative to curriculum learning and desirable changes … Here, we used a convolutional neural network (CNN) with a […

References (41)
Léon Bottou, Large-Scale Machine Learning with Stochastic Gradient Descent, Proceedings of COMPSTAT'2010, pp. 177–186 (2010). DOI: 10.1007/978-3-7908-2604-3_16
Sergey Ioffe, Christian Szegedy, Batch Normalization: Accelerating Deep Network Training by Reducing Internal Covariate Shift, International Conference on Machine Learning, vol. 1, pp. 448–456 (2015)
Kai A. Krueger, Peter Dayan, Flexible shaping: how learning in small steps helps, Cognition, vol. 110, pp. 380–394 (2009). DOI: 10.1016/j.cognition.2008.11.014
S. Kirkpatrick, C. D. Gelatt, M. P. Vecchi, Optimization by Simulated Annealing, Science, vol. 220, pp. 671–680 (1983). DOI: 10.1126/science.220.4598.671
Sepp Hochreiter, Jürgen Schmidhuber, Long short-term memory, Neural Computation, vol. 9, pp. 1735–1780 (1997). DOI: 10.1162/neco.1997.9.8.1735
Jia Deng, Wei Dong, Richard Socher, Li-Jia Li, Kai Li, Li Fei-Fei, ImageNet: A large-scale hierarchical image database, Computer Vision and Pattern Recognition, pp. 248–255 (2009). DOI: 10.1109/CVPR.2009.5206848
Y. LeCun, L. Bottou, Y. Bengio, P. Haffner, Gradient-based learning applied to document recognition, Proceedings of the IEEE, vol. 86, pp. 2278–2324 (1998). DOI: 10.1109/5.726791
Adam Coates, Honglak Lee, Andrew Y. Ng, An analysis of single-layer networks in unsupervised feature learning, International Conference on Artificial Intelligence and Statistics, vol. 15, pp. 215–223 (2011)
M. P. Kumar, Benjamin Packer, Daphne Koller, Self-Paced Learning for Latent Variable Models, Neural Information Processing Systems, vol. 23, pp. 1189–1197 (2010)