References

Python / NumPy

[1] Bill Lubanovic. Introducing Python¹. O'Reilly Media, 2014.

¹ The Chinese edition, 《Python 语言及其应用》, translated by Liang Jie et al., was published by Posts & Telecom Press in 2015. (Editor's note)

[2] Wes McKinney. Python for Data Analysis². O'Reilly Media, 2012.

² The Chinese edition, 《利用 Python 进行数据分析》, translated by Tang Xuetao, was published by China Machine Press in 2013. (Editor's note)

[3] Scipy Lecture Notes.

Computational graphs (error backpropagation)

[4] Andrej Karpathy's blog: Hacker's guide to Neural Networks.

Online courses (materials) on deep learning

[5] CS231n: Convolutional Neural Networks for Visual Recognition.

Parameter update methods

[6] John Duchi, Elad Hazan, and Yoram Singer (2011): Adaptive Subgradient Methods for Online Learning and Stochastic Optimization. Journal of Machine Learning Research 12, Jul (2011), 2121-2159.

[7] Tieleman, T., & Hinton, G. (2012): Lecture 6.5 - RMSProp: Divide the gradient by a running average of its recent magnitude. COURSERA: Neural Networks for Machine Learning.

[8] Diederik Kingma and Jimmy Ba (2014): Adam: A Method for Stochastic Optimization. arXiv:1412.6980 [cs] (December 2014).

Initial values of the weight parameters

[9] Xavier Glorot and Yoshua Bengio (2010): Understanding the difficulty of training deep feedforward neural networks. In Proceedings of the International Conference on Artificial Intelligence and Statistics (AISTATS 2010). Society for Artificial Intelligence and Statistics.

[10] Kaiming He, Xiangyu Zhang, Shaoqing Ren, and Jian Sun (2015): Delving Deep into Rectifiers: Surpassing Human-Level Performance on ImageNet Classification. In Proceedings of the IEEE International Conference on Computer Vision (ICCV), 1026-1034.

Batch Normalization / Dropout

[11] Sergey Ioffe and Christian Szegedy (2015): Batch Normalization: Accelerating Deep Network Training by Reducing Internal Covariate Shift. arXiv:1502.03167 [cs] (February 2015).

[12] Dmytro Mishkin and Jiri Matas (2015): All you need is a good init. arXiv:1511.06422 [cs] (November 2015).

[13] Frederik Kratzert's blog: Understanding the backward pass through Batch Normalization Layer.

[14] N. Srivastava, G. Hinton, A. Krizhevsky, I. Sutskever, and R. Salakhutdinov (2014): Dropout: A simple way to prevent neural networks from overfitting. The Journal of Machine Learning Research, pages 1929-1958, 2014.

Hyperparameter optimization

[15] James Bergstra and Yoshua Bengio (2012): Random Search for Hyper-Parameter Optimization. Journal of Machine Learning Research 13, Feb (2012), 281-305.

[16] Jasper Snoek, Hugo Larochelle, and Ryan P. Adams (2012): Practical Bayesian Optimization of Machine Learning Algorithms. In F. Pereira, C. J. C. Burges, L. Bottou, & K. Q. Weinberger, eds. Advances in Neural Information Processing Systems 25. Curran Associates, Inc., 2951-2959.

Visualization of CNNs

[17] Matthew D. Zeiler and Rob Fergus (2014): Visualizing and Understanding Convolutional Networks. In David Fleet, Tomas Pajdla, Bernt Schiele, & Tinne Tuytelaars, eds. Computer Vision - ECCV 2014. Lecture Notes in Computer Science. Springer International Publishing, 818-833.

[18] A. Mahendran and A. Vedaldi (2015): Understanding deep image representations by inverting them. In 2015 IEEE Conference on Computer Vision and Pattern Recognition (CVPR), 5188-5196.

[19] Donglai Wei, Bolei Zhou, Antonio Torralba, and William T. Freeman (2015): mNeuron: A Matlab Plugin to Visualize Neurons from Deep Models.

Representative networks

[20] Y. LeCun, L. Bottou, Y. Bengio, and P. Haffner (1998): Gradient-based learning applied to document recognition. Proceedings of the IEEE 86, 11 (November 1998), 2278-2324.

[21] Alex Krizhevsky, Ilya Sutskever, and Geoffrey E. Hinton (2012): ImageNet Classification with Deep Convolutional Neural Networks. In F. Pereira, C. J. C. Burges, L. Bottou, & K. Q. Weinberger, eds. Advances in Neural Information Processing Systems 25. Curran Associates, Inc., 1097-1105.

[22] Karen Simonyan and Andrew Zisserman (2014): Very Deep Convolutional Networks for Large-Scale Image Recognition. arXiv:1409.1556 [cs] (September 2014).

[23] Christian Szegedy et al. (2015): Going Deeper With Convolutions. In The IEEE Conference on Computer Vision and Pattern Recognition (CVPR).

[24] Kaiming He, Xiangyu Zhang, Shaoqing Ren, and Jian Sun (2015): Deep Residual Learning for Image Recognition. arXiv:1512.03385 [cs] (December 2015).

Datasets

[25] J. Deng, W. Dong, R. Socher, L.-J. Li, Kai Li, and Li Fei-Fei (2009): ImageNet: A large-scale hierarchical image database. In IEEE Conference on Computer Vision and Pattern Recognition, 2009. CVPR 2009, 248-255.

Speeding up computation

[26] Jia Yangqing (2014): Learning Semantic Image Representations at a Large Scale. PhD thesis, EECS Department, University of California, Berkeley, May 2014.

[27] NVIDIA blog: NVIDIA Propels Deep Learning with TITAN X, New DIGITS Training System and DevBox.

[28] Google Research Blog: Announcing TensorFlow 0.8 - now with distributed computing support!

[29] Martín Abadi et al. (2016): TensorFlow: Large-Scale Machine Learning on Heterogeneous Distributed Systems. arXiv:1603.04467 [cs] (March 2016).

[30] Suyog Gupta, Ankur Agrawal, Kailash Gopalakrishnan, and Pritish Narayanan (2015): Deep learning with limited numerical precision. CoRR, abs/1502.02551 392 (2015).

[31] Matthieu Courbariaux and Yoshua Bengio (2016): Binarized Neural Networks: Training Deep Neural Networks with Weights and Activations Constrained to +1 or -1. arXiv preprint arXiv:1602.02830 (2016).

Leaderboard of recognition accuracy on the MNIST dataset and the highest-accuracy methods

[32] Rodrigo Benenson's blog: Classification datasets results.

[33] Li Wan, Matthew Zeiler, Sixin Zhang, Yann L. Cun, and Rob Fergus (2013): Regularization of Neural Networks using DropConnect. In Sanjoy Dasgupta & David McAllester, eds. Proceedings of the 30th International Conference on Machine Learning (ICML 2013). JMLR Workshop and Conference Proceedings, 1058-1066.

Applications of deep learning

[34] Visual Object Classes Challenge 2012 (VOC2012).

[35] Ross Girshick, Jeff Donahue, Trevor Darrell, and Jitendra Malik (2014): Rich Feature Hierarchies for Accurate Object Detection and Semantic Segmentation. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition (CVPR), 580-587.

[36] Shaoqing Ren, Kaiming He, Ross Girshick, and Jian Sun (2015): Faster R-CNN: Towards Real-Time Object Detection with Region Proposal Networks. In C. Cortes, N. D. Lawrence, D. D. Lee, M. Sugiyama, & R. Garnett, eds. Advances in Neural Information Processing Systems 28. Curran Associates, Inc., 91-99.

[37] Jonathan Long, Evan Shelhamer, and Trevor Darrell (2015): Fully Convolutional Networks for Semantic Segmentation. In The IEEE Conference on Computer Vision and Pattern Recognition (CVPR).

[38] Oriol Vinyals, Alexander Toshev, Samy Bengio, and Dumitru Erhan (2015): Show and Tell: A Neural Image Caption Generator. In The IEEE Conference on Computer Vision and Pattern Recognition (CVPR).

[39] Leon A. Gatys, Alexander S. Ecker, and Matthias Bethge (2015): A Neural Algorithm of Artistic Style. arXiv:1508.06576 [cs, q-bio] (August 2015).

[40] neural-style: Torch implementation of neural style algorithm.

[41] Alec Radford, Luke Metz, and Soumith Chintala (2015): Unsupervised Representation Learning with Deep Convolutional Generative Adversarial Networks. arXiv:1511.06434 [cs] (November 2015).

[42] Vijay Badrinarayanan, Alex Kendall, and Roberto Cipolla (2015): SegNet: A Deep Convolutional Encoder-Decoder Architecture for Image Segmentation. arXiv preprint arXiv:1511.00561 (2015).

[43] SegNet Demo page.

[44] Volodymyr Mnih et al. (2015): Human-level control through deep reinforcement learning. Nature 518, 7540 (2015), 529-533.

[45] David Silver et al. (2016): Mastering the game of Go with deep neural networks and tree search. Nature 529, 7587 (2016), 484-489.