Bibliography
[1] “NumPy reference.” https://docs.scipy.org/doc/numpy-1.13.0/reference/, 2017.
[2] “SciPy reference.” https://docs.scipy.org/doc/scipy-1.0.0/reference/, 2017.
[3] “Matplotlib pyplot reference.” https://matplotlib.org/api/pyplot_api.html, 2017.
[4] J. Johnson, “Python numpy tutorial.” http://cs231n.github.io/python-numpy-tutorial/,
2017.
[5] “Luma coding in video systems.” https://en.wikipedia.org/wiki/Grayscale#Luma_coding_in_video_systems, 2017.
[6] Wikipedia, “DFT matrix — Wikipedia, the free encyclopedia.” http://en.wikipedia.org/w/index.php?title=DFT%20matrix&oldid=811427639, 2017.
[7] J. L. Bentley, “Multidimensional binary search trees used for associative searching,” Communications of the ACM, vol. 18, no. 9, pp. 509–517, 1975.
[8] D. R. Cox, “The regression analysis of binary sequences,” Journal of the Royal Statistical
Society. Series B (Methodological), pp. 215–242, 1958.
[9] M. Aly, “Survey on multiclass classification methods,” Neural Networks, 2005.
[10] T. P. Minka, “A comparison of numerical optimizers for logistic regression,” 2003.
[11] C. Cortes and V. Vapnik, “Support-vector networks,” Machine learning, vol. 20, no. 3, pp. 273–
297, 1995.
[12] C.-C. Chang and C.-J. Lin, “LIBSVM: A library for support vector machines,” ACM Transactions on Intelligent Systems and Technology, vol. 2, pp. 27:1–27:27, 2011. Software available at http://www.csie.ntu.edu.tw/~cjlin/libsvm.
[13] H. Yu and S. Kim, “SVM tutorial: classification, regression and ranking,” in Handbook of Natural Computing, pp. 479–506, Springer, 2012.
[14] I. Goodfellow, Y. Bengio, and A. Courville, Deep Learning. MIT Press, 2016. http://www.deeplearningbook.org.
[15] J. Johnson, “Backpropagation for a Linear Layer.” http://cs231n.stanford.edu/handouts/linear-backprop.pdf. [Online; accessed 22-Dec-2017].
[16] X. Glorot and Y. Bengio, “Understanding the difficulty of training deep feedforward neural
networks,” in Proceedings of the Thirteenth International Conference on Artificial Intelligence
and Statistics, pp. 249–256, 2010.
[17] V. Nair and G. E. Hinton, “Rectified linear units improve restricted Boltzmann machines,” in Proceedings of the 27th International Conference on Machine Learning (ICML-10), pp. 807–814, 2010.
[18] K. He, X. Zhang, S. Ren, and J. Sun, “Deep residual learning for image recognition,” in Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pp. 770–778, 2016.
[19] A. Krizhevsky, “Learning multiple layers of features from tiny images,” Technical Report, 2009.
[20] A. Krizhevsky, I. Sutskever, and G. E. Hinton, “ImageNet classification with deep convolutional neural networks,” in Advances in neural information processing systems, pp. 1097–1105, 2012.
[21] A. Lavin and S. Gray, “Fast algorithms for convolutional neural networks,” in Proceedings of
the IEEE Conference on Computer Vision and Pattern Recognition, pp. 4013–4021, 2016.
[22] K. Simonyan and A. Zisserman, “Very deep convolutional networks for large-scale image recognition,” arXiv preprint arXiv:1409.1556, 2014.
[23] A. Paszke, S. Gross, S. Chintala, G. Chanan, E. Yang, Z. DeVito, Z. Lin, A. Desmaison, L. Antiga, and A. Lerer, “Automatic differentiation in PyTorch,” 2017.
[24] O. Russakovsky, J. Deng, H. Su, J. Krause, S. Satheesh, S. Ma, Z. Huang, A. Karpathy, A. Khosla, M. Bernstein, et al., “ImageNet large scale visual recognition challenge,” International Journal of Computer Vision, vol. 115, no. 3, pp. 211–252, 2015.
[25] “iNaturalist challenge at FGVC 2017.” https://www.kaggle.com/c/inaturalist-challenge-at-fgvc-2017. Accessed: 2018-04-11.
[26] E. Learned-Miller, G. B. Huang, A. RoyChowdhury, H. Li, and G. Hua, “Labeled faces in the
wild: A survey,” in Advances in face detection and facial image analysis, pp. 189–248, Springer,
2016.
[27] B. Zhou, A. Lapedriza, J. Xiao, A. Torralba, and A. Oliva, “Learning deep features for scene
recognition using places database,” in Advances in neural information processing systems,
pp. 487–495, 2014.
[28] “iMaterialist challenge at FGVC 2018.” https://www.kaggle.com/c/imaterialist-challenge-furniture-2018. Accessed: 2018-04-11.
[29] L. Fei-Fei, R. Fergus, and P. Perona, “Learning generative visual models from few training examples: An incremental Bayesian approach tested on 101 object categories,” Computer Vision and Image Understanding, vol. 106, no. 1, pp. 59–70, 2007.
[30] G. Huang, Z. Liu, L. van der Maaten, and K. Q. Weinberger, “Densely connected convolutional
networks,” in Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition,
2017.
[31] C. Szegedy, W. Liu, Y. Jia, P. Sermanet, S. Reed, D. Anguelov, D. Erhan, V. Vanhoucke, and A. Rabinovich, “Going deeper with convolutions,” in Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pp. 1–9, 2015.
[32] S. Hochreiter and J. Schmidhuber, “Long short-term memory,” Neural computation, vol. 9,
no. 8, pp. 1735–1780, 1997.
[33] K. Cho, B. Van Merriënboer, C. Gulcehre, D. Bahdanau, F. Bougares, H. Schwenk, and Y. Bengio, “Learning phrase representations using RNN encoder-decoder for statistical machine translation,” arXiv preprint arXiv:1406.1078, 2014.
[34] B. Pang, L. Lee, and S. Vaithyanathan, “Thumbs up?: sentiment classification using machine
learning techniques,” in Proceedings of the ACL-02 conference on Empirical methods in natural
language processing-Volume 10, pp. 79–86, Association for Computational Linguistics, 2002.
[35] V. Metsis, I. Androutsopoulos, and G. Paliouras, “Spam filtering with naive Bayes - which naive Bayes?,” in Proceedings of the 3rd Conference on Email and Anti-Spam (CEAS 2006), 2006.
[36] X. Li and D. Roth, “Learning question classifiers,” in Proceedings of the 19th international conference on Computational linguistics-Volume 1, pp. 1–7, Association for Computational Linguistics, 2002.
[37] K. Lang, “Newsweeder: Learning to filter netnews,” in Proceedings of the Twelfth International
Conference on Machine Learning, pp. 331–339, 1995.
[38] D. P. Kingma and J. Ba, “Adam: A method for stochastic optimization,” arXiv preprint
arXiv:1412.6980, 2014.