Tuesday, April 29, 2014

[keep updating] Deep Learning

This has been the buzzword in the machine learning community recently. Let's start with a 101 article. http://markus.com/deep-learning-101/

Start with the tutorial:
http://ufldl.stanford.edu/wiki/index.php/UFLDL_Tutorial
Chinese version:
http://deeplearning.stanford.edu/wiki/index.php/UFLDL%E6%95%99%E7%A8%8B
and another one:
http://deeplearning.net/tutorial/gettingstarted.html#


Open-source library:
http://deeplearning.net/software/theano/
Introductions in Chinese:
http://www.52ml.net/6.html
http://blog.csdn.net/mysee1989/article/details/11992535


First, some intuition; I have been using deep learning for a while now.
The motivation is that the human perceptual system is hierarchical. Take the visual system as an example: low-level neurons detect local low-level features such as edges, corners, and textures; higher-level neurons extract higher-level features on top of these, eventually forming high-level concepts.
Neural networks were extremely popular in the 1970s and 1980s, but were gradually displaced by the simpler SVM. The reasons: a. an NN has too many parameters, which makes optimization hard; b. with many layers it overfits easily.
The explosive growth of computing power solved the first problem (along with better optimization algorithms); the second is handled by adding regularization to the optimization (e.g., sparsity).
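
As a minimal sketch of the regularization idea (my own illustration; the function and parameter names are made up, not from the tutorials above), one common form is an L1 sparsity penalty added to the loss. The UFLDL tutorial's sparse autoencoder instead penalizes the average activation of the hidden units, but the structure is the same: data term plus penalty term.

import numpy as np

def loss_with_sparsity(w, X, y, lam=0.01):
    # Squared-error data term plus an L1 sparsity penalty on the weights.
    # lam controls how strongly the weights are pushed toward zero.
    data_term = np.mean((X.dot(w) - y) ** 2)
    sparsity_term = lam * np.sum(np.abs(w))
    return data_term + sparsity_term
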

Convolutional neural networks (CNNs) exploit spatial locality: when computing from a lower layer to an upper layer, only values within a local neighborhood are used, so training produces a set of convolutional filters. The benefits are higher computational efficiency and a lower-complexity trained model. A companion concept is max-pooling, which can be viewed as a nonlinear down-sampling: the maximum value within each neighborhood is passed up to the next layer, and the neighborhoods do not overlap.
Max-pooling makes the algorithm more robust, at the cost of losing information.
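
To make the two operations concrete, here is a rough numpy sketch (my own illustration, not code from the tutorials above): a "valid" convolution where each output depends only on a local neighborhood, followed by non-overlapping max-pooling.

import numpy as np

def conv2d_valid(img, kernel):
    # Slide the kernel over the image; each output value depends
    # only on a local neighborhood of the input (spatial locality).
    H, W = img.shape
    kh, kw = kernel.shape
    out = np.zeros((H - kh + 1, W - kw + 1))
    for i in range(out.shape[0]):
        for j in range(out.shape[1]):
            out[i, j] = np.sum(img[i:i+kh, j:j+kw] * kernel)
    return out

def max_pool(fmap, size=2):
    # Non-overlapping max-pooling: keep the maximum of each
    # size x size block -- a nonlinear down-sampling.
    H, W = fmap.shape
    H2, W2 = H // size, W // size
    blocks = fmap[:H2*size, :W2*size].reshape(H2, size, W2, size)
    return blocks.max(axis=(1, 3))

For example, conv2d_valid on an 8x8 image with a 3x3 kernel yields a 6x6 feature map, and max_pool(..., size=2) shrinks it to 3x3, passing up only the strongest response in each block.
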
