This is a summary article, written because I am interested in the development of CNN architectures.
- ImageNet: It is not a network but a large visual database, a project led by Fei-Fei Li, which has greatly boosted the development of computer vision. The ImageNet project runs a contest called the ImageNet Large Scale Visual Recognition Challenge (ILSVRC); from the name we can easily tell what the contest is about.
- Below is a graph summarizing the famous models presented in this contest. AlexNet was the first to use a deep network, with 8 layers; VGG uses 19 layers; GoogleNet uses 22 layers; and ResNet performs best and is the deepest, with 152 layers!
3. LeNet-5(1998)
This is a pioneering 7-level convolutional network developed by LeCun et al. in 1998. LeNet-5 classifies digits, and was applied by several banks to recognize hand-written numbers on checks, digitized into 32x32 pixel greyscale input images. Processing higher-resolution images requires larger and more numerous convolutional layers, so the technique is constrained by the availability of computing resources. The architecture is shown below:
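To make the structure concrete, here is a minimal LeNet-5-style sketch in PyTorch. This is my own approximation, not LeCun's original implementation: the original used trainable subsampling layers and a scaled tanh activation, which are replaced here by plain average pooling and Tanh.

```python
import torch
import torch.nn as nn

class LeNet5(nn.Module):
    """Rough LeNet-5-style network for 32x32 greyscale digit images."""
    def __init__(self, num_classes=10):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(1, 6, kernel_size=5),    # C1: 32x32 -> 28x28, 6 feature maps
            nn.Tanh(),
            nn.AvgPool2d(2),                   # S2: 28x28 -> 14x14 (subsampling)
            nn.Conv2d(6, 16, kernel_size=5),   # C3: 14x14 -> 10x10, 16 feature maps
            nn.Tanh(),
            nn.AvgPool2d(2),                   # S4: 10x10 -> 5x5
        )
        self.classifier = nn.Sequential(
            nn.Flatten(),
            nn.Linear(16 * 5 * 5, 120),        # C5: 120 units
            nn.Tanh(),
            nn.Linear(120, 84),                # F6: 84 units
            nn.Tanh(),
            nn.Linear(84, num_classes),        # output: 10 digit classes
        )

    def forward(self, x):
        return self.classifier(self.features(x))

# Usage: a batch of four 32x32 greyscale images.
x = torch.randn(4, 1, 32, 32)
print(LeNet5()(x).shape)  # torch.Size([4, 10])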
4. AlexNet(2012)
It is similar to LeNet; the main differences are that it is deeper, with more filters per layer, and with stacked convolutional layers. It consisted of 11x11, 5x5, and 3x3 convolutions, max pooling, dropout, data augmentation, ReLU activations, and SGD with momentum. It attached ReLU activations after every convolutional and fully-connected layer.
It reduced the ILSVRC top-5 error rate from 25.8% to 16.4%.
AlexNet was trained for six days simultaneously on two Nvidia GeForce GTX 580 GPUs, which is why the network is split into two pipelines. AlexNet was designed by the SuperVision group, consisting of Alex Krizhevsky, Geoffrey Hinton, and Ilya Sutskever. See the picture below:
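In addition to the picture, here is a rough single-pipeline AlexNet-style sketch in PyTorch. It loosely follows the commonly used torchvision layout rather than the original two-GPU split, so the channel counts should be treated as approximate:

```python
import torch
import torch.nn as nn

# AlexNet-style network: 11x11, 5x5 and stacked 3x3 convolutions with ReLU
# after every layer, max pooling, and dropout in the fully-connected layers.
alexnet_like = nn.Sequential(
    nn.Conv2d(3, 64, kernel_size=11, stride=4, padding=2), nn.ReLU(),  # 11x11 conv
    nn.MaxPool2d(3, stride=2),
    nn.Conv2d(64, 192, kernel_size=5, padding=2), nn.ReLU(),           # 5x5 conv
    nn.MaxPool2d(3, stride=2),
    nn.Conv2d(192, 384, kernel_size=3, padding=1), nn.ReLU(),          # stacked 3x3 convs
    nn.Conv2d(384, 256, kernel_size=3, padding=1), nn.ReLU(),
    nn.Conv2d(256, 256, kernel_size=3, padding=1), nn.ReLU(),
    nn.MaxPool2d(3, stride=2),
    nn.Flatten(),
    nn.Dropout(0.5), nn.Linear(256 * 6 * 6, 4096), nn.ReLU(),          # dropout + FC
    nn.Dropout(0.5), nn.Linear(4096, 4096), nn.ReLU(),
    nn.Linear(4096, 1000),                                             # 1000 ImageNet classes
)

x = torch.randn(1, 3, 224, 224)   # 224x224 RGB crop, as used for ImageNet
print(alexnet_like(x).shape)      # torch.Size([1, 1000])
```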
5. ZFNet(2013)
It was achieved mostly by tuning the hyper-parameters of AlexNet while maintaining the same structure, together with some additional deep learning elements.
6. GoogleNet/Inception(2014)
See: Going Deeper with Convolutions
The name Inception comes from the meme "we need to go deeper". The best thing in this paper is its use of Inception modules: this cascaded cross-channel parametric pooling structure, realized with 1x1 convolutions, allows complex and learnable interactions of cross-channel information (a minimal sketch of such a module appears at the end of this section).
GoogLeNet is a particular instance of the Inception architecture. It looks like this:
The winner of the ILSVRC 2014 competition was GoogLeNet (a.k.a. Inception V1) from Google. It achieved an error rate of 6.67%! This was very close to human-level performance, which the organizers of the challenge were then obliged to evaluate. As it turns out, this was actually rather hard to do and required some human training in order to beat GoogLeNet's accuracy. After a few days of training, the human expert (Andrej Karpathy) was able to achieve an error rate of 5.1% (single model) and 3.6% (ensemble). The network used a CNN inspired by LeNet but implemented a novel element called the Inception module, which is based on several very small convolutions in order to drastically reduce the number of parameters. It also used batch normalization, image distortions, and RMSprop. The architecture is a 22-layer deep CNN, yet it reduced the number of parameters from 60 million (AlexNet) to 4 million.
A rough estimate suggests that GoogLeNet could be trained to convergence on a few high-end GPUs within a week, the main limitation being memory usage.
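To make the parameter saving concrete, here is a minimal sketch of an Inception-v1-style module in PyTorch. The branch channel counts are illustrative (they happen to match the commonly quoted "inception 3a" configuration), and the 1x1 convolutions act as bottlenecks that shrink the channel dimension before the more expensive 3x3 and 5x5 convolutions:

```python
import torch
import torch.nn as nn

class InceptionModule(nn.Module):
    """Inception-v1-style module: parallel 1x1, 3x3, 5x5 and pooling branches,
    concatenated along the channel dimension."""
    def __init__(self, in_ch, c1, c3_reduce, c3, c5_reduce, c5, pool_proj):
        super().__init__()
        self.branch1 = nn.Sequential(
            nn.Conv2d(in_ch, c1, kernel_size=1), nn.ReLU())
        self.branch3 = nn.Sequential(              # 1x1 bottleneck, then 3x3
            nn.Conv2d(in_ch, c3_reduce, kernel_size=1), nn.ReLU(),
            nn.Conv2d(c3_reduce, c3, kernel_size=3, padding=1), nn.ReLU())
        self.branch5 = nn.Sequential(              # 1x1 bottleneck, then 5x5
            nn.Conv2d(in_ch, c5_reduce, kernel_size=1), nn.ReLU(),
            nn.Conv2d(c5_reduce, c5, kernel_size=5, padding=2), nn.ReLU())
        self.branch_pool = nn.Sequential(          # 3x3 max pool, then 1x1 projection
            nn.MaxPool2d(3, stride=1, padding=1),
            nn.Conv2d(in_ch, pool_proj, kernel_size=1), nn.ReLU())

    def forward(self, x):
        return torch.cat([self.branch1(x), self.branch3(x),
                          self.branch5(x), self.branch_pool(x)], dim=1)

# Usage: 192 input channels -> 64 + 128 + 32 + 32 = 256 output channels.
x = torch.randn(1, 192, 28, 28)
print(InceptionModule(192, 64, 96, 128, 16, 32, 32)(x).shape)  # [1, 256, 28, 28]
```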
7. VGGNet(2014)
The runner-up at the ILSVRC 2014 competition is called VGGNet by the community and was developed by Simonyan and Zisserman. VGG-16 consists of 16 weight layers (13 convolutional and 3 fully-connected) and is very appealing because of its very uniform architecture: similar in spirit to AlexNet, but using only 3x3 convolutions, with lots of filters. It was trained on 4 GPUs for 2–3 weeks. It is currently one of the most preferred choices in the community for extracting features from images. The weight configuration of VGGNet is publicly available and has been used as a baseline feature extractor in many other applications and challenges. However, VGGNet has 138 million parameters, which can be a bit challenging to handle.
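The uniformity is easy to see in code. Below is a sketch of a VGG-style block in PyTorch, assuming the standard 3x3-convolution/2x2-pool layout; the helper name vgg_block is my own:

```python
import torch.nn as nn

def vgg_block(in_ch, out_ch, num_convs):
    """A VGG-style block: a stack of 3x3 convolutions (stride 1, padding 1),
    each followed by ReLU, then a 2x2 max pool that halves the resolution."""
    layers = []
    for i in range(num_convs):
        layers += [nn.Conv2d(in_ch if i == 0 else out_ch, out_ch,
                             kernel_size=3, padding=1),
                   nn.ReLU()]
    layers.append(nn.MaxPool2d(2, stride=2))
    return nn.Sequential(*layers)

# The VGG-16 convolutional part is five such blocks stacked
# (2 + 2 + 3 + 3 + 3 = 13 conv layers), followed by three FC layers.
features = nn.Sequential(
    vgg_block(3, 64, 2),
    vgg_block(64, 128, 2),
    vgg_block(128, 256, 3),
    vgg_block(256, 512, 3),
    vgg_block(512, 512, 3),
)
```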
8. ResNet(2015)
At last, at ILSVRC 2015, the so-called Residual Neural Network (ResNet) by Kaiming He et al. introduced a novel architecture with "skip connections" and heavy use of batch normalization. Such skip connections are closely related to the gated units (such as gated recurrent units) that have recently been successful in RNNs. Thanks to this technique they were able to train a network with 152 layers while still having lower complexity than VGGNet. It achieves an error rate of 3.57%, which beats human-level performance on this dataset.
See: Deep Residual Learning for Image Recognition
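The skip connection itself is simple. Here is a minimal sketch of a basic residual block in PyTorch, assuming the input and output have the same number of channels; the deeper ResNets (including the 152-layer model) actually use a 1x1-3x3-1x1 bottleneck variant and projection shortcuts when the shapes change.

```python
import torch
import torch.nn as nn

class ResidualBlock(nn.Module):
    """Basic residual block: two 3x3 conv/BN layers plus an identity skip connection."""
    def __init__(self, channels):
        super().__init__()
        self.conv1 = nn.Conv2d(channels, channels, kernel_size=3, padding=1, bias=False)
        self.bn1 = nn.BatchNorm2d(channels)
        self.conv2 = nn.Conv2d(channels, channels, kernel_size=3, padding=1, bias=False)
        self.bn2 = nn.BatchNorm2d(channels)
        self.relu = nn.ReLU()

    def forward(self, x):
        identity = x                       # the skip connection
        out = self.relu(self.bn1(self.conv1(x)))
        out = self.bn2(self.conv2(out))
        return self.relu(out + identity)   # add the input back before the final ReLU

# Usage: output shape equals input shape, so blocks can be stacked very deep.
x = torch.randn(1, 64, 56, 56)
print(ResidualBlock(64)(x).shape)  # torch.Size([1, 64, 56, 56])
```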
9. Summary
10. References
3. Why does the Inception Module in GoogleNet use 1*1 convolutions?