Geoffrey Hinton, who has been called “the Godfather of Deep Learning”, published an article last week. Hinton is an important researcher in the machine learning community, so when he publishes an article, people get quite excited!
Who is he?
Hinton is one of the researchers who worked on backpropagation (the original article, from 1986, is here). Backpropagation then became the most widely used method for training deep learning models.
More recently, however, he has explained that he doesn’t believe backpropagation is the best way to do AI: it is a method that works, but not the best one. More about his point of view here.
Other concerns have appeared recently about the fact that changing a few pixels in an image can completely fool a deep learning classification model. Several papers have been published on the subject: here, here, or here!
What is the article about?
The article aims to reduce the influence of single pixels and to preserve the spatial relationships between elements. In a nutshell: to be more robust.
The article introduces a capsule model called CapsNet containing 3 layers: two convolutional layers and one dense layer. This is the architecture of CapsNet:
a) Convolutional layer
The first layer is a traditional convolutional layer.
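To make this concrete, here is a minimal NumPy sketch of what such a convolutional layer computes. The shapes (a 28×28 MNIST-sized input and a 9×9 kernel with stride 1, followed by a ReLU) match the first layer described in the paper; in the paper this is done with 256 kernels, but a single kernel is enough to show the operation.

```python
import numpy as np

def conv2d(image, kernel, stride=1):
    """Valid 2D convolution (cross-correlation) of a single-channel image."""
    kh, kw = kernel.shape
    oh = (image.shape[0] - kh) // stride + 1
    ow = (image.shape[1] - kw) // stride + 1
    out = np.empty((oh, ow))
    for i in range(oh):
        for j in range(ow):
            patch = image[i * stride:i * stride + kh,
                          j * stride:j * stride + kw]
            out[i, j] = np.sum(patch * kernel)
    return out

# One 9x9 kernel on a 28x28 input, then a ReLU, as in CapsNet's first layer.
image = np.random.rand(28, 28)
kernel = np.random.rand(9, 9)
feature_map = np.maximum(conv2d(image, kernel), 0.0)  # ReLU
print(feature_map.shape)  # (20, 20)
```

A real implementation would vectorize this loop (or use a framework's conv op), but the arithmetic is the same.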
b) Capsule layer
The second layer is a convolutional capsule layer containing 32 channels of 8D capsules. A capsule layer is basically a layer containing other layers: the convolutional operation is applied 32 times and all the resulting feature maps are grouped together, so that each capsule outputs an 8-dimensional vector instead of a single scalar.
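The grouping step can be sketched in a few lines of NumPy. Assuming the shapes from the paper (the primary capsule stage ends with 32 channels × 8 dimensions of 6×6 feature maps), the conv output is simply reshaped so that every group of 8 values becomes one capsule vector:

```python
import numpy as np

# Assumed shape from the paper: 6x6 spatial grid, 32 channels x 8 dims each.
conv_out = np.random.rand(6, 6, 32 * 8)

# Regroup into 6*6*32 = 1152 capsules, each an 8-dimensional vector.
capsules = conv_out.reshape(-1, 8)
print(capsules.shape)  # (1152, 8)
```

The key point is that downstream layers now operate on 1152 vectors rather than on 9216 independent scalars.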
c) Routing algorithm and DigitCaps
The final layer is DigitCaps, which uses a routing-by-agreement algorithm. Hinton replaces max-pooling with this routing algorithm: instead of squashing the output of each individual unit, it squashes entire vectors, so that a vector's length can represent the probability that an entity is present while its orientation encodes the entity's properties.
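The two ingredients here are the squash non-linearity and the routing loop, both of which are short enough to sketch in NumPy. The squash function scales a vector s by ‖s‖²/(1+‖s‖²), keeping its direction but mapping its length into [0, 1); the routing loop then iteratively increases the coupling between a lower capsule and an upper capsule whose outputs agree. This is a simplified sketch (plain softmax and dot-product agreement, no learned transformation matrices shown), not the full CapsNet implementation:

```python
import numpy as np

def squash(s, axis=-1, eps=1e-8):
    """Squash a capsule vector: keeps direction, maps length into [0, 1)."""
    sq_norm = np.sum(s ** 2, axis=axis, keepdims=True)
    scale = sq_norm / (1.0 + sq_norm)
    return scale * s / np.sqrt(sq_norm + eps)

def routing(u_hat, iterations=3):
    """Routing-by-agreement.

    u_hat: (n_in, n_out, dim) prediction vectors from lower capsules.
    Returns the (n_out, dim) output capsule vectors."""
    b = np.zeros(u_hat.shape[:2])  # routing logits, start uniform
    for _ in range(iterations):
        c = np.exp(b) / np.exp(b).sum(axis=1, keepdims=True)  # coupling coeffs
        s = np.einsum('ij,ijd->jd', c, u_hat)                 # weighted sum
        v = squash(s)                                         # squash per capsule
        b = b + np.einsum('ijd,jd->ij', u_hat, v)             # agreement update
    return v

# A vector of length 5 is squashed to length 25/26 ~ 0.96.
v = squash(np.array([3.0, 4.0]))
print(np.linalg.norm(v))
```

Every output vector produced by `routing` has length strictly below 1, which is what lets CapsNet read those lengths as presence probabilities.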
Hinton achieved state-of-the-art performance on MNIST and claims that his algorithm performs far better than a classic convolutional network on overlapping digits. He gives examples of these kinds of digits:
This was a quick overview of Hinton’s exciting new paper. I am looking forward to trying the implementations on some new datasets! 🙂