| Type | Algorithm | Used for |
|---|---|---|
| Supervised | Artificial Neural Networks | Regression & Classification |
| Supervised | Convolutional Neural Networks | Computer Vision |
| Supervised | Recurrent Neural Networks | Time Series Analysis |
| Unsupervised | Self-Organizing Maps | Feature Detection |
| Unsupervised | Deep Boltzmann Machines | Recommendation Systems |
| Unsupervised | AutoEncoders | Recommendation Systems |
Visual Representation of an Autoencoder:

As the name suggests, an Autoencoder encodes its own input.
It takes some inputs, passes them through a hidden layer, and produces outputs, aiming for the outputs to be identical to the inputs.
Autoencoders are not a pure type of unsupervised Deep Learning algorithm.
They are actually a self-supervised Deep Learning algorithm, because at the end they compare their outputs against something, namely the inputs themselves.
The hidden layer is also called the coding layer or the bottleneck.
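To make the picture concrete, here is a minimal sketch of a single forward pass, assuming a sigmoid activation, six inputs and a three-node bottleneck (all illustrative choices, not taken from the original):

```python
import numpy as np

# Minimal sketch of one forward pass through an autoencoder:
# 6 input values are squeezed through a 3-node bottleneck (the coding layer)
# and expanded back to 6 outputs that should approximate the inputs.

def sigmoid(a):
    return 1.0 / (1.0 + np.exp(-a))

rng = np.random.default_rng(0)
x = rng.random(6)                                           # input vector

W_enc, b_enc = rng.standard_normal((3, 6)), np.zeros(3)    # encoder weights
W_dec, b_dec = rng.standard_normal((6, 3)), np.zeros(6)    # decoder weights

z = sigmoid(W_enc @ x + b_enc)    # bottleneck / coding layer (3 values)
y = sigmoid(W_dec @ z + b_dec)    # reconstruction (6 values, same size as x)

print("input         :", np.round(x, 2))
print("reconstruction:", np.round(y, 2))   # untrained, so it will not match yet
```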
Uses:
By Malte Skarupke (2016)

STEP 1: We start with an array where the rows (the observations) correspond to the users and the columns (the features) correspond to the movies. Each cell (u, i) contains the rating (from 1 to 5, 0 if no rating) of movie i by user u.
STEP 2: The first user goes into the network. The input vector x = (r1, r2, ..., rm) contains all of that user's ratings for all the movies.
STEP 3: The input vector x is encoded into a vector z of lower dimension by a mapping function f (e.g. the sigmoid function):
z = f(Wx + b)
where
W is the matrix of input weights and b is the bias vector
STEP 4: z is then decoded into the output vector y, of the same dimension as x, aiming to replicate the input vector x.
STEP 5: The reconstruction error d(x, y) = ||x-y|| is computed. The goal is to minimize it.
STEP 6: Back-Propagation: from right to left, the error is back-propagated. The weights are updated according to how much they are responsible for the error. The learning rate decides by how much we update the weights.
STEP 7: Repeat Steps 1 to 6 and update the weights after each observation (Online/Stochastic Learning). Or:
Repeat Steps 1 to 6 but update the weights only after a batch of observations (Batch Learning).
STEP 8: When the whole training set has passed through the ANN, that makes an epoch. Repeat for more epochs.
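The eight steps above map almost line for line onto a short training loop. Below is a hedged sketch in PyTorch; the toy ratings matrix, the hidden-layer size, the MSE loss and the RMSprop optimizer are illustrative assumptions rather than the exact choices of the original:

```python
import torch
import torch.nn as nn

# STEP 1: users x movies matrix of ratings (0 = no rating).
ratings = torch.tensor([[5., 3., 0., 1.],
                        [4., 0., 0., 1.],
                        [1., 1., 0., 5.],
                        [0., 1., 5., 4.]])
n_movies = ratings.shape[1]

class AutoEncoder(nn.Module):
    def __init__(self, n_visible, n_hidden=2):
        super().__init__()
        self.encode = nn.Linear(n_visible, n_hidden)   # W and b (STEP 3)
        self.decode = nn.Linear(n_hidden, n_visible)   # decoder (STEP 4)

    def forward(self, x):
        z = torch.sigmoid(self.encode(x))   # z = f(Wx + b)
        return self.decode(z)               # y, same dimension as x

model = AutoEncoder(n_movies)
criterion = nn.MSELoss()                    # reconstruction error d(x, y) (STEP 5)
optimizer = torch.optim.RMSprop(model.parameters(), lr=0.01)

for epoch in range(200):                    # STEP 8: several epochs
    for x in ratings:                       # STEP 2: one user at a time
        y = model(x)
        loss = criterion(y, x)              # in practice the zero (unrated) entries
        optimizer.zero_grad()               # are usually masked out of the loss
        loss.backward()                     # STEP 6: back-propagation
        optimizer.step()                    # STEP 7: update after each observation
```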
By Francois Chollet (2016)


By Chris McCormick (2014)

By Eric Wilkinson (2014)

By Alireza Makhzani et al. (2014)

The Denoising Autoencoder is another regularization technique, used to combat the problem that arises when we have more nodes in the hidden layer than in the input layer: the network could otherwise simply copy its inputs to its outputs.
Instead, a random selection of the input values is zeroed out before the vector enters the network, while the output is still compared against the original, uncorrupted inputs.
Because it depends on this random selection of which values get zeroed out, the Denoising Autoencoder is a stochastic type of Autoencoder.
By Pascal Vincent et al. (2008)
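A minimal sketch of the corruption step, assuming PyTorch and a 30% zero-out fraction (both illustrative choices):

```python
import torch

def corrupt(x, zero_fraction=0.3):
    # Randomly choose which values to zero out -- this random selection is
    # what makes the denoising autoencoder a stochastic autoencoder.
    mask = (torch.rand_like(x) > zero_fraction).float()
    return x * mask

x = torch.tensor([5., 3., 0., 1.])
noisy = corrupt(x)                  # e.g. tensor([5., 0., 0., 1.])

# During training, the corrupted vector goes into the network, but the
# reconstruction error is measured against the original, uncorrupted x:
#   loss = criterion(model(noisy), x)
```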


By Salah Rifai et al. (2011)


By Pascal Vincent et al. (2010)


These are Restricted Boltzmann Machines (RBMs) that are stacked, pre-trained layer by layer, then unrolled, and finally fine-tuned with back-propagation. The unrolling is what gives the network directionality, so back-propagation can be applied.
In essence, a Deep Autoencoder comes from RBMs.
Stacked Autoencoders are just normal autoencoders stacked on top of each other; a Deep Autoencoder is RBMs stacked on top of each other, with additional steps applied to them in order to achieve an autoencoding mechanism.
By Geoffrey Hinton et al. (2006)
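Below is a rough sketch of the "stack, pre-train layer by layer, unroll, fine-tune" recipe. Note that it substitutes plain autoencoder layers for RBMs in the pre-training step (contrastive divergence is not shown), so it really illustrates a stacked autoencoder approximating the deep-autoencoder workflow; the layer widths are arbitrary:

```python
import torch
import torch.nn as nn

sizes = [784, 256, 64, 16]          # illustrative layer widths

# 1) Greedy layer-wise pre-training: one shallow autoencoder per layer.
encoders, decoders = [], []
for n_in, n_out in zip(sizes[:-1], sizes[1:]):
    enc, dec = nn.Linear(n_in, n_out), nn.Linear(n_out, n_in)
    # ... train sigmoid(dec(sigmoid(enc(h)))) to reconstruct h, the output
    #     of the previously trained encoders ...
    encoders.append(enc)
    decoders.append(dec)

# 2) "Unroll": stack all encoders followed by the decoders in reverse order,
#    which gives a directed network that input can flow through end to end.
layers = []
for enc in encoders:
    layers += [enc, nn.Sigmoid()]
for dec in reversed(decoders):
    layers += [dec, nn.Sigmoid()]
deep_autoencoder = nn.Sequential(*layers)

# 3) Fine-tune the unrolled network with ordinary back-propagation,
#    minimising the reconstruction error on the full training set.
```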
