1. Masayuki Tanaka
Aug. 17, 2015
Back-Propagation Algorithm for
Deep Neural Networks and
Contrastive Divergence Learning for
Restricted Boltzmann Machine
2. Outline
1. Examples of Deep Learning
2. RBM to Deep NN
3. Deep Neural Network (Deep NN)
– Back-Propagation (Supervised Learning)
4. Restricted Boltzmann Machine (RBM)
– Mathematics, Probabilistic Model and Inference Model
– Pre-training by Contrastive Divergence Learning
(Unsupervised Learning)
5. Inference Model with Distribution
http://bit.ly/dnnicpr2014
3. Deep learning
– MNIST (handwritten digits benchmark)
MNIST
Top performance in character recognition
8. Pros and Cons of Deep NN
Input
layer
Output
layer
Deep NN
Until a few years ago…
1. Tends to overfit
2. Learning information does not reach
the lower layers
・Pre-training with RBM
・Big data
ImageNet
More than 1.5 M labeled images
http://www.image-net.org/
Labeled Faces in the Wild
More than 10,000 face images
http://vis-www.cs.umass.edu/lfw/
High-performance network
9. Outline
1. Examples of Deep NNs
2. RBM to Deep NN
3. Deep Neural Network (Deep NN)
– Back-Propagation (Supervised Learning)
4. Restricted Boltzmann Machine (RBM)
– Mathematics, Probabilistic Model and Inference Model
– Pre-training by Contrastive Divergence Learning
(Unsupervised Learning)
5. Inference Model with Distribution
http://bit.ly/dnnicpr2014
10. Single Layer Neural Network
[Diagram: input-layer nodes v_1, v_2, v_3 connected by weights w_1, w_2, w_3 to a single output node h]
h = σ( Σ_i w_i v_i + b )
Sigmoid function:
σ(x) = 1 / (1 + e^(−x))
[Plot: the sigmoid function σ(x) for x from −6.0 to 6.0; the output rises from 0.0 to 1.0]
Single node output
Multiple nodes output
(Single Layer NN)
[Diagram: input layer v fully connected to output layer h]
h_j = σ( Σ_i w_ij v_i + b_j )
h = σ(Wᵀ v + b)
Vector representation of
Single layer NN
It is equivalent to the
inference model of the RBM
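As a concrete illustration, here is a minimal NumPy sketch of this inference; the layer sizes, random weights, and input vector are illustrative only, not taken from the slides.

```python
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

# Illustrative single layer NN: 3 input nodes, 2 output nodes.
rng = np.random.default_rng(0)
W = rng.normal(scale=0.1, size=(3, 2))   # weights w_ij (input x output)
b = np.zeros(2)                          # bias b_j
v = np.array([1.0, 0.0, 1.0])            # input vector v

h = sigmoid(W.T @ v + b)                 # h = sigma(W^T v + b)
print(h)
```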
12. Single layer NN to Deep NN
The deep NN is built up by stacking single layer NNs.
1st NN
2nd NN
k-th NN
Output data
Input data
The output of each single layer NN
becomes the input of the next single
layer NN.
The output data of the deep NN
is inferred by iterating this
process.
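A minimal sketch of this stacking, assuming each single layer NN keeps its own (W, b) pair; the layer sizes and random weights are illustrative.

```python
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def forward(v, layers):
    """Feed v through a stack of single layer NNs: the output of one
    layer becomes the input of the next."""
    h = v
    for W, b in layers:
        h = sigmoid(W.T @ h + b)
    return h

# Illustrative 3-layer stack: 4 -> 5 -> 3 -> 2 nodes.
rng = np.random.default_rng(0)
sizes = [4, 5, 3, 2]
layers = [(rng.normal(scale=0.1, size=(m, n)), np.zeros(n))
          for m, n in zip(sizes[:-1], sizes[1:])]
print(forward(rng.random(4), layers))
```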
13. Parameters estimation for deep NN
Parameters are estimated by a gradient
descent algorithm that minimizes
the difference between the output
data and the teacher data.
[Illustration: gradient descent on a curve y(x), with successive iterates x0, x1, x2]
The deep NN is built up by stacking single layer NNs.
1st NN
2nd NN
k-th NN
Teacher data
Input data
14. Parameters estimation for deep NN
Parameters are estimated by a gradient
descent algorithm that minimizes
the difference between the output
data and the teacher data.
The deep NN is built up by stacking single layer NNs.
1st NN
2nd NN
k-th NN
Teacher data
Input data
Back-propagation:
The gradients can be calculated by
propagating the error information backward.
15. Why is pre-training necessary?
1st NN
2nd NN
k-th NN
Teacher data
Input data
The back-propagation calculates the
gradient from the output layer to the input
layer.
The information from the back-propagation
cannot reach the deep layers.
The deep layers (1st layer, 2nd layer, …) are
better learned by unsupervised
learning.
Pre-training with the RBMs.
16. Pre-training with RBMs
1st NN
2nd NN
k-th NN
Input data
Single layer NN
RBM
Data
The inference of the single layer NN is
mathematically equivalent to the inference of
the RBM.
The RBM parameters are estimated by a maximum
likelihood algorithm from the given training data.
17. Pre-training and fine-tuning
Training data
Output data
Pre-training for
1st layer RBM
Training data
Output data
Pre-training for
2nd layer RBM
Input data
Teacher data
Back-propagation
copy
copy
copy
Fine-tuning of deep NN
Pre-training with RBMs
18. Feature vector extraction
Training data
Output data
Pre-training for
1st layer RBM
Training data
Output data
Pre-training for
2nd layer RBM
Input data
Feature
copy
copy
copy
Pre-training with RBMs
19. Outline
1. Examples of Deep NNs
2. RBM to Deep NN
3. Deep Neural Network (Deep NN)
– Back-Propagation (Supervised Learning)
4. Restricted Boltzmann Machine (RBM)
– Mathematics, Probabilistic Model and Inference Model
– Pre-training by Contrastive Divergence Learning
(Unsupervised Learning)
5. Inference Model with Distribution
http://bit.ly/dnnicpr2014
20. Back-Propagation Algorithm
Input data
Teacher data
Back-propagation
Output data
h = σ(Wᵀ v + b)   (vector representation of the single layer NN)
The goal of learning:
The weights W and biases b of each layer are
estimated so that the difference between
the output data and the teacher data is
minimized.
Objective function:
I = (1/2) Σ_k ( h_k^(L) − t_k )²
Efficient calculation of the gradients ∂I/∂W^(ℓ) is important.
The back-propagation algorithm is an efficient
algorithm to calculate these gradients.
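A compact NumPy sketch of back-propagation for a stack of sigmoid layers with this squared-error objective; the layer sizes, weights, and data are illustrative assumptions.

```python
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def backprop(v, t, layers):
    """Return (dI/dW, dI/db) for each layer of a sigmoid stack,
    where I = 0.5 * sum_k (h_k^(L) - t_k)^2."""
    # Forward pass, keeping every layer's activation.
    activations = [v]
    for W, b in layers:
        activations.append(sigmoid(W.T @ activations[-1] + b))
    # Backward pass: delta = dI/d(pre-activation) of the current layer.
    delta = (activations[-1] - t) * activations[-1] * (1 - activations[-1])
    grads = []
    for (W, b), a_in in zip(reversed(layers), reversed(activations[:-1])):
        grads.append((np.outer(a_in, delta), delta.copy()))
        delta = (W @ delta) * a_in * (1 - a_in)  # propagate the error backward
    return grads[::-1]

# Illustrative 4 -> 3 -> 2 network.
rng = np.random.default_rng(0)
layers = [(rng.normal(scale=0.1, size=(4, 3)), np.zeros(3)),
          (rng.normal(scale=0.1, size=(3, 2)), np.zeros(2))]
grads = backprop(rng.random(4), np.array([0.0, 1.0]), layers)
```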
21. Back-Propagation:
Gradient of the sigmoid function
Sigmoid function:
σ(x) = 1 / (1 + e^(−x))
Gradient of the sigmoid function:
∂σ/∂x = (1 − σ(x)) σ(x)
Derivation of the gradient of the sigmoid function
∂σ/∂x = ∂/∂x [ 1 / (1 + e^(−x)) ]
= −[ 1 / (1 + e^(−x))² ] × (−e^(−x))
= e^(−x) / (1 + e^(−x))²
= [ e^(−x) / (1 + e^(−x)) ] × [ 1 / (1 + e^(−x)) ]
= ( 1 − 1 / (1 + e^(−x)) ) × ( 1 / (1 + e^(−x)) )
= (1 − σ(x)) σ(x)
22. Back-Propagation: Simplification
Single layer NN
[Diagram: input layer v fully connected to output layer h]
h_j = σ( Σ_i w_ij v_i + b_j )
h = σ(Wᵀ v + b)
The bias can be absorbed into the weight matrix:
W′ = [W; bᵀ],  v′ = [v; 1]
h = σ(Wᵀ v + b) = σ(W′ᵀ v′)
Hereafter, let us consider only the weight W:
h = σ(Wᵀ v)
Vector representation of
the single layer NN
26. Tip for debugging the gradient calculation
Objective function:
I(θ) = (1/2) Σ_k ( h_k^(L)(v; θ) − t_k )²
Gradient calculated by the back-propagation: ∂I/∂θ_i
→ computationally efficient, but difficult to implement.
Definition of the gradient:
∂I/∂θ_i = lim_{ε→0} [ I(θ + ε·1_i) − I(θ) ] / ε     (1_i: the i-th element is 1, the others are 0)
Differential approximation:
Δ_i I = [ I(θ + ε·1_i) − I(θ) ] / ε
→ computationally inefficient, but easy to implement.
For small ε, Δ_i I ≈ ∂I/∂θ_i, so the two results can be compared to debug the back-propagation code.
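A sketch of this debugging tip: compute the finite-difference approximation and compare it with the analytic gradient. The toy objective and ε below are invented for the illustration.

```python
import numpy as np

def numerical_grad(I, theta, eps=1e-5):
    """Approximate dI/dtheta_i by (I(theta + eps*1_i) - I(theta)) / eps."""
    grad = np.zeros_like(theta)
    base = I(theta)
    for i in range(theta.size):
        theta_eps = theta.copy()
        theta_eps[i] += eps
        grad[i] = (I(theta_eps) - base) / eps
    return grad

# Toy objective I(theta) = 0.5 * ||theta - t||^2, whose exact gradient is theta - t.
t = np.array([1.0, -2.0, 0.5])
I = lambda theta: 0.5 * np.sum((theta - t) ** 2)
theta = np.array([0.3, 0.7, -1.2])
print(numerical_grad(I, theta))  # should be close to the analytic gradient
print(theta - t)
```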
27. Stochastic Gradient descent algorithm
(Mini-batch learning)
Whole training data: { (v, t)_1, (v, t)_2, ⋯, (v, t)_n, ⋯, (v, t)_N }
Parameters θ are learned with these samples.
I_n(θ) is the objective function associated with (v, t)_n.
A mini-batch (e.g. { (v, t)_2, (v, t)_7, (v, t)_9, (v, t)_11 }) is sampled from the whole training data
and the parameters are updated with it:
θ ← θ − η ∂I/∂θ
Another mini-batch (e.g. { (v, t)_3, (v, t)_6, (v, t)_10, (v, t)_12 }) is then sampled and the update is repeated.
To avoid overfitting, the parameters are updated with each mini-batch.
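A minimal sketch of mini-batch updating. The `grad(theta, batch)` argument is an assumed helper that returns ∂I/∂θ for the sampled mini-batch, and the toy usage at the end invents a simple quadratic objective.

```python
import numpy as np

def sgd(theta, data, grad, eta=0.1, batch_size=10, n_epochs=5, seed=0):
    """Mini-batch SGD: theta <- theta - eta * dI/dtheta on each sampled mini-batch."""
    rng = np.random.default_rng(seed)
    n = len(data)
    for _ in range(n_epochs):
        order = rng.permutation(n)
        for start in range(0, n, batch_size):
            batch = [data[i] for i in order[start:start + batch_size]]
            theta = theta - eta * grad(theta, batch)
    return theta

# Toy usage: each sample is a target t_n and I_n = 0.5 * ||theta - t_n||^2,
# so the optimum is the mean of the targets.
rng = np.random.default_rng(1)
data = [rng.normal(size=2) for _ in range(100)]
grad = lambda theta, batch: np.mean([theta - t for t in batch], axis=0)
print(sgd(np.zeros(2), data, grad))
```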
28. Practical update of parameters
G. Hinton,
A Practical Guide to Training
Restricted Boltzmann Machines 2010.
Size of mini-batch: 10 to 100
Learning rate η: empirically determined
Weight decay rate λ: 0.01 to 0.00001
Momentum rate ν: 0.9 (initially 0.5)
Update rule:
θ^(t+1) = θ^(t) + Δθ^(t)
Δθ^(t) = −η ∂I/∂θ − λ θ^(t) + ν Δθ^(t−1)
         (gradient)   (weight decay)   (momentum)
Weight decay avoids unnecessary
divergence of the weights
(especially for the sigmoid function).
Momentum avoids unnecessary
oscillation of the update amount.
An effect similar to the conjugate
gradient algorithm is expected.
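A sketch of this update rule; the gradient is assumed to come from back-propagation (or CD learning), and the default rates simply follow the ranges quoted above.

```python
import numpy as np

def update(theta, grad, delta_prev, eta=0.1, lam=1e-4, nu=0.9):
    """One step of theta^(t+1) = theta^(t) + Delta^(t), where
    Delta^(t) = -eta * dI/dtheta - lam * theta + nu * Delta^(t-1)."""
    delta = -eta * grad - lam * theta + nu * delta_prev
    return theta + delta, delta

# Toy usage with a dummy constant gradient.
theta, delta = np.zeros(3), np.zeros(3)
for _ in range(3):
    theta, delta = update(theta, np.array([1.0, -0.5, 0.2]), delta)
```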
29. Outline
1. Examples of Deep NNs
2. RBM to Deep NN
3. Deep Neural Network (Deep NN)
– Back-Propagation (Supervised Learning)
4. Restricted Boltzmann Machine (RBM)
– Mathematics, Probabilistic Model and Inference Model
– Pre-training by Contrastive Divergence Learning
(Unsupervised Learning)
5. Inference Model with Distribution
http://bit.ly/dnnicpr2014
30. Restricted Boltzmann Machines
Boltzmann Machines
A Boltzmann machine is a probabilistic model represented by
an undirected graph (nodes and edges).
Here, the binary states {0,1} are considered as the states of the nodes.
Unrestricted and restricted Boltzmann machines
v: visible layer
h: hidden layer
• (Unrestricted) Boltzmann machine • Restricted Boltzmann machine (RBM)
Every node is connected to every other node.
There are no edges within the same layer.
This restriction makes the analysis easier.
35. Outline
1. Examples of Deep NNs
2. RBM to Deep NN
3. Deep Neural Network (Deep NN)
– Back-Propagation (Supervised Learning)
4. Restricted Boltzmann Machine (RBM)
– Mathematics, Probabilistic Model and Inference Model
– Pre-training by Contrastive Divergence Learning
(Unsupervised Learning)
5. Inference Model with Distribution
http://bit.ly/dnnicpr2014
36. RBM: Contrastive Divergence Learning
v^(0) → h^(0) → v^(1) → h^(1)
h^(0) = σ(Wᵀ v^(0) + b)
v^(1) = σ(W h^(0) + c)
h^(1) = σ(Wᵀ v^(1) + b)
W ← W + ε ΔW
ΔW = (1/N) Σ_n [ v_n^(0) h_n^(0)ᵀ − v_n^(1) h_n^(1)ᵀ ]
Iterative process of the CD learning
The CD learning can be considered an approximation of
maximum likelihood estimation with the given training data.
Momentum and
weight decay
are also applied.
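A minimal sketch of one CD-1 update over a mini-batch, using the probabilities themselves rather than sampled binary states (the choice discussed on a later slide). The array shapes, learning rate, and the bias updates for b and c are illustrative assumptions; momentum and weight decay are omitted.

```python
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def cd1_step(V0, W, b, c, eps=0.1):
    """One contrastive divergence (CD-1) update for a mini-batch V0 of shape (N, visible)."""
    H0 = sigmoid(V0 @ W + b)      # h^(0) = sigma(W^T v^(0) + b), row-wise
    V1 = sigmoid(H0 @ W.T + c)    # v^(1) = sigma(W h^(0) + c)
    H1 = sigmoid(V1 @ W + b)      # h^(1) = sigma(W^T v^(1) + b)
    N = V0.shape[0]
    dW = (V0.T @ H0 - V1.T @ H1) / N
    W = W + eps * dW
    b = b + eps * (H0 - H1).mean(axis=0)
    c = c + eps * (V0 - V1).mean(axis=0)
    return W, b, c

# Toy usage: 20 binary samples, 6 visible and 4 hidden units.
rng = np.random.default_rng(0)
V0 = (rng.random((20, 6)) > 0.5).astype(float)
W = rng.normal(scale=0.01, size=(6, 4))
b, c = np.zeros(4), np.zeros(6)
W, b, c = cd1_step(V0, W, b, c)
```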
37. RBM: Outline of the CD learning
v: visible layer {0,1}
h: hidden layer {0,1}
Parameters θ: weights W, biases b, c
P(v, h; θ) = (1/Z(θ)) exp( −E(v, h; θ) )
E(v, h; θ) = − Σ_{i,j} v_i w_ij h_j − Σ_j b_j h_j − Σ_i c_i v_i
           = −vᵀ W h − bᵀ h − cᵀ v
• Maximum likelihood estimation with the given training data {v_n}.
• An (approximated) EM algorithm is applied to handle the unobserved hidden data.
• Gibbs sampling is applied to evaluate the gradient term coming from the partition function.
• The Gibbs sampling is approximated by a single sampling step.
Outline of the Contrastive Divergence Learning
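A small sketch of the energy function and the unnormalized probability for a toy binary RBM; the sizes, weights, and states are illustrative.

```python
import numpy as np

def energy(v, h, W, b, c):
    """E(v, h; theta) = -v^T W h - b^T h - c^T v for binary states v, h."""
    return -v @ W @ h - b @ h - c @ v

# Toy RBM with 3 visible and 2 hidden binary units.
rng = np.random.default_rng(0)
W = rng.normal(scale=0.1, size=(3, 2))
b, c = np.zeros(2), np.zeros(3)
v, h = np.array([1.0, 0.0, 1.0]), np.array([1.0, 1.0])
E = energy(v, h, W, b, c)
print(E, np.exp(-E))   # exp(-E) is proportional to P(v, h; theta); Z(theta) is not computed here
```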
38. CD learning: Maximum likelihood
v: visible layer {0,1}
h: hidden layer {0,1}
P(v, h; θ) = (1/Z(θ)) exp( −E(v, h; θ) )
E(v, h; θ) = − Σ_{i,j} v_i w_ij h_j − Σ_j b_j h_j − Σ_i c_i v_i = −vᵀ W h − bᵀ h − cᵀ v
The RBM is a probabilistic model.
Maximum likelihood gives the parameters
for the given training data.
The visible data are given, but the hidden
data are not, so the hidden data are
integrated out (marginalized).
Log likelihood for the training data {v_n}:
L(θ) = Σ_n L_n(θ) = Σ_n log P(v_n; θ) = Σ_n log Σ_h P(v_n, h; θ)
The optimization is performed by the EM algorithm.
L_n(θ) = log P(v_n; θ) = log Σ_h P(v_n, h; θ)
39. EM Algorithm
Reference: これなら分かる最適化数学, Kenichi Kanatani
L_n(θ) = log P(v_n; θ) = log Σ_h P(v_n, h; θ)
Log likelihood for the training data {v_n}
The EM algorithm monotonically increases the log likelihood.
EM algorithm
1. Initialize the parameter θ with θ_0. Set τ = 0.
2. Evaluate the following function (E-step):
   Q_τ(θ) = E_{h ∼ P(h|v_n; θ_τ)} [ log P(v_n, h; θ) ]
3. Find θ_{τ+1} which maximizes Q_τ(θ) (M-step).
4. Set τ ← τ + 1, then go to step 2. Iterate until convergence.
※ In the CD learning, the M-step is approximated.
47. Probabilities or states?
v^(0) → h^(0) → v^(1) → h^(1)
h^(0) = σ(Wᵀ v^(0) + b)
v^(1) = σ(W h^(0) + c)
h^(1) = σ(Wᵀ v^(1) + b)
G. Hinton,
A Practical Guide to Training
Restricted Boltzmann Machines 2010.
The inference of the RBM gives probabilities.
In the Gibbs sampling, should we sample the binary states with these probabilities,
or should we simply use the probabilities themselves?
Hinton recommends using the probabilities.
Inference of the RBM:
P(h = 1 | v; θ) = σ(Wᵀ v + b)
P(v = 1 | h; θ) = σ(W h + c)
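A small sketch contrasting the two options for the hidden layer: sampling binary states with the probabilities, or passing the probabilities through directly (the option recommended above). Sizes and weights are illustrative.

```python
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

rng = np.random.default_rng(0)
v = np.array([1.0, 0.0, 1.0])
W = rng.normal(scale=0.1, size=(3, 2))
b = np.zeros(2)

p_h = sigmoid(W.T @ v + b)                        # P(h = 1 | v; theta)
h_sampled = (rng.random(p_h.shape) < p_h) * 1.0   # binary states sampled with these probabilities
h_prob = p_h                                      # use the probabilities directly (recommended)
print(p_h, h_sampled)
```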
48. RBM: Contrastive Divergence Learning
v^(0) → h^(0) → v^(1) → h^(1)
h^(0) = σ(Wᵀ v^(0) + b)
v^(1) = σ(W h^(0) + c)
h^(1) = σ(Wᵀ v^(1) + b)
W ← W + ε ΔW
ΔW = (1/N) Σ_n [ v_n^(0) h_n^(0)ᵀ − v_n^(1) h_n^(1)ᵀ ]
Iterative process of the CD learning
The CD learning can be considered an approximation of
maximum likelihood estimation with the given training data.
Momentum and
weight decay
are also applied.
49. Pre-training for the stacked RBMs
Training data
Output data
Pre-training for
1st layer RBM
Training data
Output data
Pre-training for
2nd layer RBM
Input data
copy
copy
copy
Pre-training for the RBMs
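A sketch of the greedy layer-wise procedure: the hidden activations of one trained RBM become the training data of the next, and the learned (W, b) pairs initialize the deep NN. The train_rbm routine below is a simplified stand-in for the CD learning of the previous slides (fixed number of steps, no momentum or weight decay), and the data and layer sizes are invented.

```python
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def train_rbm(V, n_hidden, n_steps=100, eps=0.1, seed=0):
    """Fit one RBM to data V (N x visible) with CD-1; return (W, b) and the hidden activations."""
    rng = np.random.default_rng(seed)
    W = rng.normal(scale=0.01, size=(V.shape[1], n_hidden))
    b, c = np.zeros(n_hidden), np.zeros(V.shape[1])
    for _ in range(n_steps):
        H0 = sigmoid(V @ W + b)
        V1 = sigmoid(H0 @ W.T + c)
        H1 = sigmoid(V1 @ W + b)
        W += eps * (V.T @ H0 - V1.T @ H1) / V.shape[0]
        b += eps * (H0 - H1).mean(axis=0)
        c += eps * (V - V1).mean(axis=0)
    return W, b, sigmoid(V @ W + b)

# Greedy layer-wise pre-training of a 10 -> 8 -> 4 stack on toy data.
data = np.random.default_rng(1).random((50, 10))
layers = []
for n_hidden in [8, 4]:
    W, b, data = train_rbm(data, n_hidden)   # output activations feed the next RBM
    layers.append((W, b))                    # (W, b) initialize the corresponding NN layer
```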
50. Outline
1. Examples of Deep NNs
2. RBM to Deep NN
3. Deep Neural Network (Deep NN)
– Back-Propagation (Supervised Learning)
4. Restricted Boltzmann Machine (RBM)
– Mathematics, Probabilistic Model and Inference Model
– Pre-training by Contrastive Divergence Learning
(Unsupervised Learning)
5. Inference Model with Distribution
http://bit.ly/dnnicpr2014
51. Drop-out
The drop-out is expected to have an effect
similar to ensemble learning.
It is effective in avoiding overfitting.
Drop-out
The nodes are randomly dropped out for
each mini-batch. The output of a
dropped node is zero.
A 50% drop-out rate is recommended.
G. Hinton, N.Srivastava, A.Krizhevsky, I.Sutskever, and
R.Salakhutdinov, “Improving neural networks by preventing
co-adaptation of feature detectors.”, arXiv preprint
arXiv:1207.0580, 2012.
Input
layer
Output
layer
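A minimal sketch of 50% drop-out on one hidden layer. Zeroing nodes during training and scaling the activations by the keep probability at test time is one common convention, assumed here for illustration; sizes and weights are made up.

```python
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def layer_train(v, W, b, rng, drop_rate=0.5):
    """Forward pass with drop-out: each node is zeroed with probability drop_rate."""
    h = sigmoid(W.T @ v + b)
    mask = rng.random(h.shape) >= drop_rate   # nodes kept for this mini-batch
    return h * mask

def layer_test(v, W, b, drop_rate=0.5):
    """At test time every node is used; activity is scaled by the keep probability."""
    return sigmoid(W.T @ v + b) * (1.0 - drop_rate)

rng = np.random.default_rng(0)
W, b, v = rng.normal(scale=0.1, size=(4, 6)), np.zeros(6), rng.random(4)
print(layer_train(v, W, b, rng))   # roughly half of the outputs are zero
print(layer_test(v, W, b))
```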
52. Ensemble learning and Drop-out
Input
layer
Output
layer
Ensemble learning Drop-out
Integration of multiple weak learners > a single learner
h(v) = (1/K) [ h_1(v) + h_2(v) + h_3(v) + ⋯ + h_K(v) ]
The drop-out is
expected to have an
effect similar to
ensemble learning.
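A small sketch of this averaging over K = 5 hypothetical single layer learners with random placeholder weights.

```python
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

rng = np.random.default_rng(0)
v = rng.random(4)
members = [(rng.normal(scale=0.1, size=(4, 2)), np.zeros(2)) for _ in range(5)]  # K = 5

# h(v) = (1/K) * (h_1(v) + h_2(v) + ... + h_K(v))
h = np.mean([sigmoid(W.T @ v + b) for W, b in members], axis=0)
print(h)
```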
58. Outline
1. Examples of Deep NNs
2. RBM to Deep NN
3. Deep Neural Network (Deep NN)
– Back-Propagation (Supervised Learning)
4. Restricted Boltzmann Machine (RBM)
– Mathematics, Probabilistic Model and Inference Model
– Pre-training by Contrastive Divergence Learning
(Unsupervised Learning)
5. Inference Model with Distribution
http://bit.ly/dnnicpr2014