Information Geometry of Dropout Training

Masanari Kimura
September 24, 2022

Transcript

  1. Information Geometry of Dropout Training
     Masanari Kimura¹, Hideitsu Hino²,³
     ¹ Department of Statistical Science, School of Multidisciplinary Sciences, The Graduate University for Advanced Studies (SOKENDAI), [email protected]
     ² The Institute of Statistical Mathematics
     ³ RIKEN AIP
  2. Intro
  3. Abstract
     ▶ Dropout is one of the most popular regularization techniques in neural network training;
     ▶ owing to its effectiveness and the simplicity of the idea, dropout has been analyzed extensively and many variants have been proposed;
     ▶ in this paper, several properties of dropout are discussed in a unified manner from the viewpoint of information geometry:
       ▶ dropout flattens the model manifold, and its regularization performance depends on the curvature of that manifold;
       ▶ dropout essentially corresponds to a regularization that depends on the Fisher information;
       ▶ numerical experiments support these results.
  4. Background
  5. Empirical Risk Minimization
     The goal of supervised learning is to obtain a hypothesis h : X → Y (h ∈ H), using a training set D = {(x_i, y_i)}_{i=1}^{N} of sample size N ∈ ℕ, that minimizes the expected loss
       $R(h) := \mathbb{E}_{(x,y)\sim P}\left[\ell(h(x), y)\right]$,   (1)
     where P is the unknown data distribution. In general, direct access to the unknown distribution P is not possible. If the i.i.d. assumption holds, i.e., D is generated independently and identically from P, then the expected risk minimization problem can be approximated by the following empirical risk minimization (ERM) [Vapnik, 1999, 2013]:
       $\text{minimize } \hat{R}(h), \quad \hat{R}(h) := \frac{1}{N}\sum_{i=1}^{N} \ell(h(x_i), y_i)$.   (2)
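A minimal sketch of Eq. (2): computing the empirical risk of a fixed hypothesis on a toy dataset. The dataset, hypothesis, and squared loss below are illustrative choices, not part of the original slides.

```python
import numpy as np

def empirical_risk(h, X, y, loss):
    """Empirical risk R_hat(h) = (1/N) * sum_i loss(h(x_i), y_i)."""
    preds = np.array([h(x) for x in X])
    return np.mean(loss(preds, y))

# Toy example: a fixed linear hypothesis and the squared loss.
rng = np.random.default_rng(0)
X = rng.normal(size=(100, 3))                           # N = 100 inputs in R^3
y = X @ np.array([1.0, -2.0, 0.5]) + 0.1 * rng.normal(size=100)

h = lambda x: x @ np.array([0.9, -1.8, 0.4])            # some hypothesis h(x; theta)
sq_loss = lambda p, t: (p - t) ** 2                     # loss ell

print("empirical risk:", empirical_risk(h, X, y, sq_loss))
```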
  6. ERM and overfitting
     ▶ The ERM framework is very powerful and known to be consistent; in particular, the empirical risk is an unbiased estimator of the expected risk:
       $\mathbb{E}_{(x,y)\sim P}\left[\hat{R}(h)\right] = \frac{1}{N}\sum_{i=1}^{N} \mathbb{E}_{(x,y)\sim P}\left[\ell(h(x), y)\right] = \mathbb{E}\left[\ell(h(x), y)\right] = R(h)$.   (3)
     ▶ However, ERM often suffers from overfitting when the training sample size is insufficient or the hypothesis class is too complex (e.g., neural networks).
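A quick Monte Carlo check of Eq. (3), under an assumed toy setup (all names and distributions illustrative): for a fixed hypothesis, the empirical risk averaged over many independently drawn training sets matches the expected risk.

```python
import numpy as np

rng = np.random.default_rng(1)
theta_true = np.array([1.0, -2.0, 0.5])
h = lambda X: X @ np.array([0.9, -1.8, 0.4])   # fixed hypothesis, chosen independently of the data

def draw_dataset(n):
    X = rng.normal(size=(n, 3))
    y = X @ theta_true + 0.1 * rng.normal(size=n)
    return X, y

def empirical_risk(X, y):
    return np.mean((h(X) - y) ** 2)

# Average of R_hat(h) over many training sets of size N = 50 ...
risks = [empirical_risk(*draw_dataset(50)) for _ in range(2000)]
# ... versus the expected risk R(h), estimated on one very large sample.
X_big, y_big = draw_dataset(1_000_000)
print("mean of empirical risks:", np.mean(risks))
print("expected risk (large-sample estimate):", empirical_risk(X_big, y_big))
```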
  7. Dropout training
     ▶ Especially when the hypothesis class is a set of neural networks, dropout [Srivastava et al., 2014] is a frequently used technique to prevent overfitting;
     ▶ dropout training aims to improve generalization performance by repeatedly and randomly deleting neurons during the training of a neural network (a minimal sketch of the operation follows below);
     ▶ in this paper, dropout is analyzed once again through the lens of information geometry.
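For reference, a minimal sketch of the dropout operation in its commonly implemented "inverted" form: during training each unit is kept with probability 1 − p and the surviving activations are rescaled, while at test time the layer is the identity. The layer shape and rate below are illustrative.

```python
import numpy as np

def dropout(a, p, rng, training=True):
    """Inverted dropout: zero each activation with probability p and rescale by 1/(1-p)."""
    if not training or p == 0.0:
        return a
    mask = rng.random(a.shape) >= p        # keep each unit with probability 1 - p
    return a * mask / (1.0 - p)            # rescaling keeps the expected output equal to the input

rng = np.random.default_rng(0)
activations = rng.normal(size=(4, 8))      # a batch of 4 hidden vectors
print(dropout(activations, p=0.5, rng=rng))                    # training: random units zeroed
print(dropout(activations, p=0.5, rng=rng, training=False))    # test time: unchanged
```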
  8. Information geometry of machine learning
     ▶ It is known that machine learning procedures can be formulated on Riemannian manifolds [Amari, 1995; Amari and Nagaoka, 2000; Amari, 2016; Ay et al., 2017].
     Example (manifold of one-layer neural networks). Consider a one-layer neural network with the sigmoid function σ(z) = 1/(1 + e^{-z}). Let x ∈ ℝⁿ be an n-dimensional input and y = σ(θᵀx + θ₀) the one-dimensional output, where θ ∈ ℝⁿ and θ₀ ∈ ℝ are the weights and the bias. Then the set of outputs H_σ = {h(x; θ, θ₀) | θ ∈ ℝⁿ, θ₀ ∈ ℝ} = {σ(θᵀx + θ₀) | θ ∈ ℝⁿ, θ₀ ∈ ℝ} can be regarded as an (n + 1)-dimensional manifold, parameterized by θ and θ₀.
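The example can be made concrete: each choice of (θ, θ₀) picks out one point h(·; θ, θ₀) of the (n + 1)-dimensional manifold H_σ. A minimal sketch with an illustrative input dimension and parameter values:

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def h(x, theta, theta0):
    """One point of the manifold H_sigma, parameterized by (theta, theta0)."""
    return sigmoid(x @ theta + theta0)

n = 3
x = np.array([0.2, -1.0, 0.5])

# Two different parameter values = two different points of the (n+1)-dimensional manifold.
theta_a, theta0_a = np.array([1.0, 0.0, -0.5]), 0.1
theta_b, theta0_b = np.array([0.5, 2.0, 1.5]), -0.3
print(h(x, theta_a, theta0_a), h(x, theta_b, theta0_b))
```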
  9. Notation for machine learning on manifolds
     Let Θ be a parameter space and θ ∈ Θ a parameter of some hypothesis h(x; θ). Suppose that the hypothesis class H_θ has the following structure:
       $\mathcal{H}_\theta := \left\{ h(x; \theta) = \arg\max_{y \in \mathcal{Y}} p(y \mid x; \theta) \;\middle|\; p \in \mathcal{M},\ \theta \in \Theta \right\}$.
     Assume now that a hypothesis h ∈ H_θ is trained to approximate the target distribution q. In general, the target distribution is not included in M. In this case, we need to find a distribution p* = arg min_{p ∈ M} D(q, p) for some divergence D, which corresponds to the orthogonal projection of q onto M. Since we cannot observe the true target distribution q, we use the empirical distribution q_emp to approximate it. Here, p, q and q_emp are parameterized by θ_p, θ_q and η_q, respectively.
  10. Information geometry of dropout
  11. Machine learning as projection
     ▶ Let M := {p(y|x; θ) | θ ∈ Θ} be a class of neural networks and P the space of probability measures.
     ▶ ERM can be regarded as finding the projection from the empirical distribution q_emp(y|x; η_q) onto M:
       $\theta^* = \arg\min_{\theta \in \Theta} D[q_{\mathrm{emp}} \| p_\theta]$,   (4)
     where p_θ = p(y|x; θ) and D[·∥·] : P × P → ℝ₊ is some divergence.
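When D in Eq. (4) is the KL divergence, the projection is exactly maximum-likelihood training: minimizing D_KL[q_emp ∥ p_θ] over θ is the same as minimizing the negative log-likelihood, since the entropy of q_emp does not depend on θ. A minimal sketch for a logistic model; the data, learning rate, and iteration count are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)
sigmoid = lambda z: 1.0 / (1.0 + np.exp(-z))

# Synthetic binary data: q_emp is the empirical distribution of (x, y).
X = rng.normal(size=(200, 2))
y = (rng.random(200) < sigmoid(X @ np.array([2.0, -1.0]))).astype(float)

def nll(theta):
    """Negative log-likelihood = KL[q_emp || p_theta] + const (the entropy of q_emp)."""
    p = sigmoid(X @ theta)
    return -np.mean(y * np.log(p) + (1 - y) * np.log(1 - p))

def grad(theta):
    return X.T @ (sigmoid(X @ theta) - y) / len(y)

theta = np.zeros(2)
for _ in range(2000):            # plain gradient descent = projecting q_emp onto M
    theta -= 0.5 * grad(theta)
print("theta* =", theta, " NLL =", nll(theta))
```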
  12. Dropout as projections
     ▶ Dropout can be viewed as a collection of projections onto submanifolds {M_k^D}_{k=1}^{K} obtained by setting some of the parameters θ to 0:
       $\mathcal{M}^D_k = \{p(y \mid x; \theta_k) \mid \theta_k \in \Theta_k \subset \Theta\}$,   (5)
     where Θ_k ⊂ Θ is such that θ_{kj} = 0 for all j ∈ I_k with I_k ⊂ {1, 2, ..., d}, {I_k}_{k=1}^{K} is the set of possible index sets, and K is the number of dropout patterns.
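To make Eq. (5) concrete, the sketch below enumerates a few index sets I_k and restricts a parameter vector to the corresponding Θ_k by clamping θ_j = 0 for j ∈ I_k. The dimension and patterns are illustrative, and this shows only the constraint sets, not the divergence projections themselves.

```python
import numpy as np
from itertools import combinations

d = 4                                                        # parameter dimension
# Example: all dropout patterns that zero out exactly one coordinate.
index_sets = [set(c) for c in combinations(range(d), 1)]     # I_1, ..., I_K

def restrict_to_submanifold(theta, I_k):
    """Restrict theta to Theta_k = {theta : theta_j = 0 for all j in I_k}."""
    theta_k = theta.copy()
    for j in I_k:
        theta_k[j] = 0.0
    return theta_k

theta = np.array([0.7, -1.2, 0.3, 2.0])
for k, I_k in enumerate(index_sets):
    print(f"I_{k} = {I_k}:", restrict_to_submanifold(theta, I_k))
```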
  13. Weighted averaged projections and dropout
     ▶ Training a neural network with dropout can be regarded as taking the weighted average of the projections from the empirical distribution q_emp(y|x; η_q) onto {M_k^D}_{k=1}^{K}:
       $\theta^*_D = \sum_{k=1}^{K} w_k \theta^*_k, \quad \theta^*_k = \arg\min_{\theta_k \in \Theta_k} D[q_{\mathrm{emp}} \| p_{\theta_k}]$,   (6)
     where w = {w_1, ..., w_K} and $\sum_{k=1}^{K} w_k = 1$.
     ▶ In ordinary dropout the weights are all identical: w_k = 1/K for k = 1, ..., K.
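A minimal instance of Eq. (6), assuming a linear-Gaussian toy model so that each θ*_k has a closed form (least squares over the kept coordinates) and uniform weights w_k = 1/K. This illustrates only the averaging step, not the paper's experimental setup.

```python
import numpy as np

rng = np.random.default_rng(0)
d, N = 4, 500
X = rng.normal(size=(N, d))
y = X @ np.array([1.0, -2.0, 0.0, 0.5]) + 0.1 * rng.normal(size=N)

# Dropout patterns: each I_k removes one coordinate.
index_sets = [{j} for j in range(d)]
K = len(index_sets)

def erm_on_submanifold(I_k):
    """theta*_k = argmin over Theta_k of the squared loss (least squares on kept coordinates)."""
    keep = [j for j in range(d) if j not in I_k]
    theta_k = np.zeros(d)
    sol, *_ = np.linalg.lstsq(X[:, keep], y, rcond=None)
    theta_k[keep] = sol
    return theta_k

# Uniform weights w_k = 1/K, as in ordinary dropout.
theta_star_D = sum(erm_on_submanifold(I_k) for I_k in index_sets) / K
print("theta*_D =", theta_star_D)
```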
  14. Flatness of dropout submanifolds
     ▶ Although θ*_D is a weighted average of the parameters on the submanifolds {M_k^D}_{k=1}^{K}, this parameter is not necessarily contained in the original model manifold M;
     ▶ θ*_D is included in M if the model manifold is flat in the parameter coordinate system.
  15. When dropout outperforms ERM
     ▶ Geometrically, we give the conditions under which dropout training is superior to ordinary ERM;
     ▶ the figure shows that when the curvature of the model manifold M is negative, the dropout estimator can outperform the ERM estimator because it can get closer to the empirical distribution, whereas when the curvature is positive, the dropout estimator is inferior to the ERM estimator.
  16. Dropout submanifolds and Regularization
  17. Regularization term equivalent to dropout
     ▶ In the previous section, we saw that dropout can be regarded as flattening the model manifold;
     ▶ we now consider a regularization term that is equivalent to dropout.
  18. The flatness of a manifold M can be expressed through the second fundamental form, defined for tangent vectors U, V in its tangent space T_pM as
       $L(U, V) = (\nabla_U V)^{\perp}$,   (7)
     where the orthogonal decomposition
       $(\nabla_U V)_p = (\nabla_U V)^{\|}_p + (\nabla_U V)^{\perp}_p$   (8)
     is called the Gauss formula. L(U, V) is symmetric and bilinear, and can be written as
       $L(U, V) = L(\xi_\alpha, \xi_\beta) U^\alpha V^\beta = L_{\alpha\beta} U^\alpha V^\beta$.   (9)
     ▶ Obviously, L = 0 when the coefficients L_{αβ} vanish.
  19. Here, we define the norm
       $\|L\| = \max\left\{ \frac{\|L(U, U)\|}{\|U\|} \;\middle|\; U \in T_p\mathcal{M} \right\}$.   (10)
     From the symmetry of L, the eigenvalues of a suitably normalized L satisfy
       $\max_{\|U\|=1} |L_k(U, U)| = |\lambda_k|$,   (11)
     and
       $\|L\| = \left(\lambda_1^2 + \cdots + \lambda_m^2\right)^{1/2}$   (12)
     holds.
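A small numerical sketch of Eqs. (11)-(12): for each normal component L_k, represented here as a symmetric coefficient matrix, the maximum of |L_k(U, U)| over unit vectors U is the largest-magnitude eigenvalue |λ_k|, and ∥L∥ collects these eigenvalues in a Euclidean norm. The matrices below are arbitrary symmetric examples, not quantities from the paper.

```python
import numpy as np

rng = np.random.default_rng(0)
m, dim = 3, 4                       # m normal directions, dim-dimensional tangent space

# Symmetric coefficient matrices L_k (one per normal direction), arbitrary examples.
Ls = []
for _ in range(m):
    A = rng.normal(size=(dim, dim))
    Ls.append((A + A.T) / 2)

lambdas = []
for L_k in Ls:
    eigvals = np.linalg.eigvalsh(L_k)
    lam = eigvals[np.argmax(np.abs(eigvals))]      # max over ||U||=1 of |L_k(U, U)| = |lambda_k|
    lambdas.append(lam)

norm_L = np.sqrt(sum(l ** 2 for l in lambdas))     # ||L|| = (lambda_1^2 + ... + lambda_m^2)^(1/2)
print("|lambda_k| =", [abs(l) for l in lambdas], " ||L|| =", norm_L)

# Sanity check for the first component: maximize |U^T L_1 U| over many random unit vectors.
U = rng.normal(size=(dim, 100_000))
U /= np.linalg.norm(U, axis=0)
print("empirical max |L_1(U, U)|:", np.max(np.abs(np.einsum('id,ij,jd->d', U, Ls[0], U))))
```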
  20. Thus, the regularization using the second fundamental form can be written in terms of its eigenvalues as
       $\mathcal{L}(\theta; \mu) = \ell(y, \varphi(x; \theta)) + \mu\|L\|$,   (13)
     where φ is the neural network parameterized by θ. Here, the Levi-Civita connection in the tangent space is $\nabla_{\xi_i}\xi_j = \frac{\partial^2 \varphi}{\partial\theta_i \partial\theta_j}$, so an orthogonal decomposition gives
       $\frac{\partial^2 \varphi}{\partial\theta_i \partial\theta_j} = \left(\frac{\partial^2 \varphi}{\partial\theta_i \partial\theta_j}\right)^{\|} + \left(\frac{\partial^2 \varphi}{\partial\theta_i \partial\theta_j}\right)^{\perp}$.   (14)
     From Eqs. (7) and (14), we have
       $L_{ij} = \left(\frac{\partial^2 \varphi}{\partial\theta_i \partial\theta_j}\right)^{\perp} = \frac{\partial^2 \varphi}{\partial\theta_i \partial\theta_j} - \left(\frac{\partial^2 \varphi}{\partial\theta_i \partial\theta_j}\right)^{\|}$.   (15)
     The first term of Eq. (15) is the Fisher information matrix of the neural network.
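A rough sketch of a regularized objective in the spirit of Eq. (13) for a logistic model, with the second-fundamental-form term approximated by a norm of the Fisher information matrix (i.e., keeping only the first term of Eq. (15)). The choice of norm, the model, and μ are illustrative assumptions, not the paper's prescription.

```python
import numpy as np

rng = np.random.default_rng(0)
sigmoid = lambda z: 1.0 / (1.0 + np.exp(-z))

X = rng.normal(size=(300, 3))
y = (rng.random(300) < sigmoid(X @ np.array([1.5, -1.0, 0.0]))).astype(float)

def nll(theta):
    p = sigmoid(X @ theta)
    return -np.mean(y * np.log(p) + (1 - y) * np.log(1 - p))

def fisher(theta):
    """Fisher information of the logistic model: (1/N) * sum_i p_i (1 - p_i) x_i x_i^T."""
    p = sigmoid(X @ theta)
    W = p * (1 - p)
    return (X * W[:, None]).T @ X / len(y)

def regularized_objective(theta, mu=0.1):
    """Eq. (13)-style objective with ||L|| replaced by a spectral norm of the Fisher matrix."""
    return nll(theta) + mu * np.linalg.norm(fisher(theta), ord=2)

theta = np.array([0.5, -0.5, 0.2])
print("plain loss:", nll(theta), " regularized:", regularized_objective(theta))
```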
  21. Connection to other regularizers
     Using Eq. (13), we can relate dropout to other regularizers. First, we re-formulate Eq. (13) with some function Φ that depends on the Fisher information matrix I(θ):
       $\mathcal{L}(\theta; \mu) = \ell(y, \varphi(x; \theta)) + \Phi(I(\theta))$.   (16)
     Using the fact that the KL divergence under a small change of the parameters is a quadratic form in the Fisher information matrix, i.e.,
       $D_{\mathrm{KL}}[\theta \,\|\, \theta + d\theta] = d\theta^\top I(\theta)\, d\theta$,   (17)
     the following remarks can be derived.
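A quick numerical check of the quadratic behavior behind Eq. (17) for a Bernoulli model, whose Fisher information is 1/(θ(1 − θ)). Note that the standard second-order expansion carries a factor of 1/2, which the code below includes explicitly; the parameter value and step sizes are illustrative.

```python
import numpy as np

def kl_bernoulli(t0, t1):
    """KL[Bern(t0) || Bern(t1)]."""
    return t0 * np.log(t0 / t1) + (1 - t0) * np.log((1 - t0) / (1 - t1))

def fisher_bernoulli(t):
    return 1.0 / (t * (1.0 - t))

theta = 0.3
for delta in [1e-1, 1e-2, 1e-3, 1e-4]:
    kl = kl_bernoulli(theta, theta + delta)
    quad = 0.5 * delta * fisher_bernoulli(theta) * delta   # (1/2) * dtheta^T I(theta) dtheta
    print(f"delta={delta:.0e}  KL={kl:.3e}  quadratic form={quad:.3e}  ratio={kl/quad:.4f}")
```

As delta shrinks, the ratio approaches 1, showing that small parameter perturbations are measured by the Fisher information.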
  22. Remark
     Let Φ(I(θ)) = D_KL[θ ∥ θ + dθ]. In this case, Eq. (16) is equivalent to knowledge distillation [Hinton et al.; Gou et al., 2021] with a teacher model parameterized by θ + dθ (an illustrative sketch follows below).
     Remark
     Let
       $\Phi(I(\theta)) = \frac{\lambda^2 C}{8n} + \frac{\log D_{\mathrm{KL}}[\theta \,\|\, \theta + d\theta] + \frac{1}{\epsilon}}{\lambda}$,   (18)
     where λ is a parameter, C is a constant, n is the sample size, and ε is the precision parameter. In this case, Eq. (16) is equivalent to PAC-Bayesian regularization [Catoni, 2003].
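A minimal sketch of the shape of the first remark: an objective of the form of Eq. (16) with Φ(I(θ)) = D_KL[θ ∥ θ + dθ] is a task loss plus a KL term that pulls the model's predictive distribution toward that of a "teacher" at θ + dθ, which is the shape of a knowledge-distillation loss. The linear softmax model, the perturbation dθ, and the unit weighting below are illustrative assumptions, and the KL direction follows Eq. (17).

```python
import numpy as np

rng = np.random.default_rng(0)

def softmax(z):
    z = z - z.max(axis=1, keepdims=True)
    e = np.exp(z)
    return e / e.sum(axis=1, keepdims=True)

X = rng.normal(size=(100, 5))                  # inputs
y = rng.integers(0, 3, size=100)               # labels for a 3-class problem
theta = rng.normal(size=(5, 3))                # student parameters
d_theta = 0.05 * rng.normal(size=(5, 3))       # small perturbation: "teacher" = theta + d_theta

def predictive(theta):
    return softmax(X @ theta)                  # p(y | x; theta) for every input x

def task_loss(theta):
    p = predictive(theta)
    return -np.mean(np.log(p[np.arange(len(y)), y]))

def kl_to_teacher(theta, d_theta):
    """Mean KL[p(y|x; theta) || p(y|x; theta + d_theta)] over the inputs."""
    p, q = predictive(theta), predictive(theta + d_theta)
    return np.mean(np.sum(p * (np.log(p) - np.log(q)), axis=1))

# Eq. (16)-style objective: task loss + distillation-like KL term.
objective = task_loss(theta) + kl_to_teacher(theta, d_theta)
print("task loss:", task_loss(theta), " KL term:", kl_to_teacher(theta, d_theta),
      " total:", objective)
```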
  23. Numerical experiments
  24. Numerical experiments
     ▶ We discussed the connection between dropout training and the Fisher information matrix;
     ▶ we confirm this relationship with numerical experiments;
     ▶ we use the MNIST handwritten digit dataset [Deng, 2012], which has a training set of 60,000 examples and a test set of 10,000 examples; each instance is a 28 × 28 gray-scale image;
     ▶ the K-FAC method [Martens and Grosse, 2015] is used to approximate the Fisher information matrix (an illustrative sketch with a simpler approximation follows below).
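A hedged sketch of an experiment in this spirit: train a small MNIST classifier at several dropout rates and track a norm of an approximate Fisher information matrix. For brevity, the approximation below is a crude diagonal squared-gradient Fisher rather than the K-FAC approximation used in the paper, and the architecture, optimizer, training length, and batch counts are illustrative.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F
from torch.utils.data import DataLoader
from torchvision import datasets, transforms

def make_model(p):
    return nn.Sequential(nn.Flatten(), nn.Linear(784, 256), nn.ReLU(),
                         nn.Dropout(p), nn.Linear(256, 10))

def diag_fisher_norm(model, loader, n_batches=20):
    """Norm of a diagonal squared-gradient Fisher approximation, with labels sampled from the model."""
    model.eval()
    fisher = [torch.zeros_like(p) for p in model.parameters()]
    for i, (x, _) in enumerate(loader):
        if i >= n_batches:
            break
        model.zero_grad()
        logp = F.log_softmax(model(x), dim=1)
        y_model = torch.multinomial(logp.exp().detach(), 1).squeeze(1)   # y ~ p(y|x; theta)
        F.nll_loss(logp, y_model).backward()
        for f, p in zip(fisher, model.parameters()):
            f += p.grad.detach() ** 2 / n_batches
    return torch.sqrt(sum((f ** 2).sum() for f in fisher)).item()

train = datasets.MNIST("data", train=True, download=True, transform=transforms.ToTensor())
loader = DataLoader(train, batch_size=128, shuffle=True)

for p in [0.0, 0.1, 0.2, 0.3, 0.5, 0.7]:
    torch.manual_seed(0)
    model = make_model(p)
    opt = torch.optim.SGD(model.parameters(), lr=0.1)
    model.train()
    for i, (x, y) in enumerate(loader):
        if i >= 300:                      # a short, illustrative training run
            break
        opt.zero_grad()
        F.cross_entropy(model(x), y).backward()
        opt.step()
    print(f"dropout p={p}:  approx. Fisher norm = {diag_fisher_norm(model, loader):.4f}")
```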
  25. ▶ The figure shows the relationship between the Fisher information matrix and the dropout rate;
     ▶ we can see that applying dropout reduces the norm of the Fisher information matrix.
     Remark
     ▶ p = 0.2 or p = 0.3 leads to the minimum norm of the FIM;
     ▶ the second term of Eq. (15) is dominant at high dropout rates.
  26. Conclusion and discussion
  27. Conclusion and discussion
     ▶ This study formulates dropout through the lens of information geometry;
     ▶ we showed that dropout essentially corresponds to a regularization that depends on the Fisher information, and supported this result with numerical experiments; this suggests that dropout and other regularization methods can be generalized via the Fisher information matrix.
     ▶ Future work:
       ▶ it is important future research to derive new dropout-inspired algorithms based on geometric insights into the algorithm;
       ▶ in addition to the analysis in Fisher information geometry, an analysis based on optimal transport is expected to allow a deeper understanding of the algorithm.
  28. References I
     Shun-ichi Amari. Information geometry of the EM and em algorithms for neural networks. Neural Networks, 8(9):1379–1408, 1995.
     Shun-ichi Amari. Information Geometry and Its Applications, volume 194. Springer, 2016.
     Shun-ichi Amari and Hiroshi Nagaoka. Methods of Information Geometry, volume 191. American Mathematical Society, 2000.
     Nihat Ay, Jürgen Jost, Hông Vân Lê, and Lorenz Schwachhöfer. Information Geometry, volume 64. Springer, 2017.
     Olivier Catoni. A PAC-Bayesian approach to adaptive classification. Preprint, 840, 2003.
     Li Deng. The MNIST database of handwritten digit images for machine learning research. IEEE Signal Processing Magazine, 29(6):141–142, 2012.
     Jianping Gou, Baosheng Yu, Stephen J. Maybank, and Dacheng Tao. Knowledge distillation: A survey. International Journal of Computer Vision, 129(6):1789–1819, 2021.
  29. References II
     Geoffrey Hinton, Oriol Vinyals, and Jeff Dean. Distilling the knowledge in a neural network. arXiv preprint arXiv:1503.02531, 2015.
     James Martens and Roger Grosse. Optimizing neural networks with Kronecker-factored approximate curvature. In International Conference on Machine Learning, pages 2408–2417. PMLR, 2015.
     Nitish Srivastava, Geoffrey Hinton, Alex Krizhevsky, Ilya Sutskever, and Ruslan Salakhutdinov. Dropout: A simple way to prevent neural networks from overfitting. The Journal of Machine Learning Research, 15(1):1929–1958, 2014.
     Vladimir Vapnik. The Nature of Statistical Learning Theory. Springer Science & Business Media, 2013.
     Vladimir N. Vapnik. An overview of statistical learning theory. IEEE Transactions on Neural Networks, 10(5):988–999, 1999.