Taiji Suzuki (University of Tokyo, Japan): Convergence of mean field Langevin dynamics and its application to neural network feature learning
WORKSHOP ON OPTIMAL TRANSPORT
FROM THEORY TO APPLICATIONS
INTERFACING DYNAMICAL SYSTEMS, OPTIMIZATION, AND MACHINE LEARNING
Venue: Humboldt University of Berlin, Dorotheenstraße 24
Taiji Suzuki, The University of Tokyo / AIP-RIKEN (Deep Learning Theory Team), 15th March 2024, Workshop on Optimal Transport, Berlin
• Entropy regularization; application: training a 2-layer NN in the mean field regime.
[Convergence]
• We introduce mean field Langevin dynamics (MFLD) to minimize the entropy-regularized objective ℒ.
• We show its linear convergence under a log-Sobolev inequality condition.
[Generalization error analysis]
• A generalization error analysis of 2-layer NNs trained by MFLD is given.
• Separation from kernel methods is shown.
Training a neural network is basically non-convex.
• Noisy gradient descent (e.g., SGD) is effective for non-convex optimization: the noisy perturbation helps to escape local minima.
• It likely converges to a flat global minimum.
Discretization of the Langevin dynamics dX_t = −∇F(X_t) dt + √(2λ) dB_t [Gelfand and Mitter (1991); Borkar and Mitter (1999); Welling and Teh (2011)]:
  X_{k+1} = X_k − η ∇F(X_k) + √(2ηλ) ξ_k,  ξ_k ∼ N(0, I).
Stationary distribution: π(x) ∝ exp(−F(x)/λ), so the iterates can stay around the global minimum of F(x), where F is the regularized loss.
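For concreteness, here is a minimal sketch of the discretized dynamics above (the unadjusted Langevin algorithm); the quadratic objective in the usage line is an illustrative assumption, not one from the talk.

```python
import numpy as np

def gradient_langevin_dynamics(grad_F, x0, eta, lam, n_steps, rng=None):
    """Euler-Maruyama discretization of dX_t = -grad F(X_t) dt + sqrt(2*lam) dB_t:
    X_{k+1} = X_k - eta * grad F(X_k) + sqrt(2*eta*lam) * xi_k,  xi_k ~ N(0, I)."""
    rng = np.random.default_rng() if rng is None else rng
    x = np.array(x0, dtype=float)
    for _ in range(n_steps):
        x = x - eta * grad_F(x) + np.sqrt(2.0 * eta * lam) * rng.standard_normal(x.shape)
    return x

# Example: F(x) = ||x||^2 / 2, whose Gibbs stationary law exp(-F/lam) is N(0, lam * I).
x_last = gradient_langevin_dynamics(lambda x: x, x0=np.ones(5), eta=1e-2, lam=0.1, n_steps=10_000)
```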
Let μ_t be the law of X_t (we can assume it has a density). The PDE that describes μ_t's dynamics [Fokker–Planck equation, linear w.r.t. μ_t]:
  ∂μ_t/∂t = ∇·(μ_t ∇F) + λ Δμ_t,
whose stationary distribution is π ∝ exp(−F/λ). This is the Wasserstein gradient flow to minimize the following objective:
  ℒ(μ) = ∫ F(x) dμ(x) + λ ∫ μ(x) log μ(x) dx
(c.f., the Donsker–Varadhan duality formula), and its minimizer is π.
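As a quick sanity check (a standard computation, not specific to the talk), the Gibbs density makes the Fokker–Planck probability flux vanish, hence it is stationary:

```latex
% Sanity check: pi \propto exp(-F/lambda) is stationary for the Fokker--Planck equation,
% because the probability flux mu*grad F + lambda*grad mu vanishes at mu = pi.
\[
  \partial_t \mu_t
  = \nabla\cdot\big(\mu_t \nabla F\big) + \lambda \Delta \mu_t
  = \nabla\cdot\big(\mu_t \nabla F + \lambda \nabla \mu_t\big),
\]
\[
  \pi \propto e^{-F/\lambda}
  \;\Rightarrow\;
  \lambda \nabla \pi = -\pi \nabla F
  \;\Rightarrow\;
  \pi \nabla F + \lambda \nabla \pi = 0
  \;\Rightarrow\;
  \partial_t \pi = 0 .
\]
```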
Does it converge? A naïve application of the existing theory for gradient Langevin dynamics yields an iteration complexity that is exponential in the dimension to achieve ε error [Raginsky, Rakhlin and Telgarsky, 2017; Xu, Chen, Zou, and Gu, 2018; Erdogdu, Mackey and Shamir, 2018; Vempala and Wibisono, 2019].
⇒ Cannot be applied to wide neural networks.
2-layer neural network: non-linear with respect to the parameters (a_j, w_j)_{j=1}^M.
Mean field limit (M → ∞): f_μ(x) = ∫ h_θ(x) dμ(θ), which is linear with respect to μ; hence the objective is convex w.r.t. μ if the loss ℓ_i is convex (e.g., squared / logistic loss).
[Nitanda & Suzuki, 2017][Chizat & Bach, 2018][Mei, Montanari & Nguyen, 2018][Rotskoff & Vanden-Eijnden, 2018]
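A small sketch of this point, assuming tanh neurons h_{(a,w)}(x) = a·tanh(wᵀx) (the activation is an illustrative choice, not fixed by the slide): the finite-width model is an average over neurons, so it is linear in the empirical measure of the parameters even though it is non-linear in each (a_j, w_j).

```python
import numpy as np

def two_layer_net(a, W, x):
    """f_Theta(x) = (1/M) * sum_j a_j * tanh(w_j . x):
    non-linear in each parameter (a_j, w_j), but linear in the
    empirical measure (1/M) * sum_j delta_{(a_j, w_j)}."""
    return float(np.mean(a * np.tanh(W @ x)))

# Mixing two parameter measures mixes the functions linearly (linearity in mu):
rng = np.random.default_rng(0)
d, M = 5, 1000
a1, W1 = rng.standard_normal(M), rng.standard_normal((M, d))
a2, W2 = rng.standard_normal(M), rng.standard_normal((M, d))
x = rng.standard_normal(d)
mixed = two_layer_net(np.concatenate([a1, a2]), np.vstack([W1, W2]), x)
assert np.isclose(mixed, 0.5 * two_layer_net(a1, W1, x) + 0.5 * two_layer_net(a2, W2, x))
```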
For a convex functional F : 𝒫 → ℝ, consider ℒ(μ) = F(μ) + λ Ent(μ), with Ent(μ) = ∫ μ log μ: strictly convex = convex + strictly convex.

Definition (first variation). The first variation δF/δμ : 𝒫 × ℝ^d → ℝ is defined as a continuous functional such that
  lim_{ε↓0} [F(μ + ε(ν − μ)) − F(μ)]/ε = ∫ (δF/δμ)(μ)(x) d(ν − μ)(x) for any ν ∈ 𝒫.

• GLD: dX_t = −∇F(X_t) dt + √(2λ) dB_t (F : ℝ^d → ℝ).
• Mean field Langevin dynamics (MFLD): dX_t = −∇ (δF/δμ)(μ_t)(X_t) dt + √(2λ) dB_t, where μ_t = Law(X_t); the gradient of F is replaced by the gradient of the first variation.

The Fokker–Planck equation of MFLD corresponds to the Wasserstein gradient flow of ℒ:
  ∂μ_t/∂t = ∇·(μ_t ∇ (δF/δμ)(μ_t)) + λ Δμ_t.
Proximal Gibbs measure: p_μ(x) ∝ exp(−(1/λ) (δF/δμ)(μ)(x)).
• The proximal Gibbs measure is a kind of "tentative" target.
• It plays an important role in the convergence analysis.
Linearized objective at μ: ν ↦ ∫ (δF/δμ)(μ)(x) dν(x) + λ Ent(ν), whose minimizer is p_μ.
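A short derivation (standard, not specific to the talk) of why the linearized objective is minimized exactly by the proximal Gibbs measure:

```latex
% The linearized objective equals lambda * KL(nu || p_mu) up to an additive constant:
\[
  \int \frac{\delta F}{\delta \mu}(\mu)(x)\, \mathrm{d}\nu(x) + \lambda \int \nu \log \nu
  \;=\; \lambda\, \mathrm{KL}(\nu \,\|\, p_\mu) \;-\; \lambda \log Z_\mu ,
  \qquad
  Z_\mu = \int \exp\!\Big(-\tfrac{1}{\lambda}\,\tfrac{\delta F}{\delta \mu}(\mu)(x)\Big)\mathrm{d}x ,
\]
\[
  \text{so the minimizer over } \nu \in \mathcal{P} \text{ is } \;
  \nu^\star = p_\mu \propto \exp\!\Big(-\tfrac{1}{\lambda}\,\tfrac{\delta F}{\delta \mu}(\mu)\Big).
\]
```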
[Nitanda, Wu, Suzuki (AISTATS2022)][Chizat (2022)]
Assumption (log-Sobolev inequality): There exists α > 0 such that for any probability measure ν (abs. cont. w.r.t. p_μ),
  KL(ν ‖ p_μ) ≤ (1/(2α)) ∫ ‖∇ log(dν/dp_μ)‖² dν  (KL-div bounded by the Fisher-div).
If p_{μ_t} satisfies the LSI condition for any t ≥ 0, then
  ℒ(μ_t) − ℒ(μ*) ≤ exp(−2αλt) (ℒ(μ_0) − ℒ(μ*)).
This is a non-linear extension of the well-known GLD convergence analysis; c.f., the Polyak–Łojasiewicz condition f(x) − f(x*) ≤ C‖∇f(x)‖². The rate of convergence is characterized by the LSI constant α.
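A schematic of the argument behind this statement (following the convex-analysis viewpoint of the cited works; a sketch, with indicative constants):

```latex
% Entropy dissipation along the Wasserstein gradient flow, then the LSI of p_{mu_t},
% then the convexity-based bound  L(mu_t) - L(mu*) <= lambda * KL(mu_t || p_{mu_t}):
\[
  \frac{\mathrm{d}}{\mathrm{d}t}\,\mathcal{L}(\mu_t)
  = -\lambda^2 \int \Big\|\nabla \log \frac{\mu_t}{p_{\mu_t}}\Big\|^2 \mathrm{d}\mu_t
  \;\le\; -2\alpha\lambda^2\, \mathrm{KL}(\mu_t \,\|\, p_{\mu_t})
  \;\le\; -2\alpha\lambda\,\big(\mathcal{L}(\mu_t) - \mathcal{L}(\mu^*)\big),
\]
\[
  \text{and Gr\"onwall's lemma gives } \;
  \mathcal{L}(\mu_t) - \mathcal{L}(\mu^*)
  \le e^{-2\alpha\lambda t}\,\big(\mathcal{L}(\mu_0) - \mathcal{L}(\mu^*)\big).
\]
```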
The analysis so far concerns the mean-field limit and continuous-time dynamics.
Question: Can we evaluate the finite-particle and discrete-time approximation errors? (A concrete finite-particle, discrete-time update is sketched below.)
[Figure: finite particle approximation — neurons following the vector field induced by the distribution of X_t.]
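For concreteness, a minimal sketch of the finite-particle, discrete-time MFLD that this question refers to: noisy gradient descent on M neurons, assuming tanh neurons, squared loss, and an explicit L2 penalty λ₁ (these modeling choices are illustrative assumptions, not fixed by the slides).

```python
import numpy as np

def mfld_step(W, X, y, lam, lam1, eta, rng):
    """One discrete-time MFLD update for the finite-particle model
    f_W(x) = (1/M) * sum_j tanh(w_j . x) with squared loss.
    Each neuron w_j follows the gradient of the first variation of
    F(mu) = empirical loss + lam1 * E_mu[||w||^2], plus Gaussian noise
    of variance 2*eta*lam coming from the entropy regularization."""
    n = len(y)
    Z = np.tanh(X @ W.T)                        # (n, M) neuron outputs
    residual = Z.mean(axis=1) - y               # (n,)  f_W(x_i) - y_i
    # gradient of delta F / delta mu evaluated at each particle w_j
    grad = ((residual[:, None] * (1.0 - Z**2)).T @ X) / n + 2.0 * lam1 * W
    noise = rng.standard_normal(W.shape)
    return W - eta * grad + np.sqrt(2.0 * eta * lam) * noise

# Usage sketch: M particles in d dimensions on toy XOR-type data.
rng = np.random.default_rng(0)
n, d, M = 200, 10, 512
X = rng.choice([-1.0, 1.0], size=(n, d))
y = X[:, 0] * X[:, 1]
W = rng.standard_normal((M, d))
for _ in range(1000):
    W = mfld_step(W, X, y, lam=1e-2, lam1=1e-3, eta=0.1, rng=rng)
```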
[Figure: snapshots of the particle system at t = 1, 2, 3, 4.]
The finite-particle approximation error can be amplified through time ⇒ it is difficult to bound the perturbation uniformly over time.
Propagation of chaos [Sznitman, 1991; Lacker, 2021]: the particles behave as if they were independent as the number of particles increases to infinity.
• A naïve evaluation gives exponential growth in time: error ≲ exp(t)/N [Mei et al. (2018, Theorem 3)].
• Existing work requires weak interaction / strong regularization.
Theorem (one-step update) [Suzuki, Wu, Nitanda (2023)]
Assumption (+ second-order differentiability):
1. F : 𝒫 → ℝ is convex and has the form F(μ) = L(μ) + λ₁ E_μ[‖X‖²].
2. (Smoothness) ‖∇(δL/δμ)(μ)(x) − ∇(δL/δμ)(ν)(y)‖ ≤ C(W₂(μ, ν) + ‖x − y‖), and (boundedness) ‖∇(δL/δμ)(μ)(x)‖ ≤ R.
Suppose that p_μ (the proximal Gibbs measure) satisfies the log-Sobolev inequality with a constant α. Then, under the smoothness and boundedness of the loss function, a one-step descent inequality holds that accounts for the time-discretization, stochastic-gradient, and finite-particle errors; in particular, the finite-particle error is O(1/N) uniformly in time, in contrast to the naïve bound growing exponentially in t.
[Suzuki, Wu, Nitanda: Convergence of mean-field Langevin dynamics: Time and space discretization, stochastic gradient, and variance reduction. arXiv:2306.07221]
Reference: [Chen, Ren, Wang. Uniform-in-time propagation of chaos for mean field Langevin dynamics. arXiv:2212.03050, 2022.]
Recall: X = (X^i)_{i=1}^N ∼ μ^(N), the joint distribution of the N particles.
Potential of the joint distribution μ^(N) on ℝ^{d×N}:
  ℒ^N(μ^(N)) = N E_{X∼μ^(N)}[F(μ_X)] + λ Ent(μ^(N)), where μ_X = (1/N) Σ_{i=1}^N δ_{X^i}.
• The finite-particle dynamics is the Wasserstein gradient flow that minimizes ℒ^N.
• (Approximate) uniform log-Sobolev inequality [Chen et al., 2022]: for any N, the associated proximal Gibbs measure on ℝ^{d×N} satisfies a log-Sobolev inequality with a constant essentially independent of N; this controls the objective gap by the Fisher divergence and yields uniform-in-time propagation of chaos.
So far: optimization guarantees for the 2-layer NN trained by MFLD.
⇒ How effective is the feature learning of MFLD in terms of generalization error?
• Benefit of feature learning? Neural network vs. kernel method (NTK vs. mean field).
Theorem 1: E[Class. Error] ≤ O(exp(−O(n/R²))) if n ≥ R².
Theorem 2: Suppose that λ = Θ(1/R); then, with high probability, a corresponding bound on the test error holds.
⇒ If we have sufficiently large training data, we obtain exponential convergence of the test error. We only need to evaluate R to obtain a test error bound.
k-sparse parity problem (high-dimensional data):
• X ∼ Unif({−1, 1}^d) (up to freedom of rotation)
• Y = ∏_{j=1}^k X_j: only the first k coordinates are informative.
※ Suppose that we don't know which coordinates the label is aligned to.
• k = 2: the XOR problem. [Figure: the case d = 3, k = 2.]
Q: Can we learn the sparse k-parity with GD? Is there any benefit of neural networks?
Complexity to learn the XOR function (k = 2): see Table 1 of [Telgarsky: Feature selection and low test error in shallow low-rotation ReLU networks, ICLR 2023].
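A tiny data generator for this problem, assuming (as in the slide) that the informative coordinates are the first k; in general they are unknown to the learner.

```python
import numpy as np

def sparse_parity_data(n, d, k, rng=None):
    """X ~ Unif({-1, +1}^d); Y is the parity of the first k coordinates."""
    rng = np.random.default_rng() if rng is None else rng
    X = rng.choice([-1.0, 1.0], size=(n, d))
    y = np.prod(X[:, :k], axis=1)
    return X, y

X, y = sparse_parity_data(n=1000, d=50, k=2)   # k = 2 is the XOR problem
```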
Reminder:
• Theorem 1: E[Class. Error] ≤ O(exp(−O(n/R²))) if n ≥ R².
• Theorem 2: (as above, with λ = Θ(1/R)).
Lemma: Suppose that there exists μ* such that KL(ν, μ*) ≤ R and Y f_{μ*}(X) ≥ m₀ (a perfect classifier with margin m₀), where ν is the reference measure of the regularization. For the k-parity problem, we may take such a μ* explicitly, and thereby evaluate the R required for the k-sparse parity problem; Theorems 1–2 then give the test-error bound with this R.
Corollary (Test accuracy of MFLD)
• Setting 1 (n > d): test error (classification error) = O(exp(−n/R²)).
• Setting 2: test error (classification error) = O(d/n).
(Computational complexity is exp(O(d)); the exponential dependence can be relaxed if X is anisotropic.)
These are better than NTK (kernel method): sample complexity of NTK n = Ω(d^k) vs. NN n = O(d).
⇒ Trade-off between computational complexity and sample complexity.
Our analysis provides
• better sample complexity,
• a discrete-time / finite-width analysis,
• d and k that are "decoupled."
Recap: test error = O(d/n); computational complexity is O(exp(d)).
If the data have an anisotropic covariance, the sample / computational complexities can be much improved.
[Figure: true signal under an isotropic vs. an anisotropic data distribution.]
# of iterations: determined by the covariance structure (the isotropic case Σ = I yields exp(d)).
⇒ The data structure affects the complexities.
The number of iterations can be bounded as follows. By substituting the covariance structure, the # of iterations can be summarized as:
• Isotropic setting: exp(O(d)).
• Anisotropic setting: no exponential dependence on d (polynomial order under sufficiently strong anisotropy).
The anisotropic structure mitigates the computational complexity. In particular, there is no exponential dependency on d when the anisotropy is strong enough (assuming k = O(1)).
Kernel methods: mean field NNs can "decouple" d and k, while kernel methods have an exponential relation between them.
Setting: the k-sparse parity problem as above.
Theorem (kernel lower bound): For arbitrary δ > 0, the sample complexity of kernel methods is lower bounded as n = Ω(d^{k−δ}).
In the anisotropic case, we may estimate the "informative direction" from the gradients at the initialization.
• Then, G (built from the initial gradients) estimates the informative direction.
• By the following coordinate transformation, the dependence on d can be removed (exp(d) → exp(k)): the transformed input is effectively isotropic and low-dimensional, avoiding the curse of dimensionality.
[Figure: true signal before and after the transformation.]
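An illustrative sketch of this idea for the XOR case (k = 2). The estimator below uses the label-weighted second moment of the data rather than the exact gradient-based construction of G in the talk, so it is a stand-in under that assumption: for XOR, E[Y·X Xᵀ] is non-zero exactly on the informative coordinate pair, so its top singular directions recover the informative subspace, and projecting onto them reduces the effective dimension from d to k.

```python
import numpy as np

rng = np.random.default_rng(0)
n, d, k = 4000, 50, 2
X = rng.choice([-1.0, 1.0], size=(n, d))
y = X[:, 0] * X[:, 1]                  # XOR label; coordinates 0 and 1 are informative

# Label-weighted second moment: its population version E[Y X X^T] has non-zero
# entries only at positions (0, 1) and (1, 0) for the XOR target.
M = (X * y[:, None]).T @ X / n
U, _, _ = np.linalg.svd(M)
G_hat = U[:, :k]                       # estimated informative directions

X_low = X @ G_hat                      # coordinate transformation: d dims -> k dims
print(np.abs(G_hat[:5]))               # mass concentrates on the first two coordinates
```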
This Ω(d^{k−1}) sample complexity is optimal for methods with polynomial-order computational complexity [Abbe et al. (2023); Refinetti et al. (2021); Ben Arous et al. (2022); Damian et al. (2022)].
• On the other hand, our analysis is about full-batch GD.
Comparison (minibatch size / # of iterations / sample complexity):
• Our analysis: n / exp(O(d)) / O(d)
• SGD (CSQ lower bound): 1 / d^{k−1} / d^{k−1}
We obtain a better sample complexity than O(d^{k−1}) at the cost of a higher computational complexity.
⇒ With MFLD for anisotropic input, we can obtain a polynomial-order method.
Summary: mean field analysis of 2-layer NNs
• Optimizing a convex functional.
• Convergence guarantee (Wasserstein gradient flow, uniform-in-time propagation of chaos).
• Generalization error of the mean field 2-layer NN:
  • fast learning rate;
  • sparse k-parity problem: better sample complexity than kernel methods;
  • the structure of the data (anisotropic covariance) can improve the complexities.
[Figure: sample-complexity comparison of mean field NN vs. kernel methods and the kernel lower bound.]