Ambrosio, L., Gigli, N., and Savaré, G. (2008). Gradient flows: in metric spaces and in the space of probability measures. Springer Science & Business Media.
Arbel, M., Korba, A., Salim, A., and Gretton, A. (2019). Maximum mean discrepancy gradient flow. In Advances in Neural Information Processing Systems, pages 6481–6491.
Chewi, S., Le Gouic, T., Lu, C., Maunu, T., and Rigollet, P. (2020). SVGD as a kernelized Wasserstein gradient flow of the chi-squared divergence. Advances in Neural Information Processing Systems, 33:2098–2109.
Chizat, L. and Bach, F. (2018). On the global convergence of gradient descent for over-parameterized models using optimal transport. Advances in Neural Information Processing Systems, 31.
Chopin, N., Crucinio, F. R., and Korba, A. (2023). A connection between tempering and entropic mirror descent. arXiv preprint arXiv:2310.11914.
Craig, K., Elamvazhuthi, K., Haberland, M., and Turanova, O. (2022). A blob method for inhomogeneous diffusion with applications to multi-agent control and sampling. arXiv preprint arXiv:2202.12927.
Dolbeault, J., Gentil, I., Guillin, A., and Wang, F.-Y. (2007). Lq-functional inequalities and weighted porous media equations. arXiv preprint math/0701037.
Durmus, A. and Moulines, E. (2016). Sampling from strongly log-concave distributions with the unadjusted Langevin algorithm. arXiv preprint arXiv:1605.01559.