Regularized optimal transport (OT) has received much attention in recent years, starting with Cuturi's paper on Kullback-Leibler (KL) divergence regularized OT. In this paper, we propose to regularize the OT problem using the family of α-Rényi divergences for α∈(0,1). Rényi divergences are neither f-divergences nor Bregman distances, but they recover the KL divergence in the limit α↗1. The advantage of introducing the additional parameter α is that as α↘0 we obtain convergence to the unregularized OT problem. For the KL regularized OT problem, this was previously achieved by letting the regularization parameter tend to zero, which causes numerical instabilities. We present two different ways to obtain premetrics on probability measures, namely via Rényi divergence constraints and via penalization. The latter premetric interpolates between the unregularized and the KL regularized OT problem, with weak convergence of the minimizer, generalizing the interpolating property of KL regularized OT. We use a nested mirror descent algorithm to solve the primal formulation. On both real and synthetic data sets, Rényi regularized OT plans outperform their KL and Tsallis counterparts: they are closer to the unregularized transport plans and recover the ground truth better in inference tasks.
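For concreteness, here is a sketch of the objects in question, using standard definitions; the exact conventions (in particular the reference measure in the penalized problem) are assumptions here and may differ from those in the paper. For discrete probability vectors $\mu, \nu$, the α-Rényi divergence of order $\alpha \in (0,1)$ is
\[
R_\alpha(\mu \,\|\, \nu) = \frac{1}{\alpha - 1} \log \sum_i \mu_i^\alpha \nu_i^{1-\alpha},
\]
which recovers the KL divergence $\mathrm{KL}(\mu \,\|\, \nu) = \sum_i \mu_i \log(\mu_i/\nu_i)$ as $\alpha \nearrow 1$. Assuming, as in the KL case, the product reference measure, the penalized problem then reads
\[
\mathrm{OT}_{\varepsilon,\alpha}(\mu,\nu) = \min_{\pi \in \Pi(\mu,\nu)} \langle C, \pi \rangle + \varepsilon\, R_\alpha(\pi \,\|\, \mu \otimes \nu),
\]
where $\Pi(\mu,\nu)$ denotes the set of couplings with marginals $\mu$ and $\nu$, and $C$ is the cost matrix.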
This is joint work with Jonas Bresch (TU Berlin), available at https://arxiv.org/abs/2404.18834.
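The following is a minimal, generic Python sketch of entropic mirror descent on the penalized objective above, not the authors' nested algorithm (see the paper for that). It assumes the product reference measure μ⊗ν; the function names (`mirror_descent_renyi_ot`, `sinkhorn_projection`) are illustrative only.

```python
import numpy as np

def sinkhorn_projection(K, mu, nu, n_iter=200):
    """Bregman (KL) projection of a positive matrix K onto the couplings Pi(mu, nu)."""
    u = np.ones_like(mu)
    v = np.ones_like(nu)
    for _ in range(n_iter):
        u = mu / (K @ v)
        v = nu / (K.T @ u)
    return u[:, None] * K * v[None, :]

def mirror_descent_renyi_ot(C, mu, nu, alpha=0.5, eps=0.05, lr=1.0, n_outer=100):
    """Sketch: entropic mirror descent on <C, pi> + eps * R_alpha(pi || mu x nu)."""
    ref = np.outer(mu, nu)   # product reference measure (assumption)
    pi = ref.copy()          # strictly positive, feasible starting coupling
    for _ in range(n_outer):
        # gradient of the penalized objective with respect to pi:
        # d/dpi R_alpha(pi||ref) = (alpha/(alpha-1)) * pi^(alpha-1) * ref^(1-alpha) / S
        s = np.sum(pi**alpha * ref**(1.0 - alpha))
        grad = C + eps * (alpha / (alpha - 1.0)) * pi**(alpha - 1.0) * ref**(1.0 - alpha) / s
        # exponentiated-gradient (mirror) step, then project back onto Pi(mu, nu)
        pi = sinkhorn_projection(pi * np.exp(-lr * grad), mu, nu)
    return pi
```

For example, `mirror_descent_renyi_ot(C, mu, nu, alpha=0.1)` should produce a plan close to the unregularized one, while `alpha` near 1 behaves like KL (Sinkhorn-style) regularization; the multiplicative updates keep the plan strictly positive, so the term `pi**(alpha - 1.0)` stays finite.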