Linear Convergence Bounds for Diffusion Models via Stochastic Localization
Diffusion models are a powerful method for generating approximate samples from high-dimensional data distributions. Several recent results have provided polynomial bounds on the convergence rate of such models, assuming L^2-accurate score estimators. However, until now the best known bounds of this kind were either superlinear in the data dimension or required strong smoothness assumptions. We provide the first convergence bounds that are linear in the data dimension (up to logarithmic factors), assuming only finite second moments of the data distribution. We show that diffusion models require at most Õ(d log^2(1/δ)/ε^2) steps to approximate an arbitrary data distribution on ℝ^d, corrupted with Gaussian noise of variance δ, to within ε^2 in Kullback–Leibler divergence. Our proof builds on the Girsanov-based methods of previous works. We introduce a refined treatment of the error arising from the discretization of the reverse SDE, drawing on tools from stochastic localization.
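In symbols, the stated guarantee can be sketched as follows; here p_δ (the data distribution convolved with Gaussian noise of variance δ) and q (the law of the sampler output) are illustrative names, not notation fixed by the abstract.

```latex
% Sketch of the claimed bound under the naming assumptions above:
% p_\delta = data distribution convolved with N(0, \delta I_d),
% q        = law of the diffusion model's output.
\[
  \mathrm{KL}\!\left(p_\delta \,\middle\|\, q\right) \;\le\; \varepsilon^2
  \qquad \text{after } \;
  N \;=\; \widetilde{O}\!\left(\frac{d \,\log^2(1/\delta)}{\varepsilon^2}\right)
  \; \text{discretization steps.}
\]
```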