Generative Modelling with Tensor Train approximations of Hamilton–Jacobi–Bellman equations
CoRR (2024)
Abstract
Sampling from probability densities is a common challenge in fields such as
Uncertainty Quantification (UQ) and Generative Modelling (GM). In GM in
particular, reverse-time diffusion processes that depend on the log-densities
of Ornstein-Uhlenbeck forward processes are a popular sampling tool. In
Berner et al. [2022] the authors point out that these log-densities can be
obtained by solving a Hamilton-Jacobi-Bellman (HJB) equation known from
stochastic optimal control. While this HJB equation is usually treated with
indirect methods such as policy iteration and unsupervised training of
black-box architectures like neural networks, we propose instead to solve the
HJB equation by direct time integration, using compressed polynomials
represented in the Tensor Train (TT) format for spatial discretization.
Crucially, this method is sample-free, agnostic to normalization constants,
and can avoid the curse of dimensionality thanks to the TT compression. We
provide a complete derivation of the HJB equation's action on Tensor Train
polynomials and demonstrate the performance of the proposed time-step-,
rank-, and degree-adaptive integration method on a nonlinear sampling task in
20 dimensions.
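
The abstract does not state the HJB equation itself. As a minimal sketch of the standard construction (assuming the forward process is the d-dimensional Ornstein-Uhlenbeck SDE dX_t = -X_t dt + sqrt(2) dW_t, as in Berner et al. [2022]), the log-density of the forward marginals satisfies:

```latex
% Fokker--Planck equation for the OU process dX_t = -X_t\,dt + \sqrt{2}\,dW_t:
%   \partial_t p_t = \nabla\cdot(x\,p_t) + \Delta p_t .
% Substituting p_t = e^{u_t}, i.e. u_t = \log p_t, and dividing by p_t gives
\partial_t u_t = d + x\cdot\nabla u_t + \Delta u_t + \lVert\nabla u_t\rVert^2 ,
% where the quadratic term \lVert\nabla u_t\rVert^2 plays the role of the
% Hamiltonian, making this a Hamilton--Jacobi--Bellman equation. Shifting u_t
% by a constant leaves the equation invariant, which is why a method solving
% it can be agnostic to normalization constants.
```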
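As background on the spatial discretization, the sketch below (not the authors' implementation; the function name eval_tt_polynomial and the monomial basis are illustrative choices) shows how a d-variate polynomial stored as Tensor Train cores can be evaluated by contracting one small three-way core per dimension:

```python
import numpy as np

def eval_tt_polynomial(cores, x, degree):
    """Evaluate a d-variate polynomial stored in TT format at point x.

    cores[k] has shape (r_{k-1}, n, r_k) with n = degree + 1 and
    r_0 = r_d = 1; the univariate basis is assumed to be monomials.
    """
    v = np.ones((1,))                               # boundary rank vector
    for k, core in enumerate(cores):
        phi = x[k] ** np.arange(degree + 1)         # basis values at x[k]
        mat = np.einsum('inj,n->ij', core, phi)     # contract the basis index
        v = v @ mat                                 # propagate the rank index
    return v.item()                                 # final rank is 1

# Illustrative use: a random rank-2 TT polynomial in d = 20 dimensions.
d, deg, r = 20, 3, 2
rng = np.random.default_rng(0)
ranks = [1] + [r] * (d - 1) + [1]
cores = [rng.standard_normal((ranks[k], deg + 1, ranks[k + 1]))
         for k in range(d)]
print(eval_tt_polynomial(cores, rng.standard_normal(d), deg))
```

Storing the coefficients this way costs on the order of d * n * r^2 numbers instead of n^d, which is the sense in which TT compression can avoid the curse of dimensionality as long as the ranks stay moderate.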