Efficient approximation of high-dimensional exponentials by tensor networks

arXiv (2023)

Abstract
In this work, a general approach is presented for computing a compressed representation of the exponential exp(h) of a high-dimensional function h. Such exponential functions play an important role in several problems in uncertainty quantification, e.g., the approximation of log-normal random fields or the evaluation of Bayesian posterior measures. Usually, these high-dimensional objects are numerically intractable and can only be accessed pointwise by sampling methods. In contrast, the proposed method constructs a functional representation of the exponential by exploiting its nature as the solution of a partial differential equation. Applying a Petrov-Galerkin scheme to this equation yields a tensor train representation of the solution, for which we derive an efficient and reliable a posteriori error estimator. This estimator can be used in conjunction with any approximation method, and the differential equation may be adapted such that the error estimates are equivalent to a problem-related norm. Numerical experiments with log-normal random fields and Bayesian likelihoods illustrate the performance of the approach in comparison to other recent low-rank representations for the respective applications. Although the present work considers only a specific differential equation, the method applies in a more general setting: we show that it can be used to compute compressed representations of φ(h) for any holonomic function φ.
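To make the driving idea concrete, the following is a minimal one-dimensional sketch, not taken from the paper and far simpler than its tensor-train Petrov-Galerkin scheme: for a scalar h, the function u(t) = exp(t·h) is the unique solution of the linear ODE u'(t) = h·u(t) with u(0) = 1, so exp(h) = u(1) can be recovered by discretizing this equation in a basis instead of evaluating exp pointwise. The function name exp_via_ode, the monomial ansatz, and the collocation at Chebyshev points are illustrative choices, not the paper's construction.

```python
import numpy as np

def exp_via_ode(h, degree=12):
    """Approximate exp(h) by solving u'(t) = h*u(t), u(0) = 1, on [0, 1]
    with a monomial ansatz u(t) = sum_k c_k t^k collocated at Chebyshev points."""
    # Chebyshev-Lobatto collocation points mapped to [0, 1]
    t = 0.5 * (1.0 + np.cos(np.pi * np.arange(degree) / (degree - 1)))
    V = np.vander(t, degree + 1, increasing=True)            # V[i, k] = t_i**k
    dV = np.hstack([np.zeros((degree, 1)),
                    V[:, :-1] * np.arange(1, degree + 1)])   # dV[i, k] = k * t_i**(k-1)
    # Rows: ODE residual u'(t_i) - h*u(t_i) = 0; last row enforces u(0) = c_0 = 1
    A = np.vstack([dV - h * V, np.eye(degree + 1)[0]])
    b = np.concatenate([np.zeros(degree), [1.0]])
    c, *_ = np.linalg.lstsq(A, b, rcond=None)
    return c.sum()  # u(1) = sum_k c_k

print(exp_via_ode(2.0), np.exp(2.0))  # both ≈ 7.389056...
```

The same principle scales to the high-dimensional case treated in the paper, where u is a function of many variables, the differential equation is solved in the tensor train format, and the residual drives the a posteriori error estimator.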
Keywords
uncertainty quantification, dynamical system approximation, Petrov-Galerkin, a posteriori error bounds, tensor product methods, tensor train format, holonomic functions, Bayesian likelihoods, log-normal random field