Approximating Quasi-Stationary Distributions with Interacting Reinforced Random Walks

ESAIM: Probability and Statistics (2022)

Abstract
We propose two numerical schemes for approximating quasi-stationary distributions (QSD) of finite-state Markov chains with absorbing states. Both schemes are described in terms of certain interacting chains in which the interaction is given by the total time occupation measure of all particles in the system, and has the effect of reinforcing transitions, in an appropriate fashion, to states where the collection of particles has spent more time. The schemes can be viewed as combining the key features of the two basic simulation-based methods for approximating QSD, originating from the works of Fleming and Viot (1979) and of Aldous, Flannery and Palacios (1998), respectively. The key difference between the two schemes studied here is that in the first method one starts with a(n) particles at time 0 and the number of particles stays constant over time, whereas in the second method one starts with one particle and at most one particle is added at each time instant, in such a manner that there are a(n) particles at time n. We prove almost sure convergence to the unique QSD and establish Central Limit Theorems for the two schemes under the key assumption that a(n) = o(n). When a(n) ~ n, the fluctuation behavior is expected to be non-standard. Some exploratory numerical results are presented to illustrate the performance of the two approximation schemes.
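The reinforcement mechanism described above can be illustrated, in its simplest single-particle form, by the classical scheme of Aldous, Flannery and Palacios: run the chain until absorption, then restart it from a state drawn from the empirical occupation measure accumulated so far; the normalized occupation measure then approximates the QSD. The following is a minimal sketch on a hypothetical three-state chain (the transition probabilities and function names are illustrative assumptions, not taken from the paper):

```python
import random

# Hypothetical toy chain: state 0 is absorbing; states 1 and 2 are transient.
# Each row lists (next_state, probability) pairs summing to 1.
P = {
    1: [(0, 0.2), (1, 0.5), (2, 0.3)],
    2: [(0, 0.2), (1, 0.4), (2, 0.4)],
}

def step(state, rng):
    """Sample one transition of the chain from `state`."""
    r = rng.random()
    acc = 0.0
    for nxt, p in P[state]:
        acc += p
        if r < acc:
            return nxt
    return P[state][-1][0]

def approximate_qsd(n_steps, seed=0):
    """Single-particle reinforced scheme: on absorption, restart from a
    state drawn from the total time occupation measure so far."""
    rng = random.Random(seed)
    counts = {1: 0, 2: 0}  # occupation measure over transient states
    state = 1
    for _ in range(n_steps):
        counts[state] += 1
        nxt = step(state, rng)
        if nxt == 0:
            # Absorbed: resample position proportionally to time spent,
            # which is exactly the self-reinforcing interaction.
            total = counts[1] + counts[2]
            state = 1 if rng.random() < counts[1] / total else 2
        else:
            state = nxt
    total = counts[1] + counts[2]
    return {s: counts[s] / total for s in counts}
```

For this toy chain the QSD is the normalized left eigenvector of the substochastic matrix restricted to {1, 2}, namely (4/7, 3/7), and a long run of the scheme should concentrate near it. The interacting schemes of the paper replace this single particle by a(n) particles sharing one occupation measure.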
Keywords
Quasi-stationary distributions, stochastic approximation, interacting particles, central limit theorem, reinforced random walks, self-interaction, Fleming-Viot particle approximations