
Indoor Periodic Fingerprint Collections by Vehicular Crowdsensing via Primal-Dual Multi-Agent Deep Reinforcement Learning

Haoming Yang, Qiran Zhao, Hao Wang, Chi Harold Liu, Guozheng Li, Guoren Wang, Jian Tang, Dapeng Wu

IEEE Journal on Selected Areas in Communications (2024)

Abstract
Indoor localization is drawing increasing attention due to the growing demand for various location-based services. Fingerprinting is a popular data-driven technique that does not rely on complex measurement equipment, yet it requires site surveys that are both labor-intensive and time-consuming. Vehicular crowdsensing (VCS) with unmanned vehicles (UVs) is a novel paradigm that navigates a group of UVs to periodically collect sensory data from certain points of interest (PoIs, i.e., coverage holes in localization scenarios). In this paper, we formulate the multi-floor indoor fingerprint collection task with periodic PoI coverage requirements as a constrained optimization problem. We then propose a multi-agent deep reinforcement learning (MADRL) based solution, “MADRL-PosVCS”, which uses a primal-dual framework to transform the above optimization problem into its unconstrained dual, with adjustable Lagrangian multipliers to ensure periodic fingerprint collection. We also propose a novel intrinsic reward mechanism consisting of the mutual information between a UV’s observations and the environment transition probability parameterized by a Bayesian Neural Network (BNN) for exploration, and an elevator-based reward that allows UVs to move across different floors for collaborative fingerprint collection. Extensive simulation results on three real-world datasets from SML Center (Shanghai), Joy City (Hangzhou), and Haopu Fashion City (Shanghai) show that MADRL-PosVCS outperforms four baselines in terms of fingerprint collection ratio, PoI coverage ratio across collection intervals, geographic fairness, and average moving distance.
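As a rough illustration of the primal-dual construction mentioned in the abstract (not the paper's exact formulation), a constrained policy-optimization objective with periodic coverage constraints can be relaxed into an unconstrained Lagrangian. The reward R, constraint costs C_k, thresholds d_k, and multipliers lambda_k below are generic placeholders, not the authors' notation:

\max_{\pi}\ \mathbb{E}_{\pi}\!\Big[\sum_{t}\gamma^{t}R(s_t,a_t)\Big]
\quad \text{s.t.} \quad
\mathbb{E}_{\pi}\!\Big[\sum_{t}\gamma^{t}C_k(s_t,a_t)\Big]\le d_k,\ \ k=1,\dots,K

\min_{\lambda\ge 0}\ \max_{\pi}\ \mathcal{L}(\pi,\lambda)
=\mathbb{E}_{\pi}\!\Big[\sum_{t}\gamma^{t}R(s_t,a_t)\Big]
-\sum_{k}\lambda_k\Big(\mathbb{E}_{\pi}\!\Big[\sum_{t}\gamma^{t}C_k(s_t,a_t)\Big]-d_k\Big)

Training typically alternates between a primal step that improves the policy for fixed multipliers and a dual step that increases lambda_k whenever the k-th constraint is violated, which is the standard mechanism behind the adjustable Lagrangian multipliers mentioned above.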
Keywords
Vehicular crowdsensing, Indoor fingerprint collection, Multi-agent deep reinforcement learning