Multi-User Mobile Augmented Reality with ID-Aware Visual Interaction

ACM TRANSACTIONS ON SENSOR NETWORKS (2024)

Abstract
Most existing multi-user Augmented Reality (AR) systems only allow multiple co-located users to view a common set of virtual objects, but lack the ability to let each user directly interact with other users appearing in their view. Such multi-user AR systems should be able to detect human keypoints and estimate device poses (for identifying different users) simultaneously. However, due to the stringent low-latency requirements and the intensive computation of these two capabilities, previous research enables only one of the two on mobile devices, even with the aid of an edge server. Integrating the two capabilities is promising but non-trivial in terms of latency, accuracy, and matching. To fill this gap, we propose DiTing to achieve real-time ID-aware multi-device visual interaction for multi-user AR applications, which contains three key innovations: Shared On-device Tracking to merge similar computation for optimized latency, Tightly Coupled Dual Pipeline to enhance the accuracy of each task through mutual assistance, and Body Affinity Particle Filter to precisely match device poses with human bodies. We implement DiTing on four types of mobile AR devices and develop a multi-user AR game as a case study. Extensive experiments show that DiTing can provide high-quality human keypoint detection and pose estimation in real time (30 fps) for ID-aware multi-device interaction, outperforming state-of-the-art baseline approaches.
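To illustrate the matching problem that the Body Affinity Particle Filter addresses, the toy sketch below maintains a set of particles over candidate body IDs for one device and reweights them by motion affinity between the device's pose trajectory and each detected body's keypoint trajectory. All function names, the affinity metric, and the 2-D position representation are hypothetical simplifications for illustration, not the paper's actual algorithm.

```python
import math
import random

def motion_affinity(device_delta, body_delta):
    """Hypothetical affinity: higher when the device's frame-to-frame motion
    resembles the body's motion (Gaussian-like kernel on displacement distance)."""
    return math.exp(-math.dist(device_delta, body_delta))

def match_device_to_body(device_track, body_tracks, n_particles=100, seed=0):
    """Toy particle filter over candidate body IDs for a single device.

    device_track: list of (x, y) device positions, one per frame
    body_tracks:  dict body_id -> list of (x, y) body positions, one per frame
    Returns the body_id holding the most particles after filtering.
    """
    rng = random.Random(seed)
    ids = list(body_tracks)
    # Initialize particles uniformly over candidate body IDs.
    particles = [rng.choice(ids) for _ in range(n_particles)]
    for t in range(1, len(device_track)):
        d_delta = (device_track[t][0] - device_track[t - 1][0],
                   device_track[t][1] - device_track[t - 1][1])
        # Weight each particle by how well its body's motion matches the device's.
        weights = []
        for pid in particles:
            b = body_tracks[pid]
            b_delta = (b[t][0] - b[t - 1][0], b[t][1] - b[t - 1][1])
            weights.append(motion_affinity(d_delta, b_delta))
        # Resample particles in proportion to their weights.
        particles = rng.choices(particles, weights=weights, k=n_particles)
    return max(set(particles), key=particles.count)
```

In this simplified setting, a device whose estimated trajectory moves consistently with one body's keypoints accumulates particles on that body's ID, yielding a probabilistic device-to-body assignment.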
Keywords
Multi-user AR interaction,human keypoint detection,pose estimation,edge computing