Learning Heterogeneous Relation Graph and Value Regularization Policy for Visual Navigation

IEEE Transactions on Neural Networks and Learning Systems (2023)

Abstract
The goal of visual navigation is to steer an agent to a given target object using its current observation. Learning an informative visual representation and a robust navigation policy is crucial in this task. To improve these two components, we propose three complementary techniques: a heterogeneous relation graph (HRG), a value-regularized navigation policy (VRP), and gradient-based meta-learning (ML). HRG integrates object relationships, including semantic closeness and spatial direction, e.g., a knife usually co-occurs with a bowl semantically or lies to the left of a fork spatially; this improves visual representation learning. Both VRP and gradient-based ML make the navigation policy more robust, helping the agent escape from deadlock states such as being stuck or looping. Specifically, gradient-based ML is a supervision method used in policy-network training that narrows the gap between the seen and unseen environment distributions. VRP maximizes the mutual information between the visual observation and the navigation policy, leading to more informed navigation decisions. Our framework outperforms the current state of the art (SOTA) in terms of success rate and Success weighted by Path Length (SPL). Our HRG outperforms the Visual Genome knowledge graph on cross-scene generalization, with ≈56% and ≈39% improvements on Hits@5 (the proportion of correct entities ranked in the top 5) and MRR (mean reciprocal rank), respectively. Our code and HRG datasets will be made publicly available to the research community.
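The abstract does not spell out how the mutual information between observation and policy is estimated. As a rough illustration only (not the authors' code), the sketch below assumes a discrete action space and a hypothetical policy network that outputs per-observation action logits, and uses the identity I(O; A) = E_o[KL(π(·|o) ‖ p(A))], where p(A) is the marginal action distribution over a batch; the coefficient beta is a made-up hyperparameter.

```python
# Minimal sketch, assuming a discrete-action navigation policy whose
# network emits (batch, num_actions) logits; not the paper's implementation.
import torch
import torch.nn.functional as F

def mutual_information_bonus(logits: torch.Tensor) -> torch.Tensor:
    """Batch estimate of I(O; A) for a discrete policy.

    Uses I(O; A) = E_o[ KL( pi(.|o) || p(A) ) ], where p(A) is the
    marginal action distribution averaged over the batch.
    """
    log_pi = F.log_softmax(logits, dim=-1)       # log pi(a|o)
    pi = log_pi.exp()                            # pi(a|o)
    marginal = pi.mean(dim=0, keepdim=True)      # p(a), batch average
    kl = (pi * (log_pi - marginal.clamp_min(1e-8).log())).sum(dim=-1)
    return kl.mean()

if __name__ == "__main__":
    # 6 actions as a stand-in for typical nav actions (move/rotate/done).
    logits = torch.randn(32, 6, requires_grad=True)
    pg_loss = torch.tensor(0.0)  # placeholder for the usual policy-gradient loss
    beta = 0.01                  # assumed regularization weight
    # Subtracting the bonus means minimizing the loss also maximizes I(O; A).
    loss = pg_loss - beta * mutual_information_bonus(logits)
    loss.backward()
```

One design note on this form of regularizer: because the KL term is zero whenever the policy ignores its observation (π(·|o) equals the marginal for every o), maximizing it pushes the action distribution to depend on the visual input, which matches the abstract's claim that VRP yields more informed navigation decisions.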
Keywords
Knowledge graph, meta-learning (ML), reinforcement learning (RL), value regularization policy, visual navigation