
PVSPE: A Pyramid Vision Multitask Transformer Network for Spacecraft Pose Estimation

Advances in Space Research (2024)

Abstract
Spacecraft pose estimation (SPE) plays a vital role in the relative navigation systems used for on-orbit servicing and active debris removal. Deep learning-based methods have achieved strong results in object pose estimation. However, for challenging onboard SPE missions, most existing Convolutional Neural Network (CNN) methods fail to capture long-range visual attention, which reduces accuracy and robustness. In this paper, we present PVSPE, an end-to-end multi-task Pyramid Vision Transformer SPE network consisting of two novel feature extraction modules: EnhancedPVT (EnPVT) and SlimGFPN. The EnPVT module combines global spatial and channel attention, while the SlimGFPN module fuses features more effectively. Matrix Fisher and multivariate Gaussian distributions are further employed to model the uncertainty of pose regression and thereby improve its accuracy. Extensive experiments are carried out on the challenging SPEED+ and SHIRT datasets to validate performance on pose estimation and vision-based navigation, respectively. The results show that the proposed PVSPE model achieves high SPE accuracy on the SPEED+ dataset even under varying scales and severe illumination, demonstrating its robustness and strong generalization. Leveraging PVSPE's uncertainty model, the vision-based navigation pipeline, combined with Kalman filters, accurately estimates the satellite pose in challenging rendezvous scenarios on the SHIRT dataset, with degree-level attitude errors and centimeter-level translation accuracy at steady state.
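The abstract does not spell out the uncertainty-aware regression losses. As a minimal sketch of the two distributions it names (not the paper's actual implementation), the rotation branch can be scored with the unnormalized log-density of a matrix Fisher distribution on SO(3), and the translation branch with a multivariate Gaussian negative log-likelihood; the function names and the omission of the matrix Fisher normalizer are assumptions for illustration:

```python
import numpy as np

def matrix_fisher_log_unnorm(F, R):
    """Unnormalized log-density of the matrix Fisher distribution on SO(3):
    log p(R) = tr(F^T R) - log c(F).
    F (3x3) is the predicted concentration parameter, R the rotation matrix.
    The normalizing constant c(F) is omitted in this sketch."""
    return np.trace(F.T @ R)

def gaussian_nll(t, mu, Sigma):
    """Negative log-likelihood of a multivariate Gaussian: the translation
    branch predicts a mean mu and covariance Sigma for the target t."""
    d = t - mu
    k = len(t)
    _, logdet = np.linalg.slogdet(Sigma)
    return 0.5 * (d @ np.linalg.solve(Sigma, d) + logdet + k * np.log(2 * np.pi))
```

In such a setup, the predicted covariance Sigma doubles as a per-frame measurement-noise estimate, which is what allows a downstream filter to weight measurements by their confidence.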
Keywords
Spacecraft pose estimation, Computer vision, Deep learning
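The abstract states that the navigation pipeline combines PVSPE's pose measurements with Kalman filters. As a generic illustration of that fusion step (not the paper's specific filter design), a standard Kalman measurement update can ingest each pose measurement together with its predicted covariance as the measurement-noise matrix R; the state layout and function name here are assumptions:

```python
import numpy as np

def kalman_update(x, P, z, H, R):
    """Standard Kalman measurement update.
    x, P : prior state estimate and covariance
    z    : measurement (e.g. a translation estimate from the pose network)
    H    : measurement matrix mapping state to measurement space
    R    : measurement noise covariance (e.g. the network's predicted uncertainty)
    """
    y = z - H @ x                       # innovation
    S = H @ P @ H.T + R                 # innovation covariance
    K = P @ H.T @ np.linalg.inv(S)      # Kalman gain
    x_new = x + K @ y                   # updated state
    P_new = (np.eye(len(x)) - K @ H) @ P  # updated covariance
    return x_new, P_new
```

A confident measurement (small R) pulls the state strongly toward z, while an uncertain one is largely ignored, which is how a learned uncertainty model can improve steady-state tracking accuracy.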