Multi-Modal Autonomous Ultrasound Scanning for Efficient Human–Machine Fusion Interaction

IEEE Transactions on Automation Science and Engineering (2024)

Abstract
Robotic autonomous ultrasound imaging is a challenging task, as robots require strong analytical capabilities to make sound decisions about complex spatial relationships. In this paper, we integrate visual and tactile information into an ultrasound robotic system, drawing inspiration from how human doctors conduct ultrasound scans, and explore the impact of different modalities of information on our task. The proposed multimodal deep reinforcement learning (DRL) framework integrates real-time visual feedback and tactile perception and directly outputs 6D pose decisions to control the ultrasound probe, thereby achieving fully autonomous ultrasound imaging of soft, movable, and unmarked targets. We demonstrate the feasibility of our method on a simulation platform, propose an effective model transfer learning method, and then further evaluate the approach in a real-world environment. The results indicate that our approach effectively enhances the performance of autonomous ultrasound scanning, and that manual adjustments further improve the outcomes.

Note to Practitioners — This work is motivated by the increasing demand for intelligent human–machine interaction in medical applications. By improving the automation of traditional medical scanning procedures such as ultrasound scanning, scanning efficiency can be greatly improved. In this work, we propose a multimodal autonomous ultrasound scanning system based on DRL, which can be applied to improve the efficiency of human–machine interaction in medical environments, whether for daily health screening or in emergency situations.
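The abstract describes a policy that fuses real-time visual and tactile feedback and directly emits 6D pose commands for the probe. The PyTorch sketch below illustrates what such a multimodal policy head could look like; it is not the authors' code, and all layer names, feature dimensions, and the late-fusion scheme are illustrative assumptions.

```python
# Minimal sketch (not the paper's implementation) of a multimodal DRL policy
# that fuses camera images with 6-axis force/torque readings and outputs a
# 6-DoF pose adjustment. Dimensions and fusion scheme are assumptions.
import torch
import torch.nn as nn

class MultiModalPolicy(nn.Module):
    def __init__(self, force_dim: int = 6, action_dim: int = 6):
        super().__init__()
        # Visual branch: small CNN encoder for RGB observations.
        self.vision = nn.Sequential(
            nn.Conv2d(3, 16, kernel_size=5, stride=2), nn.ReLU(),
            nn.Conv2d(16, 32, kernel_size=5, stride=2), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1), nn.Flatten(),
        )
        # Tactile branch: MLP over the force/torque vector from the probe.
        self.tactile = nn.Sequential(nn.Linear(force_dim, 32), nn.ReLU())
        # Fusion head: concatenate modality features, regress the action.
        self.head = nn.Sequential(
            nn.Linear(32 + 32, 64), nn.ReLU(),
            nn.Linear(64, action_dim), nn.Tanh(),  # bounded 6D pose delta
        )

    def forward(self, image: torch.Tensor, force: torch.Tensor) -> torch.Tensor:
        fused = torch.cat([self.vision(image), self.tactile(force)], dim=-1)
        return self.head(fused)  # (batch, 6): [dx, dy, dz, droll, dpitch, dyaw]

# Example rollout step: one camera frame plus one force/torque sample.
policy = MultiModalPolicy()
img = torch.randn(1, 3, 84, 84)   # RGB observation (hypothetical resolution)
ft = torch.randn(1, 6)            # force/torque sensor reading
action = policy(img, ft)          # 6D pose adjustment for the probe
```

In a DRL setting, a network of this shape would serve as the actor, trained in simulation and then transferred to hardware, consistent with the sim-to-real pipeline the abstract outlines.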
Keywords
Autonomous ultrasound scanning, deep reinforcement learning, multimodal