Adapting SAM for Surgical Instrument Tracking and Segmentation in Endoscopic Submucosal Dissection Videos
arXiv (2024)
Abstract
The precise tracking and segmentation of surgical instruments have led to a
remarkable enhancement in the efficiency of surgical procedures. However, the
challenge lies in achieving accurate segmentation of surgical instruments while
minimizing the need for manual annotation and reducing the time required for
the segmentation process. To tackle this, we propose a novel framework for
surgical instrument segmentation and tracking. Specifically, given manual
segmentation of only a tiny subset of frames, our framework ensures accurate
segmentation across the entire surgical video. Our method adopts a two-stage
approach to efficiently segment videos.
videos. Initially, we utilize the Segment-Anything (SAM) model, which has been
fine-tuned using the Low-Rank Adaptation (LoRA) on the EndoVis17 Dataset. The
fine-tuned SAM model is applied to segment the initial frames of the video
accurately. Subsequently, we deploy the XMem++ tracking algorithm to
propagate the annotated frames, thereby segmenting the entire video
sequence. This workflow enables us to precisely segment and track objects
within the video. Through extensive evaluation of the in-distribution dataset
(EndoVis17) and the out-of-distribution datasets (EndoVis18 & the endoscopic
submucosal dissection surgery (ESD) dataset), our framework demonstrates
exceptional accuracy and robustness, thus showcasing its potential to advance
automated robotic-assisted surgery.
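As context for the LoRA fine-tuning step mentioned above, the sketch below shows the core low-rank update in plain numpy. The dimensions, scaling factor, and initialization are illustrative assumptions, not values from the paper; in practice the adaptation would be applied inside SAM's attention layers, with the base weights frozen and only the low-rank factors trained.

```python
import numpy as np

def lora_forward(x, W, A, B, alpha=16.0):
    """Forward pass through a LoRA-adapted linear layer.

    Computes y = x @ (W + (alpha / r) * A @ B), where W is the frozen
    (d_in, d_out) base weight and A (d_in, r), B (r, d_out) are the
    trainable low-rank factors. Training updates only A and B, i.e.
    r * (d_in + d_out) parameters instead of d_in * d_out.
    """
    r = A.shape[1]
    return x @ W + (alpha / r) * (x @ A) @ B

# Hypothetical shapes for illustration only.
rng = np.random.default_rng(0)
d_in, d_out, r = 64, 64, 4
W = rng.normal(size=(d_in, d_out))            # frozen pretrained weight
A = rng.normal(scale=0.01, size=(d_in, r))    # small random init
B = np.zeros((r, d_out))                      # zero init: adapter starts as a no-op
x = rng.normal(size=(2, d_in))

y = lora_forward(x, W, A, B)
# With B = 0 the adapted layer reproduces the frozen model exactly.
assert np.allclose(y, x @ W)
```

The zero initialization of `B` is the standard LoRA choice: at the start of fine-tuning the adapted model behaves identically to the pretrained one, and the low-rank update grows gradually from there.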