PFPs: Prompt-guided Flexible Pathological Segmentation for Diverse Potential Outcomes Using Large Vision and Language Models
arXiv (2024)
Abstract
The Vision Foundation Model has recently gained attention in medical image
analysis. Its zero-shot learning capabilities accelerate AI deployment and
enhance the generalizability of clinical applications. However, segmenting
pathological images demands particular flexibility in the choice of
segmentation target. For instance, a single click on a Whole Slide Image (WSI)
could signify a cell, a functional unit, or a tissue layer, adding complexity
to the segmentation task. Current models primarily predict potential outcomes but
lack the flexibility needed for physician input. In this paper, we explore the
potential of enhancing segmentation model flexibility by introducing various
task prompts through a Large Language Model (LLM) alongside traditional task
tokens. Our contributions are four-fold: (1) we construct a computationally
efficient pipeline that uses fine-tuned language prompts to guide flexible
multi-class segmentation; (2) we compare segmentation performance under fixed
prompts against free-text prompts; (3) we design a multi-task kidney pathology
segmentation dataset with corresponding free-text prompts; and (4) we evaluate
our approach on the kidney pathology dataset, assessing its capacity to
generalize to new cases during inference.