Style-Extracting Diffusion Models for Semi-Supervised Histopathology Segmentation
arXiv (2024)

Abstract
Deep learning-based image generation has seen significant advancements with
diffusion models, notably improving the quality of generated images. Despite
these developments, generating images with unseen characteristics beneficial
for downstream tasks has received limited attention. To bridge this gap, we
propose Style-Extracting Diffusion Models, featuring two conditioning
mechanisms. Specifically, we utilize 1) a style conditioning mechanism that
allows injecting the style information of previously unseen images during image
generation and 2) a content conditioning mechanism that can be targeted to a
downstream task, e.g., a layout for segmentation. We introduce a trainable style encoder to
extract style information from images, and an aggregation block that merges
style information from multiple style inputs. This architecture enables the
zero-shot generation of images in unseen styles by leveraging style information
from unseen reference images, resulting in more diverse generations. In this work,
we use the image layout as target condition and first show the capability of
our method on a natural image dataset as a proof-of-concept. We further
demonstrate its versatility in histopathology, where we combine prior knowledge
about tissue composition and unannotated data to create diverse synthetic
images with known layouts. This allows us to generate additional synthetic data
to train a segmentation network in a semi-supervised fashion. We verify the
added value of the generated images by showing improved segmentation results
and lower performance variability between patients when synthetic images are
included during segmentation training. Our code will be made publicly available
at [LINK].
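The two conditioning mechanisms described above can be sketched schematically: a style encoder maps each unseen reference image to a style embedding, an aggregation block merges the embeddings from multiple references, and the resulting style vector is injected alongside the content condition (the layout) during generation. The following is a minimal, hypothetical NumPy sketch of this data flow, not the paper's actual implementation; the function names, the mean-pooling aggregation, and the toy denoising step are all illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)

def style_encode(image, proj):
    # Hypothetical style encoder: global-average-pool the image over its
    # spatial dimensions, then project the pooled channel statistics to a
    # fixed-size style embedding.
    pooled = image.mean(axis=(0, 1))   # (C,) channel statistics
    return proj @ pooled               # (D,) style embedding

def aggregate_styles(embeddings):
    # Aggregation block sketched as simple mean pooling over the embeddings
    # of several style reference images (the paper's block is trainable).
    return np.mean(embeddings, axis=0)

def denoise_step(x_t, layout, style, w_layout=0.5, w_style=0.5):
    # Toy stand-in for one conditioned denoising step: the content condition
    # (layout) and the aggregated style embedding both steer the estimate.
    return x_t - w_layout * (x_t - layout) - w_style * style.mean()

# Three unseen reference images (H, W, C) supply style in zero-shot fashion.
proj = rng.normal(size=(8, 3))
refs = [rng.normal(size=(16, 16, 3)) for _ in range(3)]
style = aggregate_styles([style_encode(r, proj) for r in refs])

layout = np.zeros((16, 16, 3))   # target layout (content condition)
x = rng.normal(size=(16, 16, 3)) # noisy sample
x = denoise_step(x, layout, style)
print(x.shape)                   # (16, 16, 3)
```

Because the style references are only consumed through the encoder and the aggregation block, swapping in embeddings from previously unseen images requires no retraining, which is what enables the zero-shot style transfer described in the abstract.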