In-Context Matting
CVPR 2024
Abstract
We introduce in-context matting, a novel task setting of image matting. Given
a reference image of a certain foreground and guided priors such as points,
scribbles, and masks, in-context matting enables automatic alpha estimation on
a batch of target images of the same foreground category, without additional
auxiliary input. This setting marries the strong performance of auxiliary-input-based
matting with the ease of use of automatic matting, striking a good trade-off
between customization and automation. To overcome the key challenge of accurate
foreground matching, we introduce IconMatting, an in-context matting model
built upon a pre-trained text-to-image diffusion model. Conditioned on inter-
and intra-similarity matching, IconMatting can make full use of reference
context to generate accurate target alpha mattes. To benchmark the task, we
also introduce a novel testing dataset ICM-57, covering 57 groups of
real-world images. Quantitative and qualitative results on the ICM-57 testing
set show that IconMatting rivals the accuracy of trimap-based matting while
retaining the automation level akin to automatic matting. Code is available at
https://github.com/tiny-smart/in-context-matting
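The abstract describes conditioning on inter-similarity (matching target features against the reference foreground) and intra-similarity (self-similarity within the target) to locate the foreground before alpha estimation. As a rough illustration of that idea, here is a minimal toy sketch using cosine similarity over per-pixel feature vectors; the function names, the max-pooling over reference matches, and the softmax-weighted propagation are all illustrative assumptions, not the actual IconMatting architecture, which operates on diffusion-model features.

```python
import numpy as np

def cosine_sim(a, b):
    # Pairwise cosine similarity between rows of a (M, D) and b (N, D).
    a = a / (np.linalg.norm(a, axis=1, keepdims=True) + 1e-8)
    b = b / (np.linalg.norm(b, axis=1, keepdims=True) + 1e-8)
    return a @ b.T

def in_context_match(ref_feats, ref_mask, tgt_feats, temp=0.1):
    """Toy inter-/intra-similarity matching (illustrative only).

    ref_feats: (M, D) per-pixel features of the reference image
    ref_mask:  (M,) bool guided prior marking the reference foreground
    tgt_feats: (N, D) per-pixel features of the target image
    Returns a (N,) soft foreground score for the target pixels.
    """
    # Inter-similarity: each target pixel's best match to any
    # reference foreground feature.
    inter = cosine_sim(tgt_feats, ref_feats[ref_mask]).max(axis=1)  # (N,)
    # Intra-similarity: propagate those scores through the target's
    # own self-similarity so coherent regions get coherent scores.
    weights = np.exp(cosine_sim(tgt_feats, tgt_feats) / temp)       # (N, N)
    weights /= weights.sum(axis=1, keepdims=True)
    return weights @ inter
```

In this sketch the inter-similarity term transfers the reference prior to the target, and the intra-similarity term smooths the transfer within the target image; the resulting soft map would then feed a matting head that regresses the final alpha matte.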