LogicalDefender: Discovering, Extracting, and Utilizing Common-Sense Knowledge
CoRR (2024)
Abstract
Large text-to-image models have achieved astonishing performance in
synthesizing diverse, high-quality images guided by text. With
detail-oriented conditioning, even finer-grained spatial control can be
achieved. However, some generated images still appear unreasonable, even with
plentiful object features and a harmonious style. In this paper, we delve into
the underlying causes and find that deep-level logical information, serving as
common-sense knowledge, plays a significant role in understanding and
processing images. Nonetheless, almost all models have neglected the importance
of logical relations in images, resulting in poor performance in this aspect.
Following this observation, we propose LogicalDefender, which combines images
with the logical knowledge already summarized by humans in text. This
encourages models to learn logical knowledge faster and better, and,
concurrently, extracts widely applicable logical knowledge from both images
and human knowledge. Experiments show that our model achieves better
logical performance, and the extracted logical knowledge can be effectively
applied to other scenarios.