Towards Semantic Consistency: Dirichlet Energy Driven Robust Multi-Modal Entity Alignment
CoRR (2024)
Abstract
In Multi-Modal Knowledge Graphs (MMKGs), Multi-Modal Entity Alignment (MMEA)
is crucial for identifying identical entities across diverse modal attributes.
However, semantic inconsistency, mainly due to missing modal attributes, poses
a significant challenge. Traditional approaches rely on attribute
interpolation, but this often introduces modality noise, distorting the
original semantics. Moreover, the lack of a universal theoretical framework
limits advancements in achieving semantic consistency. This study introduces a
novel approach, DESAlign, which addresses these issues by applying a
theoretical framework based on Dirichlet energy to ensure semantic consistency.
We discover that semantic inconsistency leads to model overfitting to modality
noise, causing performance fluctuations, particularly when modalities are
missing. DESAlign innovatively combats over-smoothing and interpolates absent
semantics using existing modalities. Our approach includes a multi-modal
knowledge graph learning strategy and a propagation technique that employs
existing semantic features to compensate for missing ones, providing explicit
Euler solutions. Comprehensive evaluations across 18 benchmarks, including
monolingual and bilingual scenarios, demonstrate that DESAlign surpasses
existing methods, setting a new standard in performance. Further testing on 42
benchmarks with high rates of missing modalities confirms its robustness,
offering an effective solution to semantic inconsistency in real-world MMKGs.
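The abstract describes propagating existing semantic features along the graph to compensate for missing ones, with explicit Euler solutions that reduce Dirichlet energy. The following is a minimal, hypothetical sketch of that general idea on a toy graph — not DESAlign's actual model: it computes Dirichlet energy via the graph Laplacian and fills missing node features with explicit Euler steps of diffusion while clamping observed features.

```python
import numpy as np

# Hypothetical toy graph: 4 entities on a path 0-1-2-3
# (illustrative only; DESAlign operates on real multi-modal KGs).
A = np.array([[0., 1., 0., 0.],
              [1., 0., 1., 0.],
              [0., 1., 0., 1.],
              [0., 0., 1., 0.]])
L = np.diag(A.sum(axis=1)) - A          # unnormalized graph Laplacian

def dirichlet_energy(x, L):
    """E(x) = x^T L x = sum over edges of (x_i - x_j)^2."""
    return float(x @ L @ x)

# One scalar "modal feature" per entity; entities 1 and 2 lack it.
x = np.array([0., 0., 0., 3.])
observed = np.array([True, False, False, True])
x_obs = x.copy()

tau = 0.2                                # Euler step size (< 2 / lambda_max)
e0 = dirichlet_energy(x, L)
for _ in range(200):
    x = x - tau * (L @ x)                # explicit Euler step of diffusion
    x[observed] = x_obs[observed]        # keep observed features fixed

# The missing features converge toward the harmonic interpolation
# [0, 1, 2, 3], and the Dirichlet energy drops below its initial value.
print(np.round(x, 3), dirichlet_energy(x, L) < e0)
```

Clamping observed entries while diffusing turns the Euler iteration into harmonic interpolation of the missing features, which is the smoothest (lowest-Dirichlet-energy) completion consistent with the observed modalities.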