Let’s explain crisis: deep multi-scale hierarchical attention framework for crisis-task identification

The Journal of Supercomputing (2024)

Abstract
Emergency services rely heavily on Twitter for early detection of crisis tasks to enhance crisis management systems. However, state-of-the-art models often face data sparsity and are inadequate at handling long-range dependencies between tweet tokens. Additionally, the authorities need to gain confidence in the model's predictions so that the detected task information can be trusted and prioritized. In this study, we present a generalized framework named explainable attentive model for crisis task identification (ExACT) to handle the above-mentioned challenges: it identifies crisis-task-relevant tweets and provides model explainability while using only a very small corpus of tweets. The novelty of ExACT is two-fold: (1) Data enrichment is introduced via non-dynamic contextual attributes derived from tweets to overcome sparsity and improve data quality. (2) Feature enrichment is incorporated through hierarchical attention at both local and global levels, using residual self-attention and correlation attention to capture long-range dependencies. Additionally, a LIME-based explainability approach is added to identify the tokens important to each task. Experiments reveal that ExACT achieves a competitive performance improvement over various state-of-the-art models in terms of F1-score (20% and 14%, respectively) and accuracy (14% and 16%, respectively) across two different crisis tasks: infrastructure damage and support signal identification. Consistent performance improvement on two different tasks drawn from publicly available crisis event datasets demonstrates the model's generalizability. The LIME-supported explainable mechanism in ExACT can identify the important keywords, but it does not guarantee high scores on plausibility and faithfulness metrics.
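The abstract does not give implementation details for the residual self-attention used in the feature-enrichment stage. As an illustration only, the sketch below shows a minimal single-head self-attention layer with a residual (skip) connection in NumPy; the function name, matrix shapes, and single-head setup are assumptions, not the paper's actual architecture.

```python
import numpy as np

def softmax(x, axis=-1):
    # Numerically stable softmax over the given axis.
    e = np.exp(x - x.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

def residual_self_attention(X, Wq, Wk, Wv):
    """Single-head self-attention with a residual connection (illustrative).

    X:  (seq_len, d) token embeddings for one tweet.
    W*: (d, d) learned projection matrices (here: hypothetical placeholders).
    The residual X + attn @ V lets the layer refine, rather than replace,
    the token representations, which helps capture long-range dependencies.
    """
    Q, K, V = X @ Wq, X @ Wk, X @ Wv
    scores = Q @ K.T / np.sqrt(X.shape[1])  # (seq_len, seq_len) similarity
    attn = softmax(scores, axis=-1)          # each row sums to 1
    return X + attn @ V                      # residual connection
```

In a hierarchical setup such as the one the abstract describes, a layer like this could be applied locally (within a tweet) and its outputs pooled for a global attention stage; the exact composition in ExACT is not specified here.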
Keywords
Crisis task, Data enrichment, Feature enrichment, Transformer, Residual attention, Explainability