What's Fair is Fair: Detecting and Mitigating Encoded Bias in Multimodal Models of Museum Visitor Attention

ICMI-MLMI (2021)

Abstract
Recent years have seen growing interest in modeling visitor engagement in museums with multimodal learning analytics. In parallel, there has also been growing concern about issues of fairness and encoded bias in machine learning models. In this paper, we investigate bias detection and mitigation techniques to address issues of algorithmic fairness in multimodal models of museum visitor visual attention. We employ slicing analysis using the Absolute Between-ROC Area (ABROCA) statistic to detect encoded bias present in multimodal models of visitor visual attention trained with facial expression and posture data from visitor interactions with a game-based museum exhibit about environmental sustainability. We investigate instances of gender bias that arise between different combinations of modalities across several machine learning techniques. We also measure the effectiveness of two different debiasing strategies—learned fair representations and reweighing—when applied to the trained multimodal visitor attention models. Results indicate that patterns of bias can arise across different modality combinations for the different visitor visual attention models, and there is often an inherent tradeoff between predictive accuracy and ABROCA. Analyses suggest that debiasing strategies tend to be more effective on multimodal models of visitor visual attention than their unimodal counterparts.
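For readers unfamiliar with the ABROCA statistic used in the slicing analysis, the sketch below shows one way it is commonly computed: the absolute area between the ROC curves of two demographic slices of a model's predictions. This is a minimal illustration, not code from the paper; the function name, arguments, and grid size are assumptions for the example.

```python
# Minimal sketch of an ABROCA computation, assuming binary ground-truth labels,
# continuous model scores, and a binary group attribute (e.g., gender).
# Names and parameters here are illustrative, not taken from the paper.
import numpy as np
from sklearn.metrics import roc_curve


def abroca(y_true, y_score, group, grid_size=10_000):
    """Absolute Between-ROC Area between two demographic slices.

    y_true  : array of 0/1 ground-truth labels
    y_score : array of predicted probabilities from the attention model
    group   : boolean array, True for one slice and False for the other
    """
    # ROC curve for each group slice
    fpr_a, tpr_a, _ = roc_curve(y_true[group], y_score[group])
    fpr_b, tpr_b, _ = roc_curve(y_true[~group], y_score[~group])

    # Interpolate both curves onto a shared false-positive-rate grid
    fpr_grid = np.linspace(0.0, 1.0, grid_size)
    tpr_a_i = np.interp(fpr_grid, fpr_a, tpr_a)
    tpr_b_i = np.interp(fpr_grid, fpr_b, tpr_b)

    # Integrate the absolute gap between the two ROC curves
    return np.trapz(np.abs(tpr_a_i - tpr_b_i), fpr_grid)


# Hypothetical usage: slicing a trained visitor-attention model by gender.
# score = abroca(y_test, model.predict_proba(X_test)[:, 1], group=is_female)
```

An ABROCA of 0 means the two slices' ROC curves coincide; larger values indicate greater divergence in model performance between groups, which is the sense in which the abstract reports a tradeoff between predictive accuracy and ABROCA.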