Demonstrating AHA: Boosting Unmodified AI's Robustness by Proactively Inducing Favorable Human Sensing Conditions

UbiComp/ISWC '23 Adjunct: Adjunct Proceedings of the 2023 ACM International Joint Conference on Pervasive and Ubiquitous Computing & the 2023 ACM International Symposium on Wearable Computing (2023)

Abstract
Imagine a near-future smart home. Home-embedded visual AI sensors continuously monitor the resident, inferring her activities and internal states to enable higher-level services. Because these embedded sensors passively monitor a freely behaving person, good inferences happen inconsistently. The inferences' confidence depends heavily on how congruent her momentary conditions are with the conditions favored by the AI models, e.g., front-facing or unobstructed. We envision new strategies of AI-to-Human Actuation (AHA) that boost the sensory AI's robustness by inducing favorable conditions from the person through proactive actuations. In this demo, we show how the inference quality of the AI model changes relative to the person's conditions and introduce possible actuations, used in our full-paper experiments, that could drive more favorable conditions for visual AIs.