SafeCampus: Multimodal-Based Campus-Wide Pandemic Forecasting

IEEE Internet Computing (2022)

Abstract
The motivation of this work is to build a multimodal COVID-19 pandemic forecasting platform for a large-scale academic institution, in order to minimize the impact of COVID-19 after academic activities resume. The multimodal design is driven by video, audio, and tweets. Before conducting COVID-19 prediction, we first trained diverse models, including traditional machine learning models (e.g., Naive Bayes and support vector machine classifiers over TF-IDF features) and deep learning models [e.g., long short-term memory (LSTM), MobileNetV2, and SSD], to extract meaningful information from video, audio, and tweets by 1) detecting and counting face masks, 2) detecting and counting coughs to flag potentially infected cases, and 3) conducting sentiment analysis on COVID-19-related tweets. Finally, we fed the multimodal analysis results, together with daily confirmed-case data and social distancing metrics, into the LSTM model to predict the daily increase rate of confirmed cases for the following week. Important observations with supporting evidence are presented.
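The final forecasting step described above can be sketched as an LSTM consuming a per-day feature vector and emitting a predicted increase rate. The code below is a minimal, illustrative NumPy sketch, not the authors' implementation: the feature layout (mask counts, cough counts, tweet sentiment, confirmed cases, social distancing index), the hidden size, and the untrained random weights are all assumptions made for demonstration.

```python
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

class LSTMCell:
    """Minimal LSTM cell (forward pass only), for illustration."""
    def __init__(self, input_dim, hidden_dim, seed=0):
        rng = np.random.default_rng(seed)
        z = input_dim + hidden_dim
        # One weight matrix per gate: input (i), forget (f), output (o), candidate (g)
        self.W = {g: rng.normal(0.0, 0.1, (hidden_dim, z)) for g in "ifog"}
        self.b = {g: np.zeros(hidden_dim) for g in "ifog"}

    def step(self, x, h, c):
        z = np.concatenate([h, x])
        i = sigmoid(self.W["i"] @ z + self.b["i"])
        f = sigmoid(self.W["f"] @ z + self.b["f"])
        o = sigmoid(self.W["o"] @ z + self.b["o"])
        g = np.tanh(self.W["g"] @ z + self.b["g"])
        c = f * c + i * g          # updated cell state
        h = o * np.tanh(c)         # updated hidden state
        return h, c

# Hypothetical normalized daily feature vectors, one row per day:
# [masks_detected, coughs_detected, tweet_sentiment,
#  confirmed_cases, social_distancing_index]
days = np.array([
    [0.8, 0.1, -0.2, 0.05, 0.9],
    [0.7, 0.2, -0.4, 0.08, 0.8],
    [0.6, 0.3, -0.5, 0.12, 0.7],
])

cell = LSTMCell(input_dim=5, hidden_dim=8)
h, c = np.zeros(8), np.zeros(8)
for x in days:
    h, c = cell.step(x, h, c)

# Untrained linear head mapping the final hidden state to a scalar
# predicted daily increase rate (placeholder weights).
W_out = np.random.default_rng(1).normal(0.0, 0.1, (1, 8))
pred = float(W_out @ h)
```

In a trained system, the weights would be fit on historical sequences of these features against the observed next-week increase rate; here the forward pass only illustrates how the heterogeneous signals are fused into one temporal model.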
Keywords
COVID-19, Pervasive computing, Multimodal data, Predictive analytics