DBATES: Dataset for Discerning Benefits of Audio, Textual, and Facial Expression Features in Competitive Debate Speeches

IEEE Transactions on Affective Computing (2023)

Abstract
In this article, we present a database of multimodal communication features extracted from debate speeches given at the 2019 North American Universities Debate Championships (NAUDC). Feature sets were extracted from the visual (facial expression, gaze, and head pose), audio (PRAAT), and textual (word sentiment and linguistic category) modalities of raw video recordings of competitive collegiate debaters (N = 716 six-minute recordings from 140 unique debaters). Each speech has an associated competition debate score (range: 67-96) assigned by experienced judges, as well as competitor demographic and per-round reflection surveys. We observe that the fully multimodal model performs best in comparison to models trained on various combinations of the individual modalities. We also find that the weights of some features (such as the expression of joy and the use of the word "we") change direction between the aforementioned models. We use these results to highlight the value of a multimodal dataset for studying competitive, collegiate debate.
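As a rough illustration of the modality-comparison experiment described in the abstract, the sketch below fits a simple regressor on each modality subset and compares cross-validated fit. This is not the authors' code or the published dataset: the feature matrices, their dimensions, and the ridge-regression baseline are all placeholder assumptions chosen only to show the shape of such a comparison.

```python
import numpy as np
from sklearn.linear_model import Ridge
from sklearn.model_selection import cross_val_score

# Hypothetical stand-ins for the DBATES feature matrices; the real dataset
# provides per-speech visual, audio, and textual features plus judge scores.
rng = np.random.default_rng(0)
n_speeches = 716
X_visual = rng.normal(size=(n_speeches, 40))  # e.g., facial expression, gaze, head pose
X_audio = rng.normal(size=(n_speeches, 30))   # e.g., PRAAT-style acoustic features
X_text = rng.normal(size=(n_speeches, 90))    # e.g., sentiment / linguistic categories
y_scores = rng.uniform(67, 96, size=n_speeches)  # judge scores, range 67-96

modality_sets = {
    "visual": [X_visual],
    "audio": [X_audio],
    "text": [X_text],
    "audio+text": [X_audio, X_text],
    "full multimodal": [X_visual, X_audio, X_text],
}

for name, mats in modality_sets.items():
    X = np.hstack(mats)  # concatenate the chosen modalities feature-wise
    # 5-fold cross-validated R^2 as a simple comparison metric
    r2 = cross_val_score(Ridge(alpha=1.0), X, y_scores, cv=5, scoring="r2").mean()
    print(f"{name:>16}: mean CV R^2 = {r2:.3f}")
```

Inspecting the fitted coefficients of each per-subset model is one way to observe the kind of direction changes the abstract reports for features such as joy and the word "we".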
Keywords
Government,Feature extraction,Visualization,Irrigation,Video recording,Cameras,Annotations