
Face information used to classify identity depends on emotional expression and vice-versa

Emily T. Martin, Jason Hays, Fabián A. Soto

Journal of Vision (2023)

Abstract
Every day we categorize new faces according to dimensions such as identity and emotional expression, using specific face information that can be summarized in what is known as a template. Empirically recovering these templates grants us a richer understanding of the perceptual representation of visual stimuli. Using reverse correlation, a psychophysical technique that estimates these templates from participants’ decisions when presented with noisy stimuli, we identified face features significant in the perception of identity and expression. More importantly, we also assessed invariance at the level of these templates (i.e., template separability); that is, whether the face information used to identify levels of one dimension (e.g., identity) does not vary with changes in the other dimension (e.g., expression). Previous studies have superimposed noise on pixel luminance, which constrains interpretation to the pixel space rather than face space. Alternatively, we used a three-dimensional face modeling toolbox (FaReT) that allows for manipulation and recovery of significant face shape features rather than image pixels. Our new approach allows us to directly visualize interactions between identity and expression by rendering face models that highlight how face features are sampled differently with changes in an irrelevant dimension. Permutation tests found significant violations of template separability for identity and expression across all groups, suggesting a strong interaction between dimensions at the level of the face information sampled for recognition.
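The abstract's core technique, reverse correlation, estimates a perceptual template by relating the random noise shown on each trial to the observer's classification responses. A minimal sketch of the classic classification-image version (pixel-noise rather than the paper's 3D shape-feature space; the simulated observer and its template are hypothetical illustrations, not the authors' data):

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical "template": a weight vector over stimulus features
# (here, the 64 pixels of a tiny 8x8 image patch).
n_pixels = 64
true_template = np.zeros(n_pixels)
true_template[20:28] = 1.0  # the simulated observer relies only on these pixels

# Simulated observer: responds "yes" when the noise image correlates
# positively with its internal template.
n_trials = 20_000
noise = rng.standard_normal((n_trials, n_pixels))
responses = noise @ true_template > 0

# Classification image: mean noise on "yes" trials minus mean noise on
# "no" trials. Pixels the observer actually used emerge with large weights.
classification_image = (noise[responses].mean(axis=0)
                        - noise[~responses].mean(axis=0))

inside = classification_image[20:28].mean()
outside = np.delete(classification_image, np.arange(20, 28)).mean()
print(inside > outside)  # recovered template highlights the pixels used
```

Template separability would then be assessed by estimating such a template for one dimension (e.g., identity) separately at each level of the other dimension (e.g., expression) and testing whether the recovered templates differ, as the paper does with permutation tests in a face-shape parameter space.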
Keywords
emotional expression, identity, face, information, vice-versa