Basic Information
Career History
Biography
My research broadly explores how language can be used to structure visual perception. I work on machine learning approaches that enable tight coupling between how people express themselves in language and how machine behavior is specified. A central thread in my research is understanding how machine learning systems inherit human bias. My lab currently explores two main research themes around expanding the abilities of artificial intelligence systems:
Natural language as a scaffold for visual intelligence
Natural language is an effective human tool for communicating important world knowledge. This knowledge can be extracted and used to create explicit priors for how visual recognition systems should behave. Such systems can be more data-efficient and interpretable, and can capture a wider range of human abilities.
Understanding the role of human biases in machine learning
Machine learning systems depend on human specification through explicit annotation, collected data, and model design. At every stage of this process, people may unknowingly bias systems and make them brittle. Such systems may fail to generalize under distribution shift, or make gender-biased predictions when they are uncertain. It is important to characterize and control how human biases are transferred to machine learning systems.
Research Interests
Publications (45 total)
- CoRR (2024)
- CoRR (2024)
- CoRR (2024)
- CVPR 2023 (2023): 19187-19197