Basic Information
Biography
My research focuses on Natural Language Processing, Conversational AI, and Human-Computer Interaction. I'm currently interested in:
Natural Language Interfaces: Although large language models (LLMs) can follow user-initiative instructions very well, they usually do not lead the conversation the way system-initiative models do. How can we design mixed-initiative interactions for LLM-based natural language interfaces that proactively resolve ambiguity, collect preferences, and reason about implications? How can we reduce the prompt engineering effort demanded of nonprofessional users? Finally, when would users prefer textual interactions over graphical interactions? (A minimal sketch of a mixed-initiative loop follows this list.)
Continual Learning: The development of natural language interfaces is never a one-time effort. After the initial deployment, how can we teach models to solve new problems via trial-and-error and learn from their mistakes the way humans do? How can we adapt models to novel language uses, especially those not covered in the training corpus? (A sketch of such a trial-and-error loop appears after this list.)
Reliability and Explainability: While LLMs are powerful, they still make silly mistakes here and there. How can we enable models to better calibrate their predictions and explain them in natural language? How can we prevent LLMs from giving bad responses (e.g., misinformation, hallucination, social bias)? (A sketch of one standard calibration metric closes the examples below.)
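To make the first interest concrete, here is a minimal sketch of a mixed-initiative loop in which the system decides whether to answer directly or take the initiative and ask a clarifying question first. The `llm` callable (prompt string in, response string out) and the prompt wordings are illustrative assumptions, not a description of any published system.

```python
# A minimal sketch of a mixed-initiative interaction loop. `llm` is a
# hypothetical prompt-to-text callable (e.g., any chat-completion API
# wrapped as a function); everything here is an assumption for
# illustration, not the profiled author's actual method.

from dataclasses import dataclass, field


@dataclass
class Dialogue:
    """Running transcript of a user-system conversation."""
    turns: list[str] = field(default_factory=list)

    def add(self, speaker: str, text: str) -> None:
        self.turns.append(f"{speaker}: {text}")

    def render(self) -> str:
        return "\n".join(self.turns)


def respond(llm, dialogue: Dialogue, user_request: str) -> str:
    """Answer directly, or take initiative and ask a clarifying question."""
    dialogue.add("User", user_request)

    # Step 1: let the model judge whether the request is ambiguous.
    verdict = llm(
        "Does the last user request leave any required detail unspecified? "
        "Reply AMBIGUOUS or CLEAR.\n\n" + dialogue.render()
    )

    if verdict.strip().upper().startswith("AMBIGUOUS"):
        # Step 2a: system initiative -- elicit the missing preference
        # instead of silently guessing.
        question = llm(
            "Ask one short clarifying question that would resolve the "
            "ambiguity in the last user request.\n\n" + dialogue.render()
        )
        dialogue.add("System", question)
        return question

    # Step 2b: user initiative -- the request is specific enough to answer.
    answer = llm("Answer the last user request.\n\n" + dialogue.render())
    dialogue.add("System", answer)
    return answer
```

Calling `respond` repeatedly with each new user utterance yields a conversation where initiative shifts to the system exactly when ambiguity is detected, which is the core of the mixed-initiative design question posed above.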
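The continual-learning interest can likewise be sketched as a trial-and-error loop that executes a candidate program, feeds failures back into the next attempt, and distills reusable "lessons" into a growing memory. Here `llm` and `run` are hypothetical stand-ins (a text-generation call and a sandboxed executor returning an error message or None); this is one plausible reading of learning from mistakes after deployment, not the author's actual approach.

```python
# A minimal sketch of post-deployment trial-and-error learning,
# assuming the interface emits executable programs (e.g., SQL or
# Python) whose failures can be observed. `llm` and `run` are
# hypothetical; the "lessons" list acts as a simple persistent memory.

def solve_with_retries(llm, run, task: str, lessons: list[str],
                       max_attempts: int = 3) -> str | None:
    """Generate, execute, and repair a program, banking lessons learned."""
    feedback = ""
    for _ in range(max_attempts):
        prompt = (
            f"Task: {task}\n"
            "Known pitfalls from past mistakes:\n"
            + "\n".join(f"- {note}" for note in lessons)
            + feedback
            + "\nWrite a program that solves the task."
        )
        candidate = llm(prompt)
        error = run(candidate)
        if error is None:
            return candidate  # success: no new lesson needed

        # Trial-and-error: feed the failure back into the next attempt...
        feedback = f"\nPrevious attempt failed with: {error}"
        # ...and distill a reusable lesson so future tasks avoid it.
        lessons.append(llm(
            f"State in one sentence what to avoid, given this error: {error}"
        ))
    return None  # give up after max_attempts
```

Because `lessons` persists across calls, mistakes made on one task can steer generation on later, unrelated tasks, which is the sense in which the loop "learns" without retraining.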
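For the reliability interest, calibration has a standard, concrete measurement: the expected calibration error (ECE), which bins predictions by stated confidence and averages the gap between each bin's confidence and its accuracy. The metric below is textbook; the toy data at the bottom is invented purely for illustration.

```python
# Expected calibration error (ECE): bin predictions by confidence and
# compare each bin's average confidence to its empirical accuracy.
# A well-calibrated model has ECE near zero.

def expected_calibration_error(confidences: list[float],
                               correct: list[bool],
                               n_bins: int = 10) -> float:
    """ECE = sum over bins of |accuracy - confidence|, weighted by bin size."""
    bins = [[] for _ in range(n_bins)]
    for conf, ok in zip(confidences, correct):
        idx = min(int(conf * n_bins), n_bins - 1)  # clamp conf == 1.0
        bins[idx].append((conf, ok))

    total = len(confidences)
    ece = 0.0
    for bucket in bins:
        if not bucket:
            continue
        avg_conf = sum(c for c, _ in bucket) / len(bucket)
        accuracy = sum(ok for _, ok in bucket) / len(bucket)
        ece += (len(bucket) / total) * abs(accuracy - avg_conf)
    return ece


if __name__ == "__main__":
    # A toy overconfident model: high stated confidence, mediocre accuracy.
    confs = [0.95, 0.9, 0.9, 0.85, 0.8, 0.8]
    hits = [True, False, True, False, True, False]
    print(f"ECE = {expected_calibration_error(confs, hits):.3f}")
```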
Publications (5 in total; two listed below)
Conference on Empirical Methods in Natural Language Processing (2023): 11730-11743. Cited by 1. (EI)
CoRR (2023): 5376-5393. Cited by 2. (EI)