Basic Information
Bio
I am a researcher at JD AI Research working on natural language processing and machine learning. If you are interested in exploring research opportunities with us (internship or full-time), don’t hesitate to reach out!
Before joining JD, I obtained my Ph.D. in Computer Science at Stanford University advised by Prof. Chris Manning, where I was a member of the natural language processing group.
My research goal is to build explainable machine learning systems to help us solve problems efficiently using textual knowledge. I believe that AI systems should be able to explain their computational decisions in a human-understandable manner, so as to build trust in their application to real-world problems. To this end, I have recently been working on natural language processing (NLP) techniques that help us answer complex questions from textual knowledge through explainable multi-step reasoning, as well as models that reason pragmatically about the knowledge of their interlocutors for efficient communication in dialogues, among others.
Outside of NLP research, I am broadly interested in presenting data in a more understandable manner, making technology appear less boring (to students, for example), and processing data with more efficient computation. I have also worked on speech recognition and computer vision previously.
When I procrastinate in my research life, I write code for Stanza, a natural language processing toolkit that’s available for a few dozen (human) languages, written in Python.
Papers (50 total)
- Rujun Han, Yuhao Zhang, Peng Qi, Yumo Xu, Jenyuan Wang, Lan Liu, William Yang Wang, Bonan Min, Vittorio Castelli. arXiv (2024)
- CoRR (2023): 13300-13310
- arXiv (Cornell University) (2023)
- Proceedings of the 60th Annual Meeting of the Association for Computational Linguistics (ACL 2022), Vol. 1: Long Papers (2022): 8017-8026
- CoRR (2022)
Data Disclaimer
The data on this page come from open Internet sources, cooperative publishers, and automatic analysis by AI technology. We make no commitments or guarantees regarding the validity, accuracy, correctness, reliability, completeness, or timeliness of the data. If you have any questions, please contact us by email: report@aminer.cn