Basic Information
Bio
My research interests lie at the intersection of computer vision and natural language processing. I believe that, like humans (and other animals), AI systems should have a holistic understanding of the world around them. This means working with multiple sensory modalities, among which vision and language stand out as particularly interesting. On one hand, they are complementary: vision is a low-level perceptual modality, while language is an abstract human construct. On the other hand, they are believed to be two essential modalities for solving AI-complete problems.
I am generally interested in multimodal vision-language generative models, i.e. models capable of generating images and/or text conditioned on multimodal inputs. Generating new content requires learning and composing patterns from existing data, i.e. modeling the underlying data distribution. When this data represents the real world, generative models become effective “world models”. This idea has numerous applications. For example, text-conditioned image generation models can synthesize data on demand for training recognition/representation learning models on new tasks/skills. Furthermore, given the semantic and compositional nature of language, (large) language models can serve as reasoning engines. By aligning language models with vision encoders, we can build powerful multimodal systems capable of both perceiving and reasoning, which can be deployed as multimodal assistants (e.g. to aid visually-impaired users).
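The alignment idea in the last sentence can be sketched numerically: a vision encoder emits patch embeddings, a learned projection maps them into the language model's embedding space, and the projected visual tokens are prepended to the text token embeddings before the sequence is fed to the LM. This is a minimal illustrative sketch; all dimensions and variable names below are assumptions, not those of any specific published model.

```python
import numpy as np

rng = np.random.default_rng(0)

# Illustrative dimensions (assumptions, not from a specific model).
num_patches, d_vision = 16, 64   # vision encoder output: 16 patches, 64-dim each
num_text_tokens, d_lm = 8, 128   # language model embedding dimension

# Stand-ins for a frozen vision encoder's patch features and the LM's
# embeddings of the text prompt.
patch_features = rng.normal(size=(num_patches, d_vision))
text_embeddings = rng.normal(size=(num_text_tokens, d_lm))

# The alignment module: a learned linear projection that maps vision
# features into the LM's embedding space.
W_proj = rng.normal(size=(d_vision, d_lm)) / np.sqrt(d_vision)
visual_tokens = patch_features @ W_proj  # shape (16, 128)

# The multimodal input sequence: visual tokens prepended to the text
# token embeddings, consumed by the language model as one sequence.
multimodal_input = np.concatenate([visual_tokens, text_embeddings], axis=0)
print(multimodal_input.shape)  # (24, 128)
```

In practice the projection (sometimes a small MLP) is trained on image-text pairs while the language model generates text conditioned on the full multimodal sequence.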
Research Interests
Papers (6)
CoRR (2024)
arXiv (2024)
AAAI 2024, no. 5 (2024): 4171-4179
17th Conference of the European Chapter of the Association for Computational Linguistics (EACL 2023), pp. 2523-2548 (2023)
Data Disclaimer
The page data are drawn from open Internet sources, cooperating publishers, and automatic analysis by AI technology. We make no commitments or guarantees regarding the validity, accuracy, correctness, reliability, completeness, or timeliness of the page data. If you have any questions, please contact us by email: report@aminer.cn