
An Introduction of Deep Learning Based Word Representation Applied to Natural Language Processing

2019 International Conference on Machine Learning, Big Data and Business Intelligence (MLBDBI)(2019)

Cited by 3 | Views 373
Abstract
In the area of Natural Language Processing, high-quality word representations play a key role in neural language processing tasks. Recently, a variety of model designs and methods have blossomed in the domain of word representation. In this paper, we explain the theories behind the two major types of language models, autoencoding (AE) and autoregressive (AR), and illustrate the architectures of several notable AE and AR models, including ELMo, GPT, BERT, and XLNet. By comparing the theoretical pros and cons of these models as well as their experimental performance, we deepen the understanding of the various learning methods and trace the trend in the development of language models.
Keywords
Natural Language Processing, Word Representation, Feature Extraction, Transformer, Attention