Representation learning for very short texts using weighted word embedding aggregation.
Pattern Recognition Letters (2016)
Highlights

- We create text representations by weighting word embeddings using idf information.
- A novel median-based loss is designed to mitigate the negative effect of outliers.
- A dataset of semantically related textual pairs is built from Wikipedia and Twitter.
- Our method outperforms all word embedding baselines in a semantic similarity task.
- Our method works out of the box and thus requires no retraining in different contexts.

Abstract

Short text messages such as tweets are very noisy and sparse in their use of vocabulary. Traditional textual representations, such as tf-idf, have difficulty grasping the semantic meaning of such texts, which is important in applications such as event detection, opinion mining, and news recommendation. We constructed a method based on semantic word embeddings and frequency information to arrive at low-dimensional representations for short texts designed to capture semantic similarity. For this purpose we designed a weight-based model and a learning procedure based on a novel median-based loss function. This paper discusses the details of our model and the optimization methods, together with experimental results on both Wikipedia and Twitter data. We find that our method outperforms the baseline approaches in the experiments, and that it generalizes well across different word embeddings without retraining. Our method is therefore capable of retaining most of the semantic information in the text, and is applicable out of the box.
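The core idea of the abstract, aggregating word embeddings into a single short-text vector with idf-based weights, can be sketched as follows. This is a minimal illustration under stated assumptions, not the paper's implementation: the toy corpus, the 2-D embeddings, and the smoothed idf formula are all made up for the example, whereas the paper uses pretrained embeddings and learned weights on large Wikipedia and Twitter datasets.

```python
import math
from collections import Counter

# Toy corpus and 2-D word embeddings (assumptions for illustration only).
corpus = [
    ["storm", "hits", "the", "coast"],
    ["the", "team", "wins", "the", "match"],
    ["storm", "warning", "for", "the", "coast"],
]
embeddings = {
    "storm": [0.9, 0.1], "coast": [0.8, 0.2], "warning": [0.7, 0.3],
    "hits": [0.5, 0.5], "team": [0.1, 0.9], "wins": [0.2, 0.8],
    "match": [0.1, 0.8], "the": [0.4, 0.4], "for": [0.5, 0.4],
}

def idf_weights(docs):
    """Smoothed inverse document frequency for every word in the corpus."""
    n = len(docs)
    df = Counter(w for doc in docs for w in set(doc))
    return {w: math.log(n / df[w]) + 1.0 for w in df}

def embed_text(tokens, emb, idf):
    """Idf-weighted average of the word vectors of a short text."""
    dim = len(next(iter(emb.values())))
    total, weight = [0.0] * dim, 0.0
    for w in tokens:
        if w in emb:
            wgt = idf.get(w, 1.0)          # rare words pull harder on the mean
            total = [t + wgt * x for t, x in zip(total, emb[w])]
            weight += wgt
    return [t / weight for t in total] if weight else total

def cosine(a, b):
    dot = sum(x * y for x, y in zip(a, b))
    return dot / (math.sqrt(sum(x * x for x in a)) * math.sqrt(sum(x * x for x in b)))

idf = idf_weights(corpus)
v_storm1 = embed_text(corpus[0], embeddings, idf)   # "storm hits the coast"
v_storm2 = embed_text(corpus[2], embeddings, idf)   # related storm text
v_sports = embed_text(corpus[1], embeddings, idf)   # unrelated sports text
```

With these toy values, the two storm-related texts end up closer in cosine similarity than the storm text and the sports text, which is the behavior the weighted aggregation is meant to produce.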
Keywords
Information storage and retrieval, Natural language processing, Artificial intelligence, Word embeddings, Representation learning