
Con-S2V: A Generic Framework for Incorporating Extra-Sentential Context into Sen2Vec.

Lecture Notes in Artificial Intelligence (2017)

Abstract
We present a novel approach to learning distributed representations of sentences from unlabeled data by modeling both the content and the context of a sentence. The content model learns a sentence representation by predicting its words. The context model, on the other hand, comprises a neighbor prediction component and a regularizer to model the distributional and proximity hypotheses, respectively. We propose an online algorithm to train the model components jointly. We evaluate the models in a setup where contextual information is available. The experimental results on tasks involving classification, clustering, and ranking of sentences show that our model outperforms the best existing models by a wide margin across multiple datasets. Code related to this chapter is available at: https://github.com/tksaha/con-s2v/tree/jointlearning. Data related to this chapter are available at: https://www.dropbox.com/sh/ruhsi3c0unn0nko/AAAgVnZpojvXx9loQ21WP_MYa?dl=0
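To make the joint objective described above concrete, the sketch below trains toy sentence vectors with three update terms per sentence: a content term that predicts the sentence's own words, a neighbor-prediction term over adjacent sentences (distributional hypothesis), and a proximity regularizer that pulls a sentence's vector toward its neighbors. This is a minimal illustration, not the authors' released implementation (that is in the linked repository); the toy corpus, neighbor graph, hyperparameters, and sigmoid negative-sampling updates are assumptions made for the example.

```python
# Minimal sketch of a Con-S2V-style joint objective (illustrative, not the authors' code).
import numpy as np

rng = np.random.default_rng(0)

# Toy corpus: each sentence is a list of word ids; neighbors[i] lists the
# indices of sentences adjacent to sentence i (its extra-sentential context).
sentences = [[0, 1, 2], [1, 2, 3], [3, 4], [4, 5, 0]]
neighbors = {0: [1], 1: [0, 2], 2: [1, 3], 3: [2]}

V, dim = 6, 16        # vocabulary size, embedding dimension (assumed)
lr, lam = 0.05, 0.3   # learning rate, regularizer weight (assumed)
epochs, neg = 50, 2   # training epochs, negative samples per positive (assumed)

S = rng.normal(scale=0.1, size=(len(sentences), dim))  # sentence vectors (learned)
W = rng.normal(scale=0.1, size=(V, dim))               # output word vectors
C = rng.normal(scale=0.1, size=(len(sentences), dim))  # output sentence vectors

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def ns_update(v, out_table, target, negatives, lr):
    """Negative-sampling update: raise score(v, target row), lower scores of
    negative rows; updates out_table in place and returns the step for v."""
    grad_v = np.zeros_like(v)
    for idx, label in [(target, 1.0)] + [(n, 0.0) for n in negatives]:
        g = (label - sigmoid(v @ out_table[idx])) * lr
        grad_v += g * out_table[idx]
        out_table[idx] += g * v
    return grad_v

for _ in range(epochs):
    for i, words in enumerate(sentences):
        # Content component: predict each word of the sentence from its vector.
        for w in words:
            negs = [n for n in rng.integers(0, V, size=neg) if n != w]
            S[i] += ns_update(S[i], W, w, negs, lr)
        # Context component (neighbor prediction): predict adjacent sentences.
        for j in neighbors.get(i, []):
            negs = [n for n in rng.integers(0, len(sentences), size=neg) if n != j]
            S[i] += ns_update(S[i], C, j, negs, lr)
        # Proximity regularizer: pull the vector toward its neighbors' vectors.
        for j in neighbors.get(i, []):
            S[i] -= lr * lam * (S[i] - S[j])

print("sentence embeddings shape:", S.shape)
```

All three terms are applied within one pass over each sentence, which mirrors the online, jointly trained setup the abstract describes; in practice the relative weights of the content, neighbor-prediction, and regularization terms would be tuned rather than fixed as above.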
Keywords
Sen2Vec, Extra-sentential context, Embedding of sentences