An Empirical Evaluation of Word Embedding Models for Subjectivity Analysis Tasks

2021 International Conference on Advances in Electrical, Computing, Communication and Sustainable Technologies (ICAECT)

Abstract
It is a well-established fact that good categorization results depend heavily on the representation technique. Text representation is a prerequisite for any text analysis task, since it sets a baseline that even advanced machine learning models cannot compensate for. This paper comprehensively analyzes and quantitatively evaluates various text representation models for Subjectivity Analysis. We implement a diverse array of models on the Cornell Subjectivity Dataset. Notably, the BERT Language Model yields much better results than any other model, but is significantly more computationally expensive than the other approaches. We obtained state-of-the-art results on the subjectivity task by fine-tuning the BERT Language Model. This opens up new avenues and could lead to a specialized, BERT-inspired model dedicated to subjectivity analysis.
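
To illustrate the fine-tuning approach the abstract describes, the sketch below shows how a BERT sequence classifier could be fine-tuned for binary subjectivity labels with the Hugging Face transformers library. The tiny in-memory dataset, hyperparameters, and "bert-base-uncased" checkpoint are illustrative assumptions, not the authors' configuration or the Cornell Subjectivity Dataset itself.

```python
# Minimal sketch (not the paper's code) of fine-tuning BERT for
# subjective/objective sentence classification, assuming `torch` and
# the Hugging Face `transformers` library are installed.
import torch
from torch.utils.data import DataLoader, TensorDataset
from transformers import AutoTokenizer, AutoModelForSequenceClassification

# Hypothetical toy data standing in for the Cornell Subjectivity Dataset:
# label 1 = subjective, label 0 = objective.
texts = [
    "the movie is a breathtaking triumph of imagination",  # subjective
    "the film was released in cinemas in march 2004",      # objective
]
labels = torch.tensor([1, 0])

tokenizer = AutoTokenizer.from_pretrained("bert-base-uncased")
model = AutoModelForSequenceClassification.from_pretrained(
    "bert-base-uncased", num_labels=2
)

# Tokenize the sentences and wrap them in a DataLoader.
enc = tokenizer(texts, padding=True, truncation=True, max_length=128,
                return_tensors="pt")
dataset = TensorDataset(enc["input_ids"], enc["attention_mask"], labels)
loader = DataLoader(dataset, batch_size=2, shuffle=True)

optimizer = torch.optim.AdamW(model.parameters(), lr=2e-5)

model.train()
for epoch in range(3):  # a few epochs are typical for fine-tuning
    for input_ids, attention_mask, y in loader:
        optimizer.zero_grad()
        out = model(input_ids=input_ids, attention_mask=attention_mask, labels=y)
        out.loss.backward()  # cross-entropy loss from the classification head
        optimizer.step()

# Inference: predict the subjectivity of a new sentence.
model.eval()
with torch.no_grad():
    probe = tokenizer(["an utterly mesmerising performance"], return_tensors="pt")
    pred = model(**probe).logits.argmax(dim=-1).item()
print("subjective" if pred == 1 else "objective")
```

In practice the full dataset, a held-out validation split, and a learning-rate schedule would replace the toy setup above; the structure of the loop is what matters here.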
Key words
Natural language processing, Word Embeddings, Language Modelling, Subjectivity Analysis, Sentiment Analysis