UoB at SemEval-2021 Task 5: Extending Pre-Trained Language Models to Include Task and Domain-Specific Information for Toxic Span Prediction

SemEval@ACL/IJCNLP (2021)

Abstract
Toxicity is pervasive in social media and poses a major threat to the health of online communities. The recent introduction of pre-trained language models, which have achieved state-of-the-art results in many NLP tasks, has transformed the way we approach natural language processing. However, the inherent nature of pre-training means that such models are unlikely to capture task-specific statistical information or learn domain-specific knowledge. Additionally, most implementations of these models do not employ conditional random fields (CRFs), a method for jointly labelling sequences of tokens. We show that incorporating task- and domain-specific information, together with a CRF output layer, improves model performance on the Toxic Spans Detection task at SemEval-2021, achieving a score within 4 percentage points of the top-performing team.
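A minimal sketch of the architecture the abstract describes, not the authors' released code: a pre-trained transformer encoder with a CRF layer on top for joint token labelling. The encoder name, the two-tag (toxic/non-toxic) scheme, and the use of the transformers and pytorch-crf packages are illustrative assumptions.

import torch
from torch import nn
from transformers import AutoModel, AutoTokenizer
from torchcrf import CRF  # pip install pytorch-crf

class ToxicSpanTagger(nn.Module):
    """Transformer encoder + CRF head for token-level toxic span tagging.

    Illustrative sketch; encoder choice and tag set are assumptions,
    not the paper's exact configuration.
    """

    def __init__(self, encoder_name="bert-base-cased", num_tags=2):
        super().__init__()
        self.encoder = AutoModel.from_pretrained(encoder_name)
        # Per-token emission scores over the tag set.
        self.emission = nn.Linear(self.encoder.config.hidden_size, num_tags)
        # CRF models transitions between adjacent tags, so labels
        # are scored jointly over the whole sequence.
        self.crf = CRF(num_tags, batch_first=True)

    def forward(self, input_ids, attention_mask, tags=None):
        hidden = self.encoder(
            input_ids, attention_mask=attention_mask
        ).last_hidden_state
        emissions = self.emission(hidden)
        mask = attention_mask.bool()
        if tags is not None:
            # Training: negative log-likelihood of the gold tag sequence.
            return -self.crf(emissions, tags, mask=mask, reduction="mean")
        # Inference: Viterbi decoding of the most likely tag sequence.
        return self.crf.decode(emissions, mask=mask)

Usage under the same assumptions:

tokenizer = AutoTokenizer.from_pretrained("bert-base-cased")
model = ToxicSpanTagger()
batch = tokenizer(["example comment"], return_tensors="pt")
with torch.no_grad():
    pred_tags = model(batch["input_ids"], batch["attention_mask"])

At inference time, crf.decode returns one tag sequence per sentence, which would then be mapped back through the tokenizer's offsets to character-level toxic spans as the task requires.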
Keywords
toxic span prediction, language models, pre-trained, domain-specific