
Social Text Classification and Topic Analysis Based on an Improved BertForSequenceClassification Method

2022 2nd International Conference on Computer Science, Electronic Information Engineering and Intelligent Control Technology (CEI), 2022

Abstract
Text classification based on the BERT model has recently attracted much attention from researchers. Many adversarial training methods (e.g., PGM and PGD) also exist to increase model robustness and thereby improve accuracy on the corresponding task. In this paper, we improve the traditional BertForSequenceClassification model and apply FreeLB adversarial training to make it more robust. In contrast to the standard BertForSequenceClassification model, we add three linear layers and two softmax layers after the BERT encoder. We evaluate on datasets for two Twitter text classification tasks, where the improvement from our model is more pronounced. In particular, accuracy improves further and consistently once adversarial training is applied. The experimental results show that the more labels the classification task has, the greater the accuracy gain over the traditional model. This approach not only improves performance on our chosen datasets, but can also be extended to text classification tasks in general.
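To make the described architecture concrete, below is a minimal sketch in PyTorch with the Hugging Face transformers library. The abstract only states that three linear layers and two softmax layers are added after BERT; their exact wiring is not given, so the head shown here (softmax after the first two linear layers, logits from the third), as well as names such as ImprovedBertClassifier, are assumptions for illustration.

```python
# Minimal sketch (not the authors' exact architecture), assuming PyTorch and the
# Hugging Face `transformers` library. The wiring of the three linear layers and
# two softmax layers is an assumption based on the abstract.
import torch.nn as nn
from transformers import BertModel

class ImprovedBertClassifier(nn.Module):
    def __init__(self, num_labels, model_name="bert-base-uncased"):
        super().__init__()
        self.bert = BertModel.from_pretrained(model_name)
        hidden = self.bert.config.hidden_size
        self.fc1 = nn.Linear(hidden, hidden)
        self.fc2 = nn.Linear(hidden, hidden)
        self.fc3 = nn.Linear(hidden, num_labels)
        self.softmax = nn.Softmax(dim=-1)

    def forward(self, input_ids, attention_mask=None):
        out = self.bert(input_ids=input_ids, attention_mask=attention_mask)
        pooled = out.pooler_output            # [batch, hidden] summary of [CLS]
        h = self.softmax(self.fc1(pooled))    # first added linear + softmax
        h = self.softmax(self.fc2(h))         # second added linear + softmax
        return self.fc3(h)                    # third added linear -> logits
```

FreeLB perturbs the word embeddings, takes several gradient-ascent steps on the perturbation while accumulating parameter gradients, and then applies a single optimizer step. The sketch below is a simplified variant of that idea, again an assumption rather than the paper's exact procedure: the perturbation update is scaled by a single global gradient norm instead of FreeLB's per-example projection, and norm constraints and gradient clipping are omitted.

```python
# Simplified sketch of FreeLB-style adversarial training for one batch, using the
# hypothetical ImprovedBertClassifier above; `freelb_step` and its hyperparameters
# are illustrative, not taken from the paper.
import torch
import torch.nn.functional as F

def freelb_step(model, optimizer, input_ids, attention_mask, labels,
                adv_steps=3, adv_lr=1e-1, adv_init_mag=1e-2):
    # Initialise a small random perturbation of the word embeddings.
    with torch.no_grad():
        embeds = model.bert.embeddings.word_embeddings(input_ids)
    delta = torch.zeros_like(embeds).uniform_(-adv_init_mag, adv_init_mag)
    delta.requires_grad_()

    optimizer.zero_grad()
    for _ in range(adv_steps):
        # Recompute embeddings so each backward pass uses a fresh graph.
        embeds = model.bert.embeddings.word_embeddings(input_ids)
        out = model.bert(inputs_embeds=embeds + delta,
                         attention_mask=attention_mask)
        h = model.softmax(model.fc1(out.pooler_output))
        h = model.softmax(model.fc2(h))
        logits = model.fc3(h)
        loss = F.cross_entropy(logits, labels) / adv_steps
        loss.backward()                       # accumulates grads on params and delta
        # Ascent step on the perturbation, then detach for the next iteration.
        grad = delta.grad.detach()
        delta = (delta + adv_lr * grad / (grad.norm() + 1e-8)).detach()
        delta.requires_grad_()
    optimizer.step()                          # one descent step on accumulated grads
```

FreeLB proper also projects the perturbation back into an epsilon-ball and normalises it per example; those details are left out here for brevity.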
Key words
BERT, adversarial training, Twitter, improve