BoW-based neural networks vs. cutting-edge models for single-label text classification

Neural Comput. Appl. (2023)

Cited by 4
Abstract
To classify large, complex datasets reliably and accurately, machine learning models must be continually improved. Although graph neural networks (GNNs) have reignited interest in graph-based text classification, this research proposes straightforward yet competitive neural networks for the task: a convolutional neural network (CNN), an artificial neural network (ANN), and their fine-tuned variants (denoted FT-CNN and FT-ANN). Experiments show that these simple models (CNN, ANN, FT-CNN, and FT-ANN) can outperform more complex GNN-based models such as SGC, SSGC, and TextGCN, and are comparable to others such as HyperGAT and BERT. Fine-tuning is strongly recommended, as it improves both the performance and the reliability of the models. The proposed models are evaluated on five benchmark datasets: Reuters (R8), R52, 20NewsGroup, Ohsumed, and MR. The experimental findings show that, on the majority of the target datasets, these models, especially the fine-tuned ones, perform better than state-of-the-art (SOTA) approaches, including GNN-based models.
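The abstract does not specify the architectures, but the core idea of a BoW-based ANN classifier is a plain feed-forward network over bag-of-words count vectors. The sketch below is a minimal illustration of that idea, assuming a toy two-class corpus, a single hidden layer, and a standard PyTorch training loop; the layer sizes, dropout, optimizer, and training schedule are illustrative assumptions, not the authors' exact setup.

```python
# Minimal sketch of a BoW-based ANN text classifier (assumed architecture;
# the paper's exact configuration and fine-tuning procedure are not given
# in the abstract).
import torch
import torch.nn as nn
from sklearn.feature_extraction.text import CountVectorizer

# Toy corpus standing in for a benchmark such as R8 (hypothetical data).
texts = ["oil prices rise sharply", "team wins championship game",
         "crude exports fall", "player scores winning goal"]
labels = torch.tensor([0, 1, 0, 1])  # 0 = economy, 1 = sports

# Bag-of-words features: one count vector per document.
vectorizer = CountVectorizer()
X = torch.tensor(vectorizer.fit_transform(texts).toarray(),
                 dtype=torch.float32)

class BowANN(nn.Module):
    """Feed-forward classifier over raw BoW counts."""
    def __init__(self, vocab_size, num_classes, hidden=64):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(vocab_size, hidden),
            nn.ReLU(),
            nn.Dropout(0.5),
            nn.Linear(hidden, num_classes),
        )

    def forward(self, x):
        return self.net(x)

model = BowANN(vocab_size=X.shape[1], num_classes=2)
optimizer = torch.optim.Adam(model.parameters(), lr=1e-3)
loss_fn = nn.CrossEntropyLoss()

for epoch in range(50):  # short training loop for illustration
    optimizer.zero_grad()
    loss = loss_fn(model(X), labels)
    loss.backward()
    optimizer.step()

print(model(X).argmax(dim=1))  # predicted class per document
```

On a real benchmark such as R8, the same pipeline would be fit on the training split's vocabulary; how the fine-tuned FT-CNN and FT-ANN variants refine this baseline is left to the full paper.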
Keywords
Data mining, Text classification, Neural networks, Machine learning