On the effectiveness of small, discriminatively pre-trained language representation models for biomedical text mining

SDP@EMNLP (2020)

Abstract
Neural language representation models such as BERT [] have recently shown state-of-the-art performance on downstream NLP tasks, and a biomedical domain adaptation of BERT (Bio-BERT []) has shown similar gains on biomedical text mining tasks. However, the large size of these models and the resulting computational cost make their practical application challenging, so smaller models with comparable performance are desirable for real-world applications. Recently, a new transformer-based language representation model named ELECTRA [] was introduced; it makes efficient use of training data in a generative-discriminative setting and shows performance gains over BERT that are especially pronounced for smaller models. Here, we introduce a small ELECTRA-based model named Bio-ELECTRA that is eight times smaller than BERT Base and achieves comparable performance on biomedical question answering and yes/no question answer classification tasks. The model is pre-trained from scratch on PubMed abstracts using a consumer-grade GPU with only 8 GB of memory. For biomedical named entity recognition, however, the larger BERT Base model outperforms both Bio-ELECTRA and ELECTRA-Small++.
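The abstract describes applying a small ELECTRA discriminator to biomedical question answering. The following is a minimal, hypothetical sketch using the Hugging Face transformers library to illustrate that setup, not the authors' code: the checkpoint name is the publicly available generic ELECTRA-Small discriminator rather than the Bio-ELECTRA weights described in the paper, and the span-extraction head is untrained until fine-tuned on a biomedical QA dataset.

```python
from transformers import AutoTokenizer, ElectraForQuestionAnswering
import torch

# Public generic ELECTRA-Small discriminator checkpoint (an assumption here;
# the paper's Bio-ELECTRA weights, pre-trained on PubMed abstracts, would be
# substituted if available).
checkpoint = "google/electra-small-discriminator"

tokenizer = AutoTokenizer.from_pretrained(checkpoint)
# The question-answering head is randomly initialized at this point and would
# need fine-tuning on a biomedical QA dataset (e.g., BioASQ) before use.
model = ElectraForQuestionAnswering.from_pretrained(checkpoint)

question = "Which gene is mutated in cystic fibrosis?"
context = "Cystic fibrosis is caused by mutations in the CFTR gene."

inputs = tokenizer(question, context, return_tensors="pt")
with torch.no_grad():
    outputs = model(**inputs)

# Decode the highest-scoring start/end token positions into an answer span.
start = int(outputs.start_logits.argmax())
end = int(outputs.end_logits.argmax())
print(tokenizer.decode(inputs["input_ids"][0][start : end + 1]))
```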
Keywords
biomedical text mining,language representation,deep learning