
On the effectiveness of small, discriminatively pre-trained language representation models for biomedical text mining

SDP@EMNLP (2020)

Abstract
Neural language representation models such as BERT [] have recently shown state-of-the-art performance on downstream NLP tasks, and the biomedical domain adaptation of BERT (Bio-BERT []) has shown the same behavior on biomedical text mining tasks. However, due to their large model size and the resulting computational cost, practical application of models such as BERT is challenging, making smaller models with comparable performance desirable for real-world applications. Recently, a new transformer-based language representation model named ELECTRA [] was introduced; it makes efficient use of training data in a generative-discriminative neural model setting and shows performance gains over BERT. These gains are especially impressive for smaller models. Here, we introduce a small ELECTRA-based model named Bio-ELECTRA that is eight times smaller than BERT-Base and achieves comparable performance on biomedical question answering and yes/no question answer classification tasks. The model is pre-trained from scratch on PubMed abstracts using a consumer-grade GPU with only 8 GB of memory. For biomedical named entity recognition, however, the larger BERT-Base model outperforms both Bio-ELECTRA and ELECTRA-Small++.
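
The "generative-discriminative neural model setting" the abstract refers to is ELECTRA's replaced-token-detection objective. The sketch below is a minimal, illustrative PyTorch rendering of that objective, not code from this paper: a small generator (a masked language model) proposes replacements for masked tokens, and the discriminator (the model that is actually kept, as Bio-ELECTRA is) learns to tell original tokens from replaced ones. The names TinyTransformer and electra_step are hypothetical, and the toy model sizes are placeholders.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class TinyTransformer(nn.Module):
    """Toy transformer encoder standing in for ELECTRA's generator or discriminator."""
    def __init__(self, vocab_size, hidden, out_dim):
        super().__init__()
        self.embed = nn.Embedding(vocab_size, hidden)
        layer = nn.TransformerEncoderLayer(d_model=hidden, nhead=4, batch_first=True)
        self.encoder = nn.TransformerEncoder(layer, num_layers=2)
        self.head = nn.Linear(hidden, out_dim)

    def forward(self, ids):                       # ids: (batch, seq_len)
        return self.head(self.encoder(self.embed(ids)))

def electra_step(generator, discriminator, ids, mask_id, mask_prob=0.15, disc_weight=50.0):
    """One ELECTRA-style training step (replaced token detection)."""
    # 1. Mask a random subset of input tokens.
    mask = torch.rand(ids.shape, device=ids.device) < mask_prob
    masked = ids.clone()
    masked[mask] = mask_id
    # 2. The small generator (a masked LM) predicts the masked positions.
    gen_logits = generator(masked)                # (batch, seq_len, vocab)
    mlm_loss = F.cross_entropy(gen_logits[mask], ids[mask])
    # 3. Sample plausible replacements from the generator (no gradient flows back).
    with torch.no_grad():
        sampled = torch.distributions.Categorical(logits=gen_logits[mask]).sample()
    corrupted = ids.clone()
    corrupted[mask] = sampled
    # 4. The discriminator labels every token: original (0) or replaced (1).
    #    Sampled tokens that happen to match the original count as original.
    labels = (corrupted != ids).float()
    disc_logits = discriminator(corrupted).squeeze(-1)   # (batch, seq_len)
    disc_loss = F.binary_cross_entropy_with_logits(disc_logits, labels)
    # The discriminator loss is defined over all tokens, not just the masked
    # ones, which is the source of ELECTRA's training-data efficiency.
    return mlm_loss + disc_weight * disc_loss

# Usage on random token ids; only the discriminator is kept for fine-tuning.
vocab = 30522
generator = TinyTransformer(vocab, hidden=64, out_dim=vocab)
discriminator = TinyTransformer(vocab, hidden=64, out_dim=1)
ids = torch.randint(5, vocab, (2, 32))
loss = electra_step(generator, discriminator, ids, mask_id=4)
loss.backward()
```

Because every position contributes to the discriminator loss, a small model sees a much denser training signal per example than a masked LM, which is consistent with the abstract's observation that ELECTRA's gains are especially impressive for smaller models.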
Key words
biomedical text mining, language representation, deep learning