
PTT5: Pretraining and validating the T5 model on Brazilian Portuguese data

arXiv (Cornell University), 2020

Abstract
In natural language processing (NLP), there is a need for more resources in Portuguese, since much of the data used in state-of-the-art research is in other languages. In this paper, we pretrain a T5 model on the BrWac corpus, an extensive collection of web pages in Portuguese, and evaluate its performance against other Portuguese pretrained models and multilingual models on the sentence similarity and sentence entailment tasks. We show that our Portuguese pretrained models perform significantly better than the original T5 models. Moreover, we showcase the positive impact of using a Portuguese vocabulary.
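Since T5 casts every task as text-to-text, the sentence-entailment evaluation described above amounts to serializing each sentence pair into a prompt string and decoding a label string. The sketch below illustrates that framing only; the task prefix, field names, and label set are illustrative assumptions, not the paper's exact format.

```python
# Minimal sketch of T5-style text-to-text casting for sentence entailment.
# The prefix "entailment:" and the field markers are assumed for
# illustration; the actual PTT5 evaluation format may differ.

def to_t5_input(premise: str, hypothesis: str) -> str:
    """Serialize a Portuguese sentence pair into a single T5 input string."""
    return f"entailment: premissa: {premise} hipotese: {hypothesis}"

def to_t5_target(label: str) -> str:
    """The target is just the label rendered as text (hypothetical label set)."""
    assert label in {"entailment", "none", "paraphrase"}
    return label

# Example pair (invented for illustration):
src = to_t5_input("O gato dorme no sofá.", "Um animal está descansando.")
tgt = to_t5_target("entailment")
```

With inputs and targets in this form, the same pretrained encoder-decoder can be fine-tuned on entailment, similarity, or any other task without architectural changes, which is what makes the comparison across pretrained checkpoints straightforward.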
Key words
PTT5 model, data