Language model pre-training has proven to be useful in learning universal language representations. As a state-of-the-art pre-trained language model, BERT (Bidirectional Encoder Representations from Transformers) has achieved impressive results on many language understanding tasks. In this paper, we conduct exhaustive experiments to investigate different fine-tuning methods of BERT on the text classification task and provide a general solution for BERT fine-tuning. Finally, the proposed solution obtains new state-of-the-art results on eight widely-studied text classification datasets.
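To make the fine-tuning setup studied here concrete, the following is a minimal sketch of BERT fine-tuning for text classification using the Hugging Face `transformers` library; it is not the paper's own code, and the model name, label count, learning rate, and toy inputs are illustrative assumptions.

```python
# Minimal BERT fine-tuning sketch for text classification (illustrative;
# hyperparameters and data are assumptions, not the paper's exact setup).
import torch
from torch.optim import AdamW
from transformers import BertTokenizer, BertForSequenceClassification

tokenizer = BertTokenizer.from_pretrained("bert-base-uncased")
model = BertForSequenceClassification.from_pretrained(
    "bert-base-uncased", num_labels=2  # binary classification assumed
)

# Toy batch; a real run would iterate over one of the benchmark datasets.
texts = ["a gripping, beautifully shot film", "dull and far too long"]
labels = torch.tensor([1, 0])

inputs = tokenizer(texts, padding=True, truncation=True,
                   max_length=128, return_tensors="pt")

optimizer = AdamW(model.parameters(), lr=2e-5)  # assumed learning rate

model.train()
outputs = model(**inputs, labels=labels)  # cross-entropy loss on the [CLS] head
outputs.loss.backward()
optimizer.step()
optimizer.zero_grad()
```

In this standard setup, a classification head is placed on top of the final hidden state of the [CLS] token and the whole network is updated end-to-end; the fine-tuning strategies investigated in the paper are variations on this basic recipe.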