One of the most significant aims of natural language processing is automatic knowledge extraction. The volume of COVID-19 literature is growing by roughly ten thousand articles each month, which greatly complicates manual annotation and downstream activities. In this paper, we describe a system for biomedical multi-label topic classification. First, BERT is pre-trained on the PMC and PubMed biomedical corpora, which helps it capture deep semantic information. Next, we fine-tune the pre-trained BERT on COVID-19 literature from the LitCovid database. Finally, we use the resulting model to predict the topics of LitCovid scientific literature. On the BioCreative LitCovid corpus, our model achieves a micro F-score of 91.14%, which is 1.29 percentage points higher than BERT. Our model's F-scores are 1.33, 2.32, 0.27, 0.44, 6.91, and 14.14 percentage points higher than BERT's on the Treatment, Diagnosis, Prevention, Mechanism, Transmission, and Epidemic Forecasting topics, respectively, which demonstrates the potential and effectiveness of the proposed framework.
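Since the evaluation centers on the micro-averaged F-score over multiple topic labels per document, the metric can be sketched in plain Python. This is a minimal illustration, not the paper's evaluation code; the documents and label sets below are hypothetical.

```python
def micro_f1(y_true, y_pred):
    """Micro-averaged F-score for multi-label classification.

    y_true, y_pred: lists of label sets, one set per document.
    True/false positives and false negatives are pooled across
    all documents before computing precision and recall.
    """
    tp = fp = fn = 0
    for true, pred in zip(y_true, y_pred):
        tp += len(true & pred)   # correctly predicted labels
        fp += len(pred - true)   # predicted but not gold
        fn += len(true - pred)   # gold but missed
    precision = tp / (tp + fp) if tp + fp else 0.0
    recall = tp / (tp + fn) if tp + fn else 0.0
    if precision + recall == 0:
        return 0.0
    return 2 * precision * recall / (precision + recall)

# Hypothetical predictions over three documents with LitCovid-style topics.
y_true = [{"Treatment", "Diagnosis"}, {"Prevention"}, {"Mechanism", "Transmission"}]
y_pred = [{"Treatment"}, {"Prevention"}, {"Mechanism", "Epidemic Forecasting"}]
print(round(micro_f1(y_true, y_pred), 4))  # → 0.6667
```

Because true positives are pooled globally rather than averaged per label, micro F-score weights frequent topics (e.g. Prevention, Treatment) more heavily than rare ones such as Epidemic Forecasting.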