The 22nd Workshop on Biomedical Natural Language Processing and BioNLP Shared Tasks 2023
DOI: 10.18653/v1/2023.bionlp-1.1
Multi-Source (Pre-)Training for Cross-Domain Measurement, Unit and Context Extraction

Abstract: We present a cross-domain approach for automated measurement and context extraction based on pre-trained language models. We construct a multi-source, multi-domain corpus and train an end-to-end extraction pipeline. We then apply multi-source task-adaptive pre-training and fine-tuning to benchmark the cross-domain generalization capability of our model. Further, we conceptualize and apply a task-specific error analysis and derive insights for future work. Our results suggest that multi-source training leads to…
