When moving to new types, domains, or languages, we have to start from scratch by creating annotations and re-training the extraction models. In this part of the tutorial, we will cover recent advances in improving the transferability of IE, including (1) cross-lingual transfer by leveraging adversarial training (Chen et al., 2019a; Huang et al., 2019; Zhou et al., 2019), language-invariant representations (Huang et al., 2018a; Subburathinam et al., 2019) and resources (Tsai et al., 2016; Pan et al., 2017), pre-trained multilingual language models (Wu and Dredze, 2019; Conneau et al., 2020), as well as data projection (Ni et al., 2017; Yarmohammadi et al., 2021); (2) cross-type transfer, including zero-shot and few-shot IE, by learning prototypes (Huang et al., 2018b; Chan et al., 2019; Huang and Ji, 2020), reading definitions (Chen et al., 2020b; Logeswaran et al., 2019; Obeidat et al., 2019; Yu et al., 2022; Wang et al., 2022a), and answering questions (Levy et al., 2017; Lyu et al., 2021); and (3) transfer across different benchmark datasets (Xia and Van Durme, 2021). Finally, we will discuss progress on life-long learning for IE (Cao et al., 2020; Yu et al., 2021; Liu et al., 2022) to enable knowledge transfer across incrementally updated models.
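To make the QA-based transfer direction concrete, below is a minimal sketch of zero-shot relation extraction framed as question answering, in the spirit of Levy et al. (2017): the target relation is rephrased as a natural-language question so that an off-the-shelf reading-comprehension model can extract the argument with no relation-specific training data. It assumes the HuggingFace transformers library; the model name and question template are illustrative choices, not prescribed by the tutorial.

```python
# Zero-shot relation extraction via question answering (sketch).
# Assumes the HuggingFace `transformers` library is installed;
# the model checkpoint and template below are illustrative.
from transformers import pipeline

qa = pipeline("question-answering", model="deepset/roberta-base-squad2")

context = "Barack Obama was born in Honolulu, Hawaii."

# A relation such as place_of_birth is expressed as a question
# template filled with the entity of interest, so no labeled
# examples of the relation itself are required.
question = "Where was Barack Obama born?"

result = qa(question=question, context=context)
print(result["answer"])  # e.g. "Honolulu, Hawaii"
```

Because the relation is encoded entirely in the question, extending such a system to a new relation type only requires writing a new question template rather than collecting annotations and re-training, which is precisely the transferability benefit this line of work pursues.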