“…Since the models span several decades, they present an interesting view of words over time, useful for researchers interested in diachronic studies such as culturomics (Michel et al., 2011), semantic change (see Tahmasebi et al. (2018) and Kutuzov et al. (2018) for overviews), historical research (van Eijnatten & Ros, 2019; Hengchen et al., 2021a; Marjanen et al., 2020), etc. They can also be fed as input to more complex neural networks tackling downstream tasks aimed at historical data, such as OCR post-correction (Hämäläinen & Hengchen, 2019; Duong et al., 2020), or more linguistics-oriented problems (Budts, 2020). Since we release the whole models and not solely the learned vectors, these models can be further trained and specialised, or used by NLP researchers to compare different space alignment procedures.…”
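As a minimal sketch of what releasing the whole models (rather than only the learned vectors) makes possible, the snippet below continues training a released model on new, domain-specific text. It assumes the models are distributed in gensim Word2Vec format; the file path, the toy corpus, and the query word are placeholders, not part of the original release.

```python
from gensim.models import Word2Vec

# Hypothetical path to one of the released full models (assumption: gensim format).
model = Word2Vec.load("models/decade_1890s.w2v")

# A tiny placeholder corpus of tokenised sentences used to specialise the model.
new_sentences = [
    ["steam", "engine", "arrives", "at", "the", "station"],
    ["telegraph", "lines", "reach", "the", "northern", "towns"],
]

# Because the full model (with vocabulary and training state) is available,
# we can extend the vocabulary and keep training on the new material.
model.build_vocab(new_sentences, update=True)
model.train(new_sentences, total_examples=len(new_sentences), epochs=model.epochs)

# The updated vectors are then available as usual for downstream use.
print(model.wv["telegraph"][:5])
```

With only the exported vectors (e.g. a KeyedVectors file), the `build_vocab`/`train` step above would not be possible, which is the practical difference the passage points to.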