Besides the strong empirical results of pre-trained language models (PLMs) on many real-world problems, such as summarization (Zhang et al., 2020; Xiao et al., 2021a), question answering (Joshi et al., 2020; Oguz et al., 2021) and sentiment analysis (Adhikari et al., 2019), uncovering what kind of linguistic knowledge is captured by this new type of model has become a prominent research question in its own right. As part of this line of research, called BERTology (Rogers et al., 2020), researchers explore the amount of linguistic understanding encapsulated in PLMs, exposed either through external probing tasks (Raganato and Tiedemann, 2018; Zhu et al., 2020; Koto et al., 2021a) or through unsupervised methods (Wu et al., 2020; Pandia et al., 2021), in order to analyze syntactic structures (e.g., Hewitt and Manning, 2019; Wu et al., 2020), relations (Papanikolaou et al., 2019), ontologies (Michael et al., 2020) and, to a more limited extent, discourse-related behaviour (Zhu et al., 2020; Koto et al., 2021a; Pandia et al., 2021).