2022
DOI: 10.4000/ijcol.965
Probing Linguistic Knowledge in Italian Neural Language Models across Language Varieties

Abstract: In this paper, we present an in-depth investigation of the linguistic knowledge encoded by the transformer models currently available for the Italian language. In particular, we investigate how the complexity of two different architectures of probing models affects the performance of the Transformers in encoding a wide spectrum of linguistic features. Moreover, we explore how this implicit knowledge varies according to different textual genres and language varieties.
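The probing setup described in the abstract can be illustrated with a minimal sketch (all concrete choices below are assumptions, not the paper's exact configuration): fixed-size sentence representations are extracted from an Italian transformer (here the publicly available "dbmdz/bert-base-italian-cased" checkpoint, used purely as an example), and probes of different complexity, a linear model and a one-hidden-layer MLP, are trained to predict a linguistic feature from those representations. The feature used here (token count) is only a toy stand-in for the wide feature spectrum probed in the paper.

```python
# Hedged sketch of a probing experiment: linear vs. MLP probe trained on
# transformer sentence representations to predict a linguistic feature.
# Checkpoint, pooling strategy, and the probed feature are illustrative assumptions.
import torch
from transformers import AutoModel, AutoTokenizer
from sklearn.linear_model import LinearRegression
from sklearn.neural_network import MLPRegressor

tokenizer = AutoTokenizer.from_pretrained("dbmdz/bert-base-italian-cased")
model = AutoModel.from_pretrained("dbmdz/bert-base-italian-cased")

def sentence_embedding(sentence: str) -> torch.Tensor:
    """Mean-pool the last hidden layer as a fixed-size sentence representation."""
    inputs = tokenizer(sentence, return_tensors="pt", truncation=True)
    with torch.no_grad():
        hidden = model(**inputs).last_hidden_state  # (1, seq_len, hidden_size)
    return hidden.mean(dim=1).squeeze(0)

# Toy data: each sentence paired with one linguistic feature value
# (here simply its token count; the paper probes many more features).
sentences = ["Il gatto dorme.", "Il gatto che avevo visto ieri dorme sul divano."]
features = [len(tokenizer.tokenize(s)) for s in sentences]
X = torch.stack([sentence_embedding(s) for s in sentences]).numpy()

# Two probe architectures of different complexity, mirroring the comparison
# of probing-model architectures discussed in the abstract.
linear_probe = LinearRegression().fit(X, features)
mlp_probe = MLPRegressor(hidden_layer_sizes=(100,), max_iter=500).fit(X, features)
```

In an actual study the probes would be trained and evaluated on held-out sentences drawn from different genres and language varieties, so that differences in probe accuracy can be attributed to what the representations encode rather than to the probe itself.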

Cited by 1 publication (1 citation statement)
References 28 publications
“…Since previous work already showed the ability of pre-trained NLMs to outperform simple baselines (e.g. linear model trained using only sentence length as input feature) in the resolution of probing tasks [51], in this current paper we did not perform a direct comparison with a baseline. Nevertheless, since the focus of this work is on assessing the sensitivity of BERT to distorted feature values, control datasets can be viewed as a baseline themselves.…”
Section: Models (citation type: mentioning)
confidence: 99%
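The baseline referred to in the citation above, a linear model that uses only sentence length as its input feature, can be sketched as follows. The data and the exact form of the model are assumptions for illustration; only the idea (one input feature, linear fit) comes from the quoted text.

```python
# Hedged sketch of a sentence-length-only baseline for a probing task:
# a linear model predicting a probed feature from sentence length alone.
# The data values below are hypothetical.
import numpy as np
from sklearn.linear_model import LinearRegression

lengths = np.array([[4], [9], [15], [22]])   # single input feature: tokens per sentence
feature_values = np.array([2, 3, 5, 6])      # gold values of the probed feature

baseline = LinearRegression().fit(lengths, feature_values)
print(baseline.predict(np.array([[12]])))    # baseline prediction for a 12-token sentence
```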